Kubernetes - Network isolation with NetworkPolicy

As the number of applications you deploy on Kubernetes grows, you may want to isolate them from a network point of view. By default, Kubernetes does not provide any network isolation: all pods in all namespaces can talk to each other, even on ports you have never exposed. Yes, that's scary!

There are different approaches and tools to do network isolation; let's take a look at the NetworkPolicy.

Pierre Mavro

July 23, 2020 · 3 min read

#Kubernetes Networking plugin

Kubernetes provides a resource called NetworkPolicy that lets you define rules to allow or deny network traffic, much like a network firewall. On its own, this resource does nothing: to make it work, you first need to add a Kubernetes networking plugin (CNI) that implements it.

Some managed Kubernetes providers ship their own implementation, like GKE and AKS. Otherwise, you can use Calico, as recommended by AWS for EKS.

This page assumes you have installed a Kubernetes networking plugin that supports NetworkPolicy (see below).

#Installation

Here are the links to install the Kubernetes networking plugin for your cloud provider.
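
Once the plugin is installed, a quick sanity check is to verify that its pods are running. The namespace and label below assume a manifest-based Calico install in kube-system (operator-based installs use the calico-system namespace instead), so adjust them to your setup:

# Check that the Calico node agent is running on every node
kubectl get daemonset calico-node -n kube-system

# List the pods backing it
kubectl get pods -n kube-system -l k8s-app=calico-node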

#Configuration

Implementing network isolation follows the same rule of thumb as configuring a firewall: block all inbound traffic, then allow only what you need.

#Block all incoming traffic

In the example below, we will configure the production namespace to be isolated from all other namespaces, while still allowing the pods deployed within the production namespace to talk to each other.

First, let's create a namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    role: production
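
Assuming the manifest above is saved as namespace.yaml (the file name is just an example), you can create the namespace and check its label with:

kubectl apply -f namespace.yaml
kubectl get namespace production --show-labels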

Then, blocking incoming traffic for this namespace looks like this:

#...
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: no-inbound-traffic
  namespace: production
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels: {}

The rule is:

  • policyTypes=Ingress selects only incoming traffic
  • an empty set in podSelector/matchLabels applies the rule to all pods within the namespace
  • no ingress rules are defined, so everything is blocked
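
To check that the policy is enforced, you can try to reach a pod in production from another namespace. This is only a sketch: the nginx pod, the temporary busybox pod, and the <POD_IP> placeholder are illustrative assumptions.

# Start a test pod in production (illustrative target, listening on port 80)
kubectl run web --image=nginx -n production

# Get its pod IP
kubectl get pod web -n production -o wide

# From the default namespace, the connection should now fail (typically a timeout)
kubectl run test --rm -it --restart=Never --image=busybox -n default -- wget -qO- -T 5 http://<POD_IP>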

#Allow traffic between pods within the same namespace

To allow pods within the production namespace to communicate with each other, add another NetworkPolicy rule:

#...
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace-traffic
  namespace: production
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          role: production

The ingress rule indicates that we want to allow all traffic coming from any namespace carrying the label role=production (here, only production itself).
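
With both policies applied, pods inside production can reach each other again, while pods from other namespaces remain blocked. A quick check, reusing the hypothetical web pod and <POD_IP> placeholder from the previous example:

# From inside production, the request should succeed
kubectl run test --rm -it --restart=Never --image=busybox -n production -- wget -qO- -T 5 http://<POD_IP>

# From the default namespace, it should still fail
kubectl run test --rm -it --restart=Never --image=busybox -n default -- wget -qO- -T 5 http://<POD_IP>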

#Allow incoming traffic from outside

Let's now imagine that you have a web application listening on port 8000. To make it publicly accessible, we need to add one more rule:

#...
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-port-8000
  namespace: production
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: web-server
  ingress:
  - ports:
    - port: 8000

Instead of selecting all pods, I pick only those with the label app=web-server in the production namespace. The ingress rule then allows anybody to connect to port 8000 of my web server.
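
Note that the NetworkPolicy only opens the port: the pods still need to be exposed, typically through a Service and an Ingress or load balancer. A minimal sketch, assuming the web pods carry the label app=web-server and listen on port 8000 (the Service name and type are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web-server
  namespace: production
spec:
  type: LoadBalancer
  selector:
    app: web-server
  ports:
  - port: 80          # port exposed by the load balancer
    targetPort: 8000  # port the pods listen on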

#Block outgoing traffic

NetworkPolicy can also be used to prevent traffic from going out. For instance, we may not want an application to be able to query the AWS instance metadata service.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: disable-aws-metadata
  namespace: production
spec:
  policyTypes:
  - Egress
  podSelector:
    matchLabels: {}
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
          - 169.254.169.254/32
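
This policy allows egress to anywhere except the instance metadata IP, so regular traffic (including DNS) keeps working. To verify the block, you can query the metadata endpoint from a pod in production and expect a failure; the temporary busybox pod below is just an illustration:

# The request should time out instead of returning instance metadata
kubectl run test --rm -it --restart=Never --image=busybox -n production -- wget -qO- -T 5 http://169.254.169.254/latest/meta-data/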

#Going further

NetworkPolicy is useful for simple network traffic filtering, but it is not enough to have complete control over pod-to-pod communication. Filtering rules can only use pod selectors, namespace selectors, and IP blocks. Once a port is open, someone with bad intentions can still connect directly to the application port (here 8000), bypassing your Ingress resources and load balancer setup.

In a forthcoming post, we will see how to get fine-grained filtering with Istio, a service mesh that relies on sidecar proxies.
