Kubernetes Network (4)

Ingress and Network Policies

4. Ingress and Network Policy

At the end of the last article, when we talked about external requests, we mentioned the control over Layer 3 or Layer 4 (OSI model) traffic flow: Ingress and Network Policy. Therefore, in this part we will have a glance at them (just a glance…)

4.1 Ingress

First, let’s distinguish some concepts: LoadBalancer, Ingress, and Ingress Controller.

  • LoadBalancer: Provisions an external IP per service (direct external access).

  • Ingress: Provides a single external entry point (via a Load Balancer) to route traffic to multiple services using rules for HTTP/HTTPS traffic.

  • Ingress Controller: Acts as the actual router and passes each request to the right Service.
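
For contrast with Ingress below, here is a minimal sketch of a LoadBalancer Service; the name api-service, the label, and the ports are hypothetical. Every Service of this type gets its own dedicated external IP:

apiVersion: v1
kind: Service
metadata:
  name: api-service        # hypothetical name
spec:
  type: LoadBalancer       # the cloud provider provisions a dedicated external IP
  selector:
    app: api               # assumed label on the backend pods
  ports:
    - port: 80             # port exposed on the external IP
      targetPort: 8080     # port the pods actually listen on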

Therefore, the difference between Ingress and Ingress Controller is:

  1. Ingress is just a set of routing rules, not a traffic receiver.

  2. Ingress Controller

    • An Ingress Controller is responsible for handling and routing the actual traffic.

    • The Ingress Controller is usually exposed externally via a LoadBalancer Service or a NodePort (a sketch follows below).
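
A minimal sketch of such an exposing Service, assuming an NGINX-style controller whose pods carry the label app: ingress-nginx (the name, namespace, and label are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: ingress-controller   # hypothetical name
  namespace: ingress-nginx   # hypothetical namespace
spec:
  type: LoadBalancer         # could also be NodePort
  selector:
    app: ingress-nginx       # assumed label on the controller pods
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443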

There are different kinds of Ingress Controllers, which can be found in the official Kubernetes documentation.

4.1.1 Workflow

Traffic flows like this:

Client → LoadBalancer (external IP) → Ingress Controller → Service → Pods

  1. Client makes a request to the external IP or domain.

  2. The traffic hits the LoadBalancer in front of the Ingress Controller.

  3. The Ingress Controller checks the Ingress rules and routes the traffic to the correct Service.

  4. The Service routes the traffic to the appropriate Pods.
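
To trace this chain in a real cluster, you can inspect each object along the way; the namespace and resource names below are assumptions based on the examples in this article:

kubectl get svc -n ingress-nginx       # the LoadBalancer/NodePort Service in front of the controller
kubectl get ingress my-ingress         # the routing rules
kubectl get endpoints app1-service     # the Pod IPs the Service routes to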

4.1.2 Example

External traffic comes to a single external IP (the one provided by the LoadBalancer Service associated with the Ingress Controller). The Ingress Controller listens for the traffic. The Ingress Controller reads the rules defined in the Ingress resource and forwards traffic to the appropriate backend Services based on paths or hostnames.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app1
        pathType: Prefix   # pathType is required in networking.k8s.io/v1
        backend:
          service:
            name: app1-service
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80

The Ingress Controller (which has a LoadBalancer in front of it) receives traffic for https://example.com. Based on the path (/app1 or /app2), the Ingress Controller routes traffic to the correct Service (app1-service or app2-service).
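
Assuming example.com resolves to the LoadBalancer’s external IP, a quick check could look like this (the IP in the second command is a documentation placeholder):

curl http://example.com/app1
# or, without touching DNS, pin the hostname to the LoadBalancer IP:
curl --resolve example.com:80:203.0.113.10 http://example.com/app1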

4.1.3 Why Ingress

The most obvious difference is that without Ingress we need a NodePort or multiple LoadBalancers to expose services externally, whereas with Ingress a single LoadBalancer or NodePort is enough. This means we can use one external IP or one domain for multiple services.
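
As an illustration, the same consolidation works by hostname: a single Ingress (one external IP) can serve several domains, each routed to its own Service. The service names and hosts here are hypothetical:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-host-ingress
spec:
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80
  - host: app2.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80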

4.2 Network Policy

Another control of network traffic flow “at the IP address or port level (OSI layer 3 or 4)” is Network Policy. The concept is simple: it is about controlling incoming/outgoing traffic, so it’s better to check the details in the official document.

4.2.1 Basic parameters

# NetworkPolicy for space1 - allows egress to namespaces labeled app=source-ns, plus DNS (TCP port 53)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np
  namespace: space1
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              app: source-ns
    - ports:
        - protocol: TCP
          port: 53

  • podSelector: Selects the pods the policy applies to; only pods matched by podSelector are affected. An empty selector ({}) matches all pods in the namespace.

  • policyTypes: Specifies whether the policy contains Ingress rules, Egress rules, or both (here, only Egress).

  • ingress/egress:

    • Ingress / Egress is the incoming/outgoing traffic flow.

      • ingress: allows incoming traffic to the selected pods from the listed peers.

      • egress: allows outgoing traffic from the selected pods to the listed peers.

    • Under them there are podSelector/namespaceSelector, which specify which traffic can come in and which can go out; they are matched by labels. Here, for example, the matchLabels is app=source-ns (an ingress-direction sketch follows after this list).

    • Another restriction is about ports, where we can define which ports we want the incoming/outgoing traffic on.
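
For the ingress direction, the shape is symmetric. A minimal sketch, where the namespace, labels, and port are assumptions:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-ingress
  namespace: space2            # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: web                 # only pods labeled app=web are protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              app: source-ns   # allow traffic from namespaces with this label
      ports:
        - protocol: TCP
          port: 80             # and only on TCP port 80

Note that when from and ports appear in the same rule (as here), both conditions must match; when they are listed as separate rules (as in the egress example above), either one is enough.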

4.2.2 Ingress/Egress default policies

If no policy is set, all ingress/egress traffic is allowed in the namespace. Below are some default examples, which can be used as default settings to allow/deny all ingress/egress traffic or deny all traffic; more examples can be found in the official document. Here we just get a glimpse of what they look like.

# Allow all ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress

---
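# Deny all egress traffic (Egress is listed in policyTypes but no egress rules are defined)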
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress

---
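# Deny all ingress and egress traffic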
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
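
These manifests can be applied and checked like any other resource (the file name is hypothetical):

kubectl apply -f default-policies.yaml
kubectl get networkpolicy                    # list the policies in the current namespace
kubectl describe networkpolicy default-deny-all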

4.2.3 When testing DNS traffic

Here is one more thing for when you want to check DNS traffic: DNS traffic is typically initiated as an outgoing request from a pod to a DNS server (such as CoreDNS) within the cluster.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-policy
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              app: source-ns
    - ports:
        - protocol: TCP
          port: 53  # Allow outgoing TCP DNS traffic on port 53
        - protocol: UDP
          port: 53  # Allow outgoing UDP DNS traffic on port 53

Note: curl isn’t suitable for directly testing DNS traffic on port 53 because curl is primarily designed for HTTP/HTTPS requests, which typically run on ports 80 and 443. DNS, however, operates on port 53 with a different protocol, so curl won’t perform a proper DNS query or validate DNS connectivity on that port.

Useful Commands for DNS Testing on Port 53

  1. Using nslookup:

     kubectl exec -n space1 app1-0 -- nslookup google.com
    

    nslookup: Tests if a DNS server is reachable on port 53 and performs a DNS lookup.

  2. Using dig (UDP):

     kubectl exec -n space1 app1-0 -- dig google.com
    

    dig: Provides more detailed DNS query results and can test both UDP and TCP connections.

  3. Using dig for TCP (optional to test TCP over port 53):

     kubectl exec -n space1 app1-0 -- dig +tcp google.com
    

These commands will verify whether DNS traffic over port 53 is allowed, as per your NetworkPolicy.
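
To check the negative case as well, it may help to give dig a short timeout so a blocked port fails fast instead of hanging (the flag values are just one reasonable choice):

kubectl exec -n space1 app1-0 -- dig +time=2 +tries=1 google.com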

4.3 Reference

Here is a link to a website that graphically explains the concepts of Network Policy, which will give you a better understanding of the above concepts.