Kubernetes Network (3)

Simple Kubernetes Network Traffic Illustration

3. Examples

In this article we will use one control-plane node and two worker nodes (kube-worker1 and kube-worker2) to walk through the scenarios the previous article presented: container to container, pod to pod (intra-node and inter-node traffic), pod to service, and external requests.

First, let’s look at the whole structure:

  • kube-worker1:

    • pod01:

      • app: pod01

      • container01

        • nginx: 80
      • container02

        • netshoot
    • pod02

      • app: pod02
    • svc01

      • selector: app:pod01

      • port: 8081

    • svc02

      • selector: app: pod02

      • port: 8082

  • kube-worker2

    • pod03

      • app: pod03
    • svc03:

      • selector: app: pod03

      • port: 8083
    • svc04:

      • selector: app:pod03

      • NodePort:

        • port/targetPort: 80

        • nodePort: 30001
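As a rough sketch, pod01 and svc01 from the layout above could be defined like this (the image names and the `sleep` command are assumptions, not the exact manifests used here):

```yaml
# Sketch of pod01 (two containers sharing one network namespace) and svc01
apiVersion: v1
kind: Pod
metadata:
  name: pod01
  namespace: ns01
  labels:
    app: pod01
spec:
  containers:
    - name: container01
      image: nginx                # serves HTTP on port 80
      ports:
        - containerPort: 80
    - name: container02
      image: nicolaka/netshoot    # network debugging tools (curl, nslookup, ...)
      command: ["sleep", "infinity"]
---
apiVersion: v1
kind: Service
metadata:
  name: svc01
  namespace: ns01
spec:
  selector:
    app: pod01                    # routes to pods labeled app=pod01
  ports:
    - port: 8081                  # service port
      targetPort: 80              # container port on the pod
```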

Here are the running results:

lee@kube0:~$ kubectl get node -owide
NAME           STATUS   ROLES           AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
kube-worker1   Ready    <none>          124d   v1.30.2   192.168.1.106   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.6.33
kube-worker2   Ready    <none>          124d   v1.30.2   192.168.1.108   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.6.33
kube0          Ready    control-plane   124d   v1.30.2   192.168.1.105   <none>        Ubuntu 24.04 LTS   6.8.0-35-generic   containerd://1.6.33
lee@kube0:~$ kubectl get pod -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
pod03   1/1     Running   0          18m   10.244.1.2   kube-worker2   <none>           <none>
lee@kube0:~$ kubectl get pod -n ns01 -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
pod01   2/2     Running   0          18m   10.244.2.7   kube-worker1   <none>           <none>
pod02   1/1     Running   0          18m   10.244.2.8   kube-worker1   <none>           <none>
lee@kube0:~$ kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE    SELECTOR
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        124d   <none>
svc03        ClusterIP   10.109.163.44   <none>        8083/TCP       148m   app=pod03
svc04        NodePort    10.108.2.128    <none>        80:30001/TCP   40m    app=pod03
lee@kube0:~$ kubectl get svc -n ns01 -o wide
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
svc01   ClusterIP   10.101.68.123   <none>        8081/TCP       19m   app=pod01
svc02   ClusterIP   10.109.62.215   <none>        8082/TCP       19m   app=pod02

3.1 Container to Container

Containers in the same Pod share a network namespace, so they can communicate directly via localhost and exposed ports.

  • Two containers, container01 and container02, run in the same Pod pod01. If container01 exposes a service on port 80, container02 can access it via localhost:80.

    • container01 - nginx

      • port: 80
    • container02 - netshoot

Check Network:

From one container, we can use curl to test whether it can reach the other container via localhost.

kubectl exec -it pod01 -n ns01 -c container02 -- curl localhost:80
# the same curl localhost:80 also works from a shell inside container02
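This works because both containers share the pod's network namespace. A quick way to confirm it (assuming both images ship `hostname`) is to print each container's IP and see the same pod IP twice:

```shell
# both commands should print the same pod IP (10.244.2.7 here),
# since the two containers share one network namespace
kubectl exec pod01 -n ns01 -c container01 -- hostname -i
kubectl exec pod01 -n ns01 -c container02 -- hostname -i
```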

3.2 Pod to Pod

pod01 and pod02 run on the same node. pod01 can access pod02 directly via pod02's IP address.

Check Network:

# get the IP of pod02
kubectl get pod pod02 -n ns01 -o wide

# pod01 -> pod02
kubectl exec -it pod01 -n ns01 -c container02 -- curl 10.244.2.8:80

And we should see the nginx welcome page.
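Instead of copying the IP by hand, it can be captured with `-o jsonpath` and used in the same command (a small convenience sketch):

```shell
# fetch pod02's IP from its status and curl it from pod01 in one go
POD02_IP=$(kubectl get pod pod02 -n ns01 -o jsonpath='{.status.podIP}')
kubectl exec -it pod01 -n ns01 -c container02 -- curl "$POD02_IP:80"
```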

3.3 Pod to Service

pod01 sends traffic to a service svc02 (or svc03, svc04). Thanks to CoreDNS, a service can be accessed by its name (e.g. svc02) from pods inside the cluster.

Check Network:

# pod01 -> svc01 (same namespace)
kubectl exec -it pod01 -n ns01 -c container02 -- curl svc01:8081
# pod03 -> svc02 (cross namespace, cross node; the first request may take a moment)
kubectl exec -it pod03 -- curl svc02.ns01.svc.cluster.local:8082
# pod01 -> svc02 (same namespace)
kubectl exec -it pod01 -n ns01 -c container02 -- curl svc02:8082
kubectl exec -it pod01 -n ns01 -c container02 -- nslookup svc02
# pod01 -> svc03 (cross namespace, cross node)
kubectl exec -it pod01 -n ns01 -c container02 -- curl svc03.default.svc.cluster.local:8083
kubectl exec -it pod01 -n ns01 -c container02 -- nslookup svc03.default.svc.cluster.local
# pod01 -> svc04 (cross namespace, cross node)
kubectl exec -it pod01 -n ns01 -c container02 -- curl svc04.default.svc.cluster.local:80
kubectl exec -it pod01 -n ns01 -c container02 -- nslookup svc04.default.svc.cluster.local
# cross-namespace access needs the FQDN of a service:
# <service-name>.<namespace-name>.svc.cluster.local
# external -> svc04 (from outside the cluster)
# curl http://192.168.1.108:30001

Here we should notice that cross-namespace access needs the FQDN form of a service name: <service-name>.<namespace-name>.svc.cluster.local (within the same namespace the short name is enough).
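The reason short names only resolve inside the same namespace is the DNS search list that kubelet writes into each pod. It can be inspected directly (the exact nameserver IP varies per cluster; the content shown in the comments is typical, not output captured from this setup):

```shell
# the search list expands "svc02" to "svc02.ns01.svc.cluster.local"
# for pods running in ns01
kubectl exec pod01 -n ns01 -c container02 -- cat /etc/resolv.conf
# typical content:
#   search ns01.svc.cluster.local svc.cluster.local cluster.local
#   nameserver 10.96.0.10
```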

3.4 External request

We can also check the external request from outside the cluster, by pointing any browser (or curl) at NodeIP:NodePort, as follows:
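From a machine outside the cluster (the node IPs are shown in the `kubectl get node` output above), the same check can be done with curl:

```shell
# hit svc04's NodePort; kube-proxy forwards the request to pod03
curl http://192.168.1.108:30001
# the NodePort is open on every node, not just the one running the pod
curl http://192.168.1.106:30001
```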

3.5 Reference

The yaml file can be checked here.

Next

We will have a glance at Ingress (Gateway API) and Network Policy.