Server Fault Asked by Umesh on November 4, 2021
[Problem statement]
Ping from pod {busybox-minion1}, running on worker1 {named: minion1}, to pod {busybox-minion2} on worker2 {named: minion2} is not working.
This is a three-node cluster created on Google Cloud using the commands below, with the CentOS 7 image. All nodes and pods are up and running.
[Kubeadm init command]:
kubeadm init --apiserver-advertise-address=10.128.0.5 --pod-network-cidr=192.168.0.0/16
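For completeness, after `kubeadm init` finishes on the master, each worker must run a matching `kubeadm join`. If the original join command has been lost, it can be regenerated on the master (a sketch; the address and port come from the init command above, the token and hash are placeholders):

```shell
# Run on the master to print a fresh join command (token + CA cert hash):
kubeadm token create --print-join-command

# Then run the printed command on each worker (minion1, minion2), e.g.:
#   kubeadm join 10.128.0.5:6443 --token <token> \
#       --discovery-token-ca-cert-hash sha256:<hash>
```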
[Overlay Network]:
kubectl create -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
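Before testing pod-to-pod traffic, one way to confirm the overlay itself is healthy is to check that a `calico-node` pod is Running on every node (a diagnostic sketch using the label the Calico manifest applies):

```shell
# Each node should have exactly one Running calico-node pod:
kubectl -n kube-system get pods -l k8s-app=calico-node -o wide

# The DaemonSet's desired/ready counts should match the node count (3 here):
kubectl -n kube-system get daemonset calico-node
```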
[Pre-requisites]
[IP Tables settings]
sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo sysctl net.ipv4.ip_forward=1
sudo sysctl --system
echo "1" | sudo tee /proc/sys/net/ipv4/ip_forward
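Note that settings applied with `sysctl` or by writing to `/proc` do not survive a reboot. A common way to persist them (a sketch; the drop-in file name is arbitrary) is a file under `/etc/sysctl.d/`:

```shell
# Persist the bridge/forwarding settings across reboots:
cat <<'EOF' | sudo tee /etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Apply all sysctl configuration files now:
sudo sysctl --system
```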
[Working Setup details]
[user@k8master ~]$ kubectl version --short
Client Version: v1.18.6
Server Version: v1.18.6
[user@k8master ~]$
[user@k8master ~]$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8master Ready master 5h5m v1.18.6 10.128.0.5 <none> CentOS Linux 7 (Core) 3.10.0-1127.13.1.el7.x86_64 docker://1.13.1
minion1 Ready <none> 4h54m v1.18.6 10.128.0.6 <none> CentOS Linux 7 (Core) 3.10.0-1127.13.1.el7.x86_64 docker://1.13.1
minion2 Ready <none> 4h54m v1.18.6 10.128.0.7 <none> CentOS Linux 7 (Core) 3.10.0-1127.13.1.el7.x86_64 docker://1.13.1
[user@k8master ~]$
[user@k8master ~]$
[user@k8master ~]$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default busybox-minion1 1/1 Running 2 4h41m 192.168.34.1 minion1 <none> <none>
default busybox-minion2 1/1 Running 2 4h40m 192.168.179.194 minion2 <none> <none>
kube-system calico-kube-controllers-578894d4cd-zjbvj 1/1 Running 1 4h25m 192.168.179.195 minion2 <none> <none>
kube-system calico-node-fv4w9 1/1 Running 1 4h25m 10.128.0.6 minion1 <none> <none>
kube-system calico-node-l92x6 1/1 Running 1 4h25m 10.128.0.5 k8master <none> <none>
kube-system calico-node-s62gl 1/1 Running 1 4h25m 10.128.0.7 minion2 <none> <none>
kube-system coredns-66bff467f8-xwhl8 1/1 Running 1 5h5m 192.168.125.193 k8master <none> <none>
kube-system coredns-66bff467f8-zgvg2 1/1 Running 1 5h5m 192.168.125.194 k8master <none> <none>
kube-system etcd-k8master 1/1 Running 1 5h5m 10.128.0.5 k8master <none> <none>
kube-system kube-apiserver-k8master 1/1 Running 1 5h5m 10.128.0.5 k8master <none> <none>
kube-system kube-controller-manager-k8master 1/1 Running 1 5h5m 10.128.0.5 k8master <none> <none>
kube-system kube-proxy-49862 1/1 Running 1 4h55m 10.128.0.6 minion1 <none> <none>
kube-system kube-proxy-7c99z 1/1 Running 1 4h54m 10.128.0.7 minion2 <none> <none>
kube-system kube-proxy-dxq9d 1/1 Running 1 5h5m 10.128.0.5 k8master <none> <none>
kube-system kube-scheduler-k8master 1/1 Running 1 5h5m 10.128.0.5 k8master <none> <none>
[user@k8master ~]$
[user@k8master ~]$ kubectl exec -it busybox-minion1 -- ping 192.168.179.194
PING 192.168.179.194 (192.168.179.194): 56 data bytes
^C
--- 192.168.179.194 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1
[user@k8master ~]$
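Since the nodes are GCE instances and Calico v3.11 defaults to IPIP encapsulation, a common cause of exactly this symptom is the VPC firewall dropping IP-in-IP (protocol 4) traffic between nodes. A hedged diagnostic sketch (the rule name `allow-ipip` and the `10.128.0.0/20` source range are assumptions based on the 10.128.0.x node addresses; adjust to the actual VPC):

```shell
# On a node, check Calico's peer status (requires calicoctl to be installed);
# peers stuck in a non-Established state suggest blocked node-to-node traffic:
sudo calicoctl node status

# Allow IP-in-IP (protocol 4) between the nodes in the GCE firewall:
gcloud compute firewall-rules create allow-ipip \
    --allow ipip \
    --network default \
    --source-ranges 10.128.0.0/20
```

Alternatively, switching Calico to VXLAN encapsulation (UDP) avoids the IPIP protocol restriction entirely.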
[Image for reference]