Kubernetes workshop: error notes


Problem details

root@master:/home/barkah# kubeadm init \
>   --cri-socket /run/containerd/containerd.sock \
>   --pod-network-cidr=192.168.0.0/16
W0110 07:35:17.463086   12614 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/run/containerd/containerd.sock". Please update your configuration!
I0110 07:35:18.243766   12614 version.go:256] remote version is much newer: v1.29.0; falling back to: stable-1.28
[init] Using Kubernetes version: v1.28.5
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR CRI]: container runtime is not running: output: time="2024-01-10T07:35:18Z" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Solution

root@master:/home/barkah# rm /etc/containerd/config.toml
root@master:/home/barkah# systemctl restart containerd
root@master:/home/barkah# kubeadm init
I0110 07:36:38.348918   12801 version.go:256] remote version is much newer: v1.29.0; falling back to: stable-1.28
[init] Using Kubernetes version: v1.28.5
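Deleting /etc/containerd/config.toml works because the config shipped by some packages disables containerd's CRI plugin, and with no config file containerd falls back to its built-in defaults, where CRI is enabled. A sketch of an alternative fix (assuming containerd 1.6+ and a kubelet using the systemd cgroup driver, which is the common kubeadm setup) is to regenerate a full default config instead of deleting it:

```shell
# Regenerate containerd's built-in default configuration, which has
# the CRI plugin enabled (the packaged config often disables it).
containerd config default > /etc/containerd/config.toml

# Switch the runc runtime to the systemd cgroup driver so containerd
# matches the kubelet's cgroup driver.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

systemctl restart containerd

# Verify the CRI v1 endpoint responds before re-running kubeadm init.
crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
```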

References

https://k21academy.com/docker-kubernetes/container-runtime-is-not-running/ 


Problem details

The cluster cannot be initialized: `--apiserver-advertise-address` rejects a hostname, and a previous configuration already exists on the node.

root@master:/home/barkah# kubeadm init --apiserver-advertise-address=kubemaster  --apiserver-cert-extra-sans=kubemaster --pod-network-cidr=192.168.0.0/16
couldn't use "kubemaster" as "apiserver-advertise-address", must be ipv4 or ipv6 address
To see the stack trace of this error execute with --v=5 or higher
root@master:/home/barkah# ping kubemaster
PING kubemaster (192.168.56.102) 56(84) bytes of data.
64 bytes from kubemaster (192.168.56.102): icmp_seq=1 ttl=64 time=0.672 ms
64 bytes from kubemaster (192.168.56.102): icmp_seq=2 ttl=64 time=0.060 ms
^C
--- kubemaster ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1009ms
rtt min/avg/max/mdev = 0.060/0.366/0.672/0.306 ms
root@master:/home/barkah#
root@master:/home/barkah#
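As the first error shows, `--apiserver-advertise-address` only accepts a literal IPv4 or IPv6 address, not a hostname, even though the name resolves fine. One way to avoid hard-coding the IP is to resolve the hostname first (a sketch; "kubemaster" is this lab's master hostname from /etc/hosts):

```shell
# Resolve the hostname to an IP, since --apiserver-advertise-address
# only accepts IPv4/IPv6 literals.
MASTER_IP=$(getent hosts kubemaster | awk '{print $1}')
echo "Resolved kubemaster to ${MASTER_IP}"

# --apiserver-cert-extra-sans does accept hostnames, so keeping
# "kubemaster" there puts the name in the API server certificate.
kubeadm init \
  --apiserver-advertise-address="${MASTER_IP}" \
  --apiserver-cert-extra-sans=kubemaster,"${MASTER_IP}" \
  --pod-network-cidr=192.168.0.0/16
```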
root@master:/home/barkah# kubeadm init --apiserver-advertise-address=192.168.56.102  --apiserver-cert-extra-sans=192.168.56.102 --pod-network-cidr=192.168.0.0/16
I0110 11:35:49.070365  100197 version.go:256] remote version is much newer: v1.29.0; falling back to: stable-1.28
[init] Using Kubernetes version: v1.28.5
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10259]: Port 10259 is in use
        [ERROR Port-10257]: Port 10257 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR Port-2379]: Port 2379 is in use
        [ERROR Port-2380]: Port 2380 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Solution

Reset the existing configuration, then create a fresh one by running init again:

root@master:/home/barkah# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0110 11:41:35.748239  102680 preflight.go:56] [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W0110 11:41:43.169611  102680 cleanupnode.go:99] [reset] Failed to remove containers: [failed to stop running pod f55d30b43620502722138fca94c49b1034338d2ac0f12145c63c857ea959cfad: output: E0110 11:41:42.578570  103352 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f55d30b43620502722138fca94c49b1034338d2ac0f12145c63c857ea959cfad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f55d30b43620502722138fca94c49b1034338d2ac0f12145c63c857ea959cfad"
time="2024-01-10T11:41:42Z" level=fatal msg="stopping the pod sandbox \"f55d30b43620502722138fca94c49b1034338d2ac0f12145c63c857ea959cfad\": rpc error: code = Unknown desc = failed to destroy network for sandbox \"f55d30b43620502722138fca94c49b1034338d2ac0f12145c63c857ea959cfad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
: exit status 1, failed to stop running pod 2adf001403567ea3925a3434169d87185613c157fbefa4573532b32ff702ea15: output: E0110 11:41:43.167470  103479 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2adf001403567ea3925a3434169d87185613c157fbefa4573532b32ff702ea15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2adf001403567ea3925a3434169d87185613c157fbefa4573532b32ff702ea15"
time="2024-01-10T11:41:43Z" level=fatal msg="stopping the pod sandbox \"2adf001403567ea3925a3434169d87185613c157fbefa4573532b32ff702ea15\": rpc error: code = Unknown desc = failed to destroy network for sandbox \"2adf001403567ea3925a3434169d87185613c157fbefa4573532b32ff702ea15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
: exit status 1]
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
root@master:/home/barkah# rm -rf /etc/cni/net.d
root@master:/home/barkah# kubeadm init --apiserver-advertise-address=192.168.56.102  --apiserver-cert-extra-sans=192.168.56.102 --pod-network-cidr=192.168.0.0/16
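As the `kubeadm reset` output warns, reset does not clean up the CNI configuration, iptables/IPVS state, or kubeconfig files. A sketch of that manual cleanup, collected in one place (run as root; note that `iptables -F` flushes all filter rules, so reapply any firewall rules you need afterwards):

```shell
# Manual cleanup that kubeadm reset leaves to the operator.
rm -rf /etc/cni/net.d                 # CNI configuration
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear 2>/dev/null || true   # only if the cluster used IPVS
rm -f "$HOME/.kube/config"            # stale kubeconfig for this user
```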