tjtjtj's notes

Notes for my own reference.

Learning Kubernetes: rebuilding the cluster

While experimenting with Knative the cluster ended up in a complete mess, so these are my notes from rebuilding it.

Delete the worker nodes on the master

$ kubectl get node
NAME   STATUS     ROLES    AGE   VERSION
kb1    Ready      master   14d   v1.13.2
kb2    Ready      <none>   13d   v1.13.2
kb3    Ready      <none>   13d   v1.13.2

$ kubectl delete node kb2
node "kb2" deleted

$ kubectl delete node kb3
node "kb3" deleted

$ kubectl get node
NAME   STATUS   ROLES    AGE   VERSION
kb1    Ready    master   14d   v1.13.2
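
Aside: since the whole cluster was being rebuilt anyway, deleting the nodes directly was fine here. If the nodes had still carried workloads worth evicting, the usual sequence would be to drain first and then delete; a sketch:

$ kubectl drain kb2 --ignore-daemonsets --delete-local-data
$ kubectl delete node kb2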

kubeadm reset on the workers

$ sudo kubeadm reset

[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] no etcd config found. Assuming external etcd
[reset] please manually reset etcd to prevent further issues
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

kubeadm reset on the master too

$ sudo kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
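
As the warning says, reset leaves iptables/IPVS state alone, so to start truly clean the suggested cleanup has to be run manually on every node (the ipvsadm line is only needed if kube-proxy was set up in IPVS mode; the default is iptables):

$ sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
$ sudo ipvsadm --clear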

kubeadm init on the master

From here it's the usual flow. Note down the join token from the output.

$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
:
:
  kubeadm join 192.168.0.101:6443 --token nkm1pm.4wmal1abbb9j366m --discovery-token-ca-cert-hash sha256:658c4ff9233cab17542af787a0ae02002a8584f4badbcd2d30742ad8b59825ed
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown core:core $HOME/.kube/config
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-10T23:35:51Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:00:57Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

$ kubectl get nodes
NAME   STATUS     ROLES    AGE   VERSION
kb1    NotReady   master   61s   v1.13.2

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                          READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-6n9c8      0/1     Pending   0          76s
kube-system   coredns-86c58d9df4-jbldk      0/1     Pending   0          76s
kube-system   etcd-kb1                      1/1     Running   0          24s
kube-system   kube-apiserver-kb1            1/1     Running   0          16s
kube-system   kube-controller-manager-kb1   1/1     Running   0          35s
kube-system   kube-proxy-44cb5              1/1     Running   0          76s
kube-system   kube-scheduler-kb1            1/1     Running   0          21s

coredns stays stuck in Pending.

Running "kubectl describe pod coredns-..." showed the error "0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate." So I read up on tolerations/taints, and lost time because the model is the reverse of the image I had in my head (the pod doing the dirty work = taint, the node taking on the dirty role = toleration); I even tried things like tainting CriticalAddonsOnly.

Reference (Japanese): Kubernetesのtaintsとtolerationsについて https://qiita.com/sheepland/items/8fedae15e157c102757f
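
For the record, the direction is: taints sit on nodes, and tolerations sit on pods. What a node carries can be checked directly; at that point the output would have looked something like this (the master taint comes from kubeadm, and the not-ready taint, which a node gets while its pod network is missing, is the likely one coredns could not tolerate):

$ kubectl describe node kb1 | grep -A1 Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule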

But that was not it at all. The real story was this:

"This is expected and part of the design. kubeadm is network provider-agnostic, so the admin should install the pod network solution of choice. You have to install a Pod Network before CoreDNS may be deployed fully. Hence the Pending state before the network is set up." https://kubernetes.io/docs/setup/independent/troubleshooting-kubeadm/#coredns-or-kube-dns-is-stuck-in-the-pending-state

In other words, I had simply skipped this step:

$ kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
$ kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
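
One caveat: the calico.yaml manifest ships with its own default pod CIDR (CALICO_IPV4POOL_CIDR, 192.168.0.0/16 in v3.3), while kubeadm init above used --pod-network-cidr=10.244.0.0/16. Strictly speaking the manifest should be edited to match before applying; a sketch:

$ curl -sO https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
$ sed -i -e 's?192.168.0.0/16?10.244.0.0/16?g' calico.yaml
$ kubectl apply -f calico.yaml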

Now everything is Running.

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                          READY   STATUS    RESTARTS   AGE
kube-system   calico-node-hdjxz             2/2     Running   0          23s
kube-system   coredns-86c58d9df4-6n9c8      1/1     Running   0          5m39s
kube-system   coredns-86c58d9df4-jbldk      1/1     Running   0          5m39s
kube-system   etcd-kb1                      1/1     Running   0          4m47s
kube-system   kube-apiserver-kb1            1/1     Running   0          4m39s
kube-system   kube-controller-manager-kb1   1/1     Running   0          4m58s
kube-system   kube-proxy-44cb5              1/1     Running   0          5m39s
kube-system   kube-scheduler-kb1            1/1     Running   0          4m44s
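
With the pod network up, the master also flips from NotReady to Ready (it shows Ready in the final check below as well):

$ kubectl get node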

kubeadm join on the workers

$ sudo kubeadm join 192.168.0.101:6443 --token nkm1pm.4wmal1abbb9j366m --discovery-token-ca-cert-hash sha256:658c4ff9233cab17542af787a0ae02002a8584f4badbcd2d30742ad8b59825ed
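
The token printed by kubeadm init has a default TTL of 24 hours; if it expires or the note gets lost, a fresh join command can be generated on the master:

$ sudo kubeadm token create --print-join-command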

Check the cluster on the master

$ kubectl get node
NAME   STATUS     ROLES    AGE   VERSION
kb1    Ready      master   23m   v1.13.2
kb2    Ready      <none>   55s   v1.13.2
kb3    NotReady   <none>   4s    v1.13.2
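
kb3 joined only seconds ago, so NotReady is expected; it turns Ready once calico-node comes up on it, which can be watched with:

$ kubectl get node -w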