Change serviceSubnet CIDR
Change the IP range of your services.
The following process has a known problem: after everything is configured, pods still come up with the old IP as the DNS nameserver in /etc/resolv.conf.
Since I have not found a solution for that yet, I had to reset the entire cluster with kubeadm reset and init it again.

Scenario

You have your cluster up and running services on, let's say, CIDR 10.10.0.0/24, and you want to change it to 10.5.0.0/24.

Double check if your CIDR conflicts

It is a good idea to check that your new CIDR does not overlap with anything else in your network, for example your pod subnet.
To do so, run the following Python code:
import ipaddr
# Your pod subnet
n1 = ipaddr.IPNetwork('10.244.0.0/16')
# Your new service subnet
n2 = ipaddr.IPNetwork('10.5.0.0/24')
n1.overlaps(n2)
Run pip install ipaddr first if you do not have the module.
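If you would rather not install an extra package, the same check can be done with Python 3's built-in ipaddress module (a minimal one-liner sketch; adjust the two networks to yours):
python3 -c "import ipaddress; print(ipaddress.ip_network('10.244.0.0/16').overlaps(ipaddress.ip_network('10.5.0.0/24')))"
It prints False when the networks do not overlap.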

Dump your current cluster config

SSH to your master node and run:
cd /etc
kubeadm config view > kubeadmconf-2019-06-07.yml
cp kubeadmconf-2019-06-07.yml kubeadmconf-2019-06-07-NEW.yml
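Note: on newer kubeadm releases, kubeadm config view was deprecated and later removed. If it is not available, the same ClusterConfiguration can be pulled from the kubeadm-config ConfigMap instead (a sketch, assuming working kubectl access):
kubectl -n kube-system get configmap kubeadm-config \
  -o jsonpath='{.data.ClusterConfiguration}' > kubeadmconf-2019-06-07.yml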

Add your new service CIDR

Edit your new config file.
cd /etc
nano kubeadmconf-2019-06-07-NEW.yml
Change the serviceSubnet setting:
...
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  # FROM serviceSubnet: 10.10.0.0/24
  serviceSubnet: 10.5.0.0/24
...

Update certificates

At the time of this writing (2019-06-07), "kubeadm upgrade" does not support updating API server certSANs: https://github.com/kubernetes/kubeadm/issues/1447
To update them manually, follow the steps below.

Check your current certificate

First of all, check your current certificate:
openssl x509 \
  -in /etc/kubernetes/pki/apiserver.crt \
  -text -noout
You will see a section like this one:
...
X509v3 Subject Alternative Name:
    DNS:k8s-non-prod-001-master-001, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:k8s-non-prod-001-master.brazilsouth.cloudapp.azure.com, IP Address:10.10.0.1, IP Address:10.0.0.4, IP Address:10.0.0.4, IP Address:191.234.160.212, IP Address:191.238.210.88
...
Check all "IP Address" entries. You will notice your new service CIDR IPs are not there yet.
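If your OpenSSL is 1.1.1 or newer, you can also print just the SAN extension instead of the full dump (a convenience, not required for the procedure):
openssl x509 \
  -in /etc/kubernetes/pki/apiserver.crt \
  -noout -ext subjectAltName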

Delete the API server certificates

mkdir /backup
cp /etc/kubernetes/pki/apiserver.* /backup/
rm /etc/kubernetes/pki/apiserver.*

Generate new certificates

kubeadm init phase certs apiserver \
  --config=/etc/kubeadmconf-2019-06-07-NEW.yml
Check your new certificate and note that the first IP of your new service CIDR is now present, in this case IP Address:10.5.0.1.

Restart services

Reload the systemd daemon and restart the kubelet.
systemctl daemon-reload
systemctl restart kubelet
Restart the kube-apiserver Docker container.
docker ps | grep apiserver
# Get the container name, for example:
docker restart k8s_kube-apiserver_kube-apiserver-k8s-non-prod-001-master-001_kube-system_fc4ca5d2a58c3647572c064b74f7c5a4_0
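Since kube-apiserver runs as a static pod, an alternative that does not depend on the Docker CLI (useful on other container runtimes) is to move its manifest out of the static pod directory and back; the kubelet then recreates the pod. A sketch:
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
# Give the kubelet a few seconds to tear down the old pod
sleep 5
mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/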

Test it

You can test the new certificate by running:
echo | openssl s_client -connect 10.0.0.4:6443 2>/dev/null | openssl x509 -noout -text
Replace 10.0.0.4 with your server IP.
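To go straight to the SAN list served by the live API server, pipe it through grep (just a convenience):
echo | openssl s_client -connect 10.0.0.4:6443 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 'Subject Alternative Name'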

Redeploy kube-dns

Now you have two options. Both will throw an error, and both should end with the same result.
Long story short, kubeadm upgrade apply will recreate the kube-dns service for you.

1) Delete kube-dns BEFORE upgrading the cluster.

kubectl -n kube-system delete service kube-dns
kubeadm upgrade apply \
  --config /etc/kubeadmconf-2019-06-07-NEW.yml
You will see this error:
[upgrade/apply] FATAL: failed to retrieve the current etcd version: context deadline exceeded
Just run the upgrade command again.
kubeadm upgrade apply \
  --config /etc/kubeadmconf-2019-06-07-NEW.yml

2) Delete kube-dns AFTER upgrading the cluster.

kubeadm upgrade apply \
  --config /etc/kubeadmconf-2019-06-07-NEW.yml
You will see this error:
[upgrade/postupgrade] FATAL post-upgrade error: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "10.10.0.10": field is immutable
Delete the service and run the upgrade command again.
kubectl -n kube-system delete service kube-dns
kubeadm upgrade apply \
  --config /etc/kubeadmconf-2019-06-07-NEW.yml
Check the service:
kubectl -n kube-system get service kube-dns
Your kube-dns CLUSTER-IP should be in your new services CIDR.
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.5.0.10    <none>        53/UDP,53/TCP,9153/TCP   83m
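As noted at the top, already-running pods may still carry the old nameserver in /etc/resolv.conf. A quick way to see what a freshly created pod gets (a throwaway test pod; the image and name are arbitrary):
kubectl run dnstest --image=busybox --restart=Never --rm -it -- cat /etc/resolv.conf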

Fix kubelet ConfigMap

I am not completely sure this is the right way to do it yet.
Edit the kubelet ConfigMap (the name suffix matches your cluster's minor version):
kubectl -n kube-system edit configmap kubelet-config-1.16
Replace the old DNS IP with your new DNS IP.
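For reference, the field to change inside the embedded kubelet configuration is clusterDNS; it should list the new kube-dns ClusterIP (a fragment using this article's example values):
clusterDNS:
- 10.5.0.10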
Then run:
kubeadm upgrade node
systemctl restart kubelet

Redeploy kubernetes service

This service exposes the API.
Delete it and it should be recreated automatically:
kubectl -n default delete service kubernetes
sleep 3
kubectl -n default get service kubernetes
The kubernetes service CLUSTER-IP should be in your new services CIDR.
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.5.0.1     <none>        443/TCP   85m

Redeploy the ingress

If you do not have an ingress, go to the next section.
Delete and redeploy it.
kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
sleep 3
kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
sleep 3
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
sleep 3
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
This example is for Azure.
Check the service.
kubectl -n ingress-nginx get service ingress-nginx
The CLUSTER-IP should be in your new services CIDR.
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.5.0.77    191.238.222.97   80:31022/TCP,443:31035/TCP   67m

Redeploy the dashboard

kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
sleep 3
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Run kubectl proxy and test it.
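For example (this proxy URL is the standard path for dashboard v1.10.x; adjust it if your deployment differs):
kubectl proxy
# Then open in a browser:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/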

Redeploy helm and tiller
kubectl -n kube-system delete service tiller-deploy
sleep 3
# Trick to force an upgrade: init once with a dummy service account...
helm init --upgrade --service-account=test
sleep 3
# ...then again with the real one.
helm init --upgrade --service-account=tiller
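Then confirm tiller-deploy was recreated with a ClusterIP in the new CIDR:
kubectl -n kube-system get service tiller-deploy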

Redeploy all your services

Back up and restore all your services, so they get a new IP in your new CIDR.
kubectl -n YOUR-NAMESPACE get service YOUR-SERVICE -o yaml > YOUR-SERVICE.yml
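If you have many services, a loop like this dumps them all in one go (a hedged sketch; the file naming scheme is arbitrary):
# Dump every Service manifest in every namespace
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  for svc in $(kubectl -n "$ns" get services -o jsonpath='{.items[*].metadata.name}'); do
    kubectl -n "$ns" get service "$svc" -o yaml > "${ns}-${svc}.yml"
  done
done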
Edit the manifest and delete all immutable fields, like uid.
Delete the clusterIP field.
Delete the status section.
Then delete and recreate your service.
kubectl -n YOUR-NAMESPACE delete service YOUR-SERVICE
kubectl apply -f YOUR-SERVICE.yml
Before deleting anything, back up everything else you need, like load balancer info.