DevOps Buzz
Change serviceSubnet CIDR
Change the IP range of your services.
The following process has a problem: after everything is configured, the pods come up with the old IP as the DNS nameserver in /etc/resolv.conf.
Since I still have not found a solution for that, I had to reset the entire cluster with kubeadm reset and init it again.


You have your cluster up and running services on a given CIDR, and you want to change it to a new one.

Double check if your CIDR conflicts

It is a good idea to check that your new CIDR does not overlap anything else in your network, for example, your pod subnet.
To do so, run the following Python code:
import ipaddr
# Your pod subnet
n1 = ipaddr.IPNetwork('')
# Your new service subnet
n2 = ipaddr.IPNetwork('')
# True means the subnets overlap and you need to pick another CIDR
print(n1.overlaps(n2))
pip install ipaddr if you need it.
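If you prefer to avoid the third-party ipaddr package, Python 3's standard ipaddress module can do the same overlap check. The subnets below are hypothetical examples; substitute your own values:

```shell
# Example subnets only; substitute your real pod subnet and new service subnet.
python3 - <<'EOF'
import ipaddress

pod_subnet = ipaddress.ip_network('10.244.0.0/16')         # example pod subnet
new_service_subnet = ipaddress.ip_network('10.96.0.0/12')  # example new service subnet

# overlaps() returns True when the two networks share any addresses
print('OVERLAP' if pod_subnet.overlaps(new_service_subnet) else 'OK')
EOF
# prints OK for these example values
```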

Dump your current cluster config

SSH to your master node and run:
cd /etc
kubeadm config view > kubeadmconf-2019-06-07.yml
cp kubeadmconf-2019-06-07.yml kubeadmconf-2019-06-07-NEW.yml

Add your new service CIDR

Edit your new config file.
cd /etc
nano kubeadmconf-2019-06-07-NEW.yml
Change the serviceSubnet value in the networking section, keeping the old value as a comment:
networking:
  dnsDomain: cluster.local
  # FROM serviceSubnet:
  serviceSubnet:
Update certificates

At the time of this writing (2019-06-07), "kubeadm upgrade" does not support updating the API server certSANs, so the certificates have to be regenerated manually.
To do so, follow the steps below.

Check your current certificate

First of all, check your current certificate:
openssl x509 \
-in /etc/kubernetes/pki/apiserver.crt \
-text -noout
You will see a section like this one:
X509v3 Subject Alternative Name:
DNS:k8s-non-prod-001-master-001, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:, IP Address:, IP Address:, IP Address:, IP Address:
Check all "IP Address" sections. You will note your new service CIDR IPs are not there yet.
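To print only the SAN section, recent OpenSSL (1.1.1+) supports the -ext option. The demo below generates a throwaway self-signed certificate with hypothetical names and IPs so you can see the output shape without touching the cluster; on the master, point -in at /etc/kubernetes/pki/apiserver.crt instead.

```shell
# Generate a throwaway cert with a SAN (demo values only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=kube-apiserver-demo" \
  -addext "subjectAltName=DNS:kubernetes,IP:10.96.0.1"

# Print only the Subject Alternative Name extension.
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```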

Back up and delete the old certificate

mkdir /backup
cp /etc/kubernetes/pki/apiserver.* /backup/
rm /etc/kubernetes/pki/apiserver.*

Generate new certificates

kubeadm init phase certs apiserver \
--config /etc/kubeadmconf-2019-06-07-NEW.yml
Check your new certificate again: the first IP of your new service CIDR should now be listed as an IP Address entry.
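The first IP of the service CIDR is the ClusterIP that the kubernetes service will get, and it can be computed for any CIDR with the standard ipaddress module. The subnet below is a hypothetical example:

```shell
# Hypothetical example CIDR; substitute your new service subnet.
python3 -c "import ipaddress; print(next(ipaddress.ip_network('10.96.0.0/12').hosts()))"
# prints 10.96.0.1
```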

Restart services

Restart daemon and kubelet.
systemctl daemon-reload
systemctl restart kubelet
Restart kube-apiserver docker container.
docker ps | grep apiserver
# get the container name from the output and restart it, for example:
docker restart k8s_kube-apiserver_kube-apiserver-k8s-non-prod-001-master-001_kube-system_fc4ca5d2a58c3647572c064b74f7c5a4_0

Test it

You can test the new certificate by running:
openssl s_client -connect | openssl x509 -noout -text
Pass your API server address (host:port) to -connect.


Redeploy kube-dns

Now you have two options. Both will throw an error, and both should end with the same result.
Long story short, kubeadm upgrade apply will recreate the kube-dns service for you.

1) Delete kube-dns BEFORE upgrading the cluster.

kubectl -n kube-system delete service kube-dns
kubeadm upgrade apply \
--config /etc/kubeadmconf-2019-06-07-NEW.yml
You will see this error:
[upgrade/apply] FATAL: failed to retrieve the current etcd version: context deadline exceeded
Just run the upgrade command again.
kubeadm upgrade apply \
--config /etc/kubeadmconf-2019-06-07-NEW.yml
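Since the fix for the first error is simply to run the same command again, the second pass can be scripted with a small generic retry helper. This is a sketch; the kubeadm call is shown as a comment because it only makes sense on the master, using the config path from the steps above:

```shell
# Generic retry helper: run a command up to $1 times, returning 0 on first success.
retry() {
  local attempts=$1; shift
  local i
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0
    echo "attempt $i of $attempts failed, retrying..." >&2
  done
  return 1
}

# Example usage on the master:
# retry 2 kubeadm upgrade apply --config /etc/kubeadmconf-2019-06-07-NEW.yml
```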

2) Delete kube-dns AFTER upgrading the cluster.

kubeadm upgrade apply \
--config /etc/kubeadmconf-2019-06-07-NEW.yml
You will see this error:
[upgrade/postupgrade] FATAL post-upgrade error: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "": field is immutable
Delete the service and run the upgrade command again.
kubectl -n kube-system delete service kube-dns
kubeadm upgrade apply \
--config /etc/kubeadmconf-2019-06-07-NEW.yml
Check the service:
kubectl -n kube-system get service kube-dns
The kube-dns CLUSTER-IP should now be inside your new service CIDR.
kube-dns ClusterIP <none> 53/UDP,53/TCP,9153/TCP 83m
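If you want to check this programmatically, the standard ipaddress module can test membership. Both values below are hypothetical examples; substitute the ClusterIP reported by kubectl and your new CIDR:

```shell
# Hypothetical ClusterIP and CIDR; substitute the values from kubectl.
python3 -c "import ipaddress; print(ipaddress.ip_address('10.96.0.10') in ipaddress.ip_network('10.96.0.0/12'))"
# prints True
```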

Fix kubelet ConfigMap

I am not completely sure this is the right way to do it yet.
Edit the kubelet ConfigMap:
kubectl -n kube-system edit cm kubelet-config-1.16
Replace the old DNS IP in clusterDNS with your new DNS IP.
Then run:
kubeadm upgrade node
systemctl restart kubelet

Redeploy kubernetes service

This service exposes the API.
Delete it and it should be recreated automatically:
kubectl -n default delete service kubernetes
sleep 3
kubectl -n default get service kubernetes
The kubernetes service CLUSTER-IP should be in your new service CIDR.
kubernetes ClusterIP <none> 443/TCP 85m

Redeploy the ingress

If you do not have an ingress, skip to the next section.
Delete and redeploy it.
kubectl delete -f
sleep 3
kubectl delete -f
sleep 3
kubectl apply -f
sleep 3
kubectl apply -f
This example is for Azure.
Check the service.
kubectl -n ingress-nginx get service ingress-nginx
The CLUSTER-IP should be in your new service CIDR.
ingress-nginx LoadBalancer 80:31022/TCP,443:31035/TCP 67m


Redeploy the dashboard

kubectl delete -f
sleep 3
kubectl apply -f
Run kubectl proxy and test it.


Redeploy helm and tiller
kubectl -n kube-system delete service tiller-deploy
sleep 3
# Trick to force an upgrade: init with a dummy service account first
helm init --upgrade --service-account=test
sleep 3
helm init --upgrade --service-account=tiller

Redeploy all your services

Back up and restore all your services, so they will get a new IP inside your new CIDR.
kubectl -n YOUR-NAMESPACE get service YOUR-SERVICE -o yaml > YOUR-SERVICE.yml
Edit the manifest and delete all immutable fields, like uid.
Delete the clusterIP field.
Delete the status section.
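The manual deletions above can be scripted. The sketch below assumes you dump the service with -o json instead of -o yaml, so only the Python standard library is needed; the script name and file name are hypothetical:

```shell
# Write a small cleanup script (hypothetical name) that strips the
# server-populated / immutable fields from a dumped service manifest.
cat > clean-service.py <<'EOF'
import json, sys

path = sys.argv[1]
with open(path) as f:
    svc = json.load(f)

# Drop immutable / server-populated metadata fields
for field in ('uid', 'resourceVersion', 'creationTimestamp', 'selfLink'):
    svc.get('metadata', {}).pop(field, None)
# Let the API server assign a new ClusterIP in the new CIDR
svc.get('spec', {}).pop('clusterIP', None)
# Drop the status section entirely
svc.pop('status', None)

with open(path, 'w') as f:
    json.dump(svc, f, indent=2)
EOF

# Usage, assuming the service was dumped with -o json:
#   kubectl -n YOUR-NAMESPACE get service YOUR-SERVICE -o json > YOUR-SERVICE.json
#   python3 clean-service.py YOUR-SERVICE.json
```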
Delete and recreate your service.
kubectl -n YOUR-NAMESPACE delete service YOUR-SERVICE
kubectl apply -f YOUR-SERVICE.yml
Back up everything you need, like Load Balancer info, before deleting.