Change serviceSubnet CIDR
Change the IP range of your services.
The following process has a problem: after everything is configured, the pods come up with the old IP as the DNS nameserver in /etc/resolv.conf.
Since I still have not found a solution, I had to reset the entire cluster with kubeadm reset and init it again.
Scenario
You have your cluster up and running services on, let's say, CIDR 10.10.0.0/24, and you want to change it to 10.5.0.0/24.
Double-check your new CIDR for conflicts
It is a good idea to check that your new CIDR does not overlap with anything else in your network, for example your pod subnet.
To do so, you can run a short Python check; install the ipaddr package first (pip install ipaddr) if you need it.
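A minimal sketch, assuming the example CIDRs above and a pod subnet of 10.244.0.0/16 (adjust both to your own values):

```python
import ipaddr

# Hypothetical values: your new service CIDR and your pod subnet
service_cidr = ipaddr.IPNetwork('10.5.0.0/24')
pod_cidr = ipaddr.IPNetwork('10.244.0.0/16')

# overlaps() returns True if the two networks share any addresses
print(service_cidr.overlaps(pod_cidr))  # should print False
```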
Dump your current cluster config
SSH to your master node and run:
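A sketch; kubeadm stores its ClusterConfiguration in the kubeadm-config ConfigMap, and kubeadm.yaml is just the file name used in the steps below:

```bash
# Dump the ClusterConfiguration kubeadm keeps in the kube-system namespace
kubectl -n kube-system get configmap kubeadm-config \
  -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml
```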
Add your new service CIDR
Edit your new config file and change the serviceSubnet entry in the networking section:
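The relevant part of the file looks like this (a sketch with the example CIDRs from this scenario; keep your own dnsDomain and podSubnet):

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16    # example value, keep yours
  serviceSubnet: 10.5.0.0/24  # was 10.10.0.0/24
```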
Update certificates
At the time of this writing (2019-06-07), kubeadm upgrade does not support updating API server certSANs: https://github.com/kubernetes/kubeadm/issues/1447
To update the certificates manually, follow the steps below.
Check your current certificate
First of all, check your current certificate:
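On a kubeadm cluster the API server certificate lives in /etc/kubernetes/pki:

```bash
# Print the API server certificate and inspect its SANs
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text
```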
You will see a section like this one:
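Something like this (the values are illustrative; note the old first service IP 10.10.0.1 and the node IP 10.0.0.4):

```text
X509v3 Subject Alternative Name:
    DNS:master, DNS:kubernetes, DNS:kubernetes.default,
    DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local,
    IP Address:10.10.0.1, IP Address:10.0.0.4
```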
Check all the "IP Address" entries; you will notice your new service CIDR IPs are not there yet.
Delete all certificates
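A sketch; only the API server certificate embeds the first service IP, and kubeadm regenerates only certificates that are missing, so move that pair aside (which also keeps a backup):

```bash
cd /etc/kubernetes/pki
mkdir -p backup
# Move the API server certificate and key out of the way
mv apiserver.crt apiserver.key backup/
```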
Generate new certificates
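Using the config file edited above (kubeadm init phase requires kubeadm v1.13+):

```bash
# Recreate the API server certificate with the new serviceSubnet,
# so the new first service IP (10.5.0.1) lands in the SANs
kubeadm init phase certs apiserver --config kubeadm.yaml
```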
Check your new certificate and note the first IP of your new service CIDR, in this case IP Address:10.5.0.1.
Restart services
Reload the systemd daemon and restart the kubelet:
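```bash
systemctl daemon-reload
systemctl restart kubelet
```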
Restart the kube-apiserver docker container:
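A sketch; the kubelet prefixes static pod containers with k8s_, so you can find the API server container by name:

```bash
# Restart the kube-apiserver container so it starts serving the new cert
docker restart $(docker ps -q --filter name=k8s_kube-apiserver)
```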
Test it
You can test the new certificates running:
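One way is to ask the running API server for the certificate it serves (a sketch; 6443 is the default API server port):

```bash
echo | openssl s_client -connect 10.0.0.4:6443 2>/dev/null \
  | openssl x509 -noout -text
```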
Change 10.0.0.4 to your server IP.
References
https://github.com/kubernetes/kubeadm/issues/1447#issuecomment-490494999
Redeploy kube-dns
Now you have two options. Both will throw an error, but both should end with the same result.
Long story short, kubeadm upgrade apply will recreate the kube-dns service for you.
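The upgrade command itself looks like this (a sketch; v1.14.2 is a stand-in for your cluster's current version):

```bash
# Re-apply the same cluster version, but with the updated config
kubeadm upgrade apply v1.14.2 --config kubeadm.yaml
```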
1) Delete kube-dns BEFORE upgrading the cluster.
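Deleting the service is plain kubectl; run the upgrade command right after:

```bash
kubectl -n kube-system delete service kube-dns
```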
You will see this error:
[upgrade/apply] FATAL: failed to retrieve the current etcd version: context deadline exceeded
Just run the upgrade command again.
2) Delete kube-dns AFTER upgrading the cluster.
You will see this error:
[upgrade/postupgrade] FATAL post-upgrade error: unable to create/update the DNS service: Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "10.10.0.10": field is immutable
Delete the service and run the upgrade command again.
Check the service:
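For example:

```bash
kubectl -n kube-system get service kube-dns
```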
Your kube-dns CLUSTER-IP should be in your new services CIDR.
Fix kubelet ConfigMap
I am not completely sure how to do this step properly yet.
Edit kubelet ConfigMap:
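A sketch; kubeadm names the ConfigMap after the kubelet minor version (kubelet-config-1.14 here is illustrative, adjust to yours):

```bash
kubectl -n kube-system edit configmap kubelet-config-1.14
```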
Replace the old clusterDNS IP with your new DNS IP (10.10.0.10 becomes 10.5.0.10 in this example).
Then run:
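A guess at the missing step, assuming the kubeadm v1.14-era workflow: re-sync the node-local kubelet config from the ConfigMap and restart the kubelet (the version is illustrative):

```bash
kubeadm upgrade node config --kubelet-version v1.14.2
systemctl restart kubelet
```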
Redeploy the kubernetes service
This service exposes the API.
Delete it and it should be recreated automatically:
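The service is named kubernetes and lives in the default namespace:

```bash
kubectl delete service kubernetes
kubectl get service kubernetes   # recreated automatically by the API server
```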
The kubernetes service CLUSTER-IP should be in your new services CIDR (the first IP, 10.5.0.1, in this example).
Redeploy the ingress
If you do not have an ingress, go to the next section.
Delete and redeploy it.
This example is for Azure.
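A sketch; deleting the ingress-nginx namespace removes the controller and its service, and the manifests to re-apply are in the deploy guide linked in the references below:

```bash
kubectl delete namespace ingress-nginx
# then re-apply the mandatory and Azure provider manifests from the guide
```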
Check the service.
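For example:

```bash
kubectl -n ingress-nginx get service
```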
The CLUSTER-IP should be in the new services CIDR.
References
https://kubernetes.github.io/ingress-nginx/deploy/#azure
Redeploy the dashboard
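You can delete and re-apply the dashboard manifest (a sketch; <dashboard-manifest.yaml> is a placeholder for the recommended manifest from the dashboard repo linked in the references below):

```bash
kubectl delete -f <dashboard-manifest.yaml>
kubectl apply -f <dashboard-manifest.yaml>
```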
Run kubectl proxy and test it.
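Assuming the dashboard runs in kube-system behind the service kubernetes-dashboard (the defaults at the time of writing):

```bash
kubectl proxy
# then open:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```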
References
https://github.com/kubernetes/dashboard
Redeploy helm and tiller
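A sketch for Helm v2, assuming tiller was installed into kube-system with a tiller service account:

```bash
# Remove the existing tiller deployment and service
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete service tiller-deploy
# Re-initialize tiller so its service gets an IP from the new CIDR
helm init --service-account tiller --upgrade
```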
Redeploy all your services
Back up and restore all your services so they get a new IP in your new CIDR.
Edit the manifest and delete all immutable fields, like uid.
Delete the clusterIP field.
Delete the status section.
Delete and recreate your service, as in the sketch below.
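A minimal sketch with a hypothetical service named my-service:

```bash
# Save the live manifest
kubectl get service my-service -o yaml > my-service.yaml
# Edit my-service.yaml: remove metadata.uid, metadata.resourceVersion,
# metadata.creationTimestamp, spec.clusterIP and the whole status section
kubectl delete service my-service
kubectl apply -f my-service.yaml   # gets a new clusterIP from the new CIDR
```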
Back up everything you need, like Load Balancer info, etc.