Kubectl Cheat Sheet

A list of useful kubectl commands.

General

Overview

https://kubernetes.io/docs/reference/kubectl/overview/

Install

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +rx ./kubectl
sudo mv ./kubectl /usr/local/bin
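
Verify the install by checking the client version:

kubectl version --client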

Enable autocomplete

sudo apt-get install bash-completion
source /usr/share/bash-completion/bash_completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
sudo su -
kubectl completion bash >/etc/bash_completion.d/kubectl

Enable autocomplete for an alias.

alias k=kubectl
source <(kubectl completion bash | sed 's/kubectl/k/g')
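
On newer kubectl versions the bash completion script also defines a __start_kubectl helper, so an alternative (only a sketch; the sed approach above works as well) is:

alias k=kubectl
complete -o default -F __start_kubectl k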

References

https://kubernetes.io/docs/tasks/tools/install-kubectl/#enabling-shell-autocompletion

Explain components

kubectl explain pods
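
You can also drill into nested fields, or print the whole field tree recursively:

kubectl explain pods.spec.containers
kubectl explain pods --recursive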

Run kubectl from inside a container

Connect to your container's TTY and make sure kubectl is installed inside it.

Import your Kubernetes config

When you are connected to a container deployed in a Kubernetes cluster, it already has access to the cluster's ServiceAccount token and certificates; you only need to import them into a kubectl config:

kubectl config set-cluster \
  default --server=https://kubernetes.default \
  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  
kubectl config set-context default --cluster=default
token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl config set-credentials user --token=$token
kubectl config set-context default --user=user
kubectl config use-context default

Do not replace any path or URL; the commands above can be used exactly as they are.

At this point you should have the file ~/.kube/config.

cat ~/.kube/config
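
As a quick sanity check (this assumes the pod's ServiceAccount is allowed to list pods in its own namespace):

kubectl get pods -n $(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)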

WORKAROUND: if, by any chance, you are having a hard time, you can take the /root/.kube/config file from your original installation and restore it inside your container.

Generate kubeconfig from ServiceAccount

server=https://192.168.99.101:8443
namespace=myproject-sysadmin
secretName=myproject-001-admin-token-wszv8

ca=$(kubectl -n $namespace get secret/$secretName -o jsonpath='{.data.ca\.crt}')
token=$(kubectl -n $namespace get secret/$secretName -o jsonpath='{.data.token}' | base64 --decode)
namespace=$(kubectl -n $namespace get secret/$secretName -o jsonpath='{.data.namespace}' | base64 --decode)

echo "
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    certificate-authority-data: ${ca}
    server: ${server}
contexts:
- name: default-context
  context:
    cluster: default-cluster
    namespace: ${namespace}
    user: default-user
current-context: default-context
users:
- name: default-user
  user:
    token: ${token}
" > $secretName.kubeconfig

Cluster management

Get cluster name

kubectl config get-clusters

Get cluster endpoints

kubectl cluster-info
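
For a much more detailed (and large) dump of the cluster state, useful for debugging:

kubectl cluster-info dump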

List all API resources

kubectl api-resources -o wide
kubectl api-resources --verbs=list -o name | xargs -n 1 kubectl get -o name
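
The list can be filtered, for example to namespaced resources only or to a single API group:

kubectl api-resources --namespaced=true
kubectl api-resources --api-group=apps -o wide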

Logs

Get logs from the previous instance of a restarted pod's container (YOUR-NAMESPACE, POD-NAME and CONTAINER-NAME are placeholders):

kubectl \
  -n YOUR-NAMESPACE logs \
  POD-NAME \
  -c CONTAINER-NAME --previous
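
To follow live logs from the current instance instead:

kubectl -n YOUR-NAMESPACE logs -f POD-NAME -c CONTAINER-NAME --tail=100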

Namespaces

Force delete namespace (hanging on "Terminating")

kubectl delete namespaces --grace-period=0 --force my-namespace

If the namespace is not deleted, check its manifest:

kubectl get namespace my-namespace -o yaml

Check if it has any finalizers, for example:

...
finalizers:
  - controller.cattle.io/namespace-auth
...

Edit it:

kubectl edit namespace my-namespace

And delete the finalizers block.

If that does not work, export the namespace manifest to a file:

kubectl get ns my-namespace -o json > my-namespace.json

Edit the file and, in the finalizers block, remove "kubernetes" (or any other remaining finalizer). Then submit it to the namespace's finalize subresource:

kubectl replace --raw "/api/v1/namespaces/my-namespace/finalize" -f ./my-namespace.json
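
If your kubectl version does not support replace --raw, an equivalent sketch using kubectl proxy (assuming its default port 8001) is:

kubectl proxy &
curl -H "Content-Type: application/json" -X PUT --data-binary @./my-namespace.json \
  http://127.0.0.1:8001/api/v1/namespaces/my-namespace/finalize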

Nodes

Get nodes

kubectl get nodes --show-labels
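
To inspect a single node in detail, or to see node resource usage (NODE-NAME is a placeholder; the second command assumes metrics-server is installed in the cluster):

kubectl describe node NODE-NAME
kubectl top nodes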

Permission

can-i

kubectl auth can-i list deployment
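
The check can be scoped to a namespace or run on behalf of a ServiceAccount (the names below are only examples):

kubectl auth can-i create pods --namespace=my-namespace
kubectl auth can-i list pods --as=system:serviceaccount:my-namespace:default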

Pods

Connect to pod TTY

The right way

List your pods:

kubectl get pods

Locate the one you want to access, get its name, and run:

kubectl exec -it hal-66b97c4c88-b675b -- bash

Replace hal-66b97c4c88-b675b with your pod name. Note that kubectl exec does not have a --user flag; if you need a shell as a different user (for example root), see the workaround below.

If your namespace has only one pod, you can use a single command:

NAMESPACE=YOUR-NAMESPACE
kubectl -n $NAMESPACE \
  exec -it \
  $(kubectl -n $NAMESPACE get pods | sed -n 2p | awk '{print $1}') -- bash
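
If the namespace has several pods, you can pick one by label instead (this assumes your pods carry a label such as app=my-app; adjust the selector to your deployment):

NAMESPACE=YOUR-NAMESPACE
kubectl -n $NAMESPACE exec -it \
  $(kubectl -n $NAMESPACE get pods -l app=my-app -o jsonpath='{.items[0].metadata.name}') -- bash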

Workaround

If for any reason you cannot use kubectl exec (for example, because you need a root shell and kubectl exec cannot switch users), SSH to the K8s worker node that is hosting your pod. This workaround assumes the node runs the Docker container runtime.

Locate the container you want to connect to:

docker ps | grep "halyard"

Replace halyard with any keyword you want.

Then connect to it:

docker exec -it --user root 261d763bf353 bash

Force delete pod

Never force pod deletion unless it is really necessary.

If you have a pod that is referenced by a ReplicaSet that no longer exists and you are stuck, force the pod deletion:

kubectl -n PUT-YOUR-NAMESPACE-HERE \
  delete pod PUT-YOUR-POD-NAME-HERE \
  --grace-period=0 --force

Replace PUT-YOUR-NAMESPACE-HERE with your namespace and PUT-YOUR-POD-NAME-HERE with your pod name.
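
Before resorting to a forced delete, it is usually worth checking the pod's events for the underlying cause (same placeholders as above):

kubectl -n PUT-YOUR-NAMESPACE-HERE describe pod PUT-YOUR-POD-NAME-HERE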

RBAC

List (Cluster)RoleBindings and the ServiceAccount(s) they reference:

kubectl get rolebindings,clusterrolebindings \
  --all-namespaces  \
  -o custom-columns='KIND:kind,NAMESPACE:metadata.namespace,NAME:metadata.name,SERVICE_ACCOUNTS:subjects[?(@.kind=="ServiceAccount")].name'
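
For example, to list which ClusterRoleBindings grant cluster-admin (requires jq):

kubectl get clusterrolebindings -o json \
  | jq -r '.items[] | select(.roleRef.name=="cluster-admin") | .metadata.name'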

Resources

List pods resource limits

kubectl -n cxc get pod -o custom-columns=NAME:.metadata.name,MLIMIT:.spec.containers[*].resources.limits.memory
kubectl -n myns get pods -o json | jq '.items[].spec.containers[].resources.limits.cpu'
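
To compare the configured limits with actual usage per container (assumes metrics-server is installed in the cluster):

kubectl -n myns top pods --containers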
