Deployment examples

Some templates to help.

kubectl run

# Long-running Ubuntu pod for ad-hoc debugging
kubectl run --image=ubuntu:18.04 tmp-app --command -- tail -f /dev/null
# Interactive Ubuntu session, removed on exit
kubectl run -it --rm aks-ssh --image=ubuntu:18.04
# Interactive busybox shell, removed on exit
kubectl run -it --rm busybox --image=busybox --restart=Never -- sh

aks-mgmt

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-mgmt
  labels:
    app: aks-mgmt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-mgmt
  template:
    metadata:
      labels:
        app: aks-mgmt
    spec:
      containers:
      - image: tadeugr/aks-mgmt
        name: aks-mgmt
        command: ["/bin/bash","-c"]
        args: ["/start.sh; tail -f /dev/null"]
        ports:
        - containerPort: 8080

Ubuntu
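
A minimal sketch, following the aks-mgmt template above: a plain Ubuntu container kept alive with tail so you can exec into it (name and labels are placeholders).

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu
  labels:
    app: ubuntu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - image: ubuntu:18.04
        name: ubuntu
        # Keep the container alive so you can kubectl exec into it
        command: ["/bin/bash","-c"]
        args: ["tail -f /dev/null"]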

Nginx
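
Again a minimal sketch along the same lines, serving nginx on port 80 (names are placeholders).

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:stable
        name: nginx
        ports:
        - containerPort: 80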

Bitcoin mining

Create the manifest:
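
A minimal sketch based on alexellis' mine-with-docker (see References); the image tag, miner binary and flags are assumptions, so double-check them against the repo's README:

---
apiVersion: v1
kind: Namespace
metadata:
  name: miner
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cpuminer
  namespace: miner
  labels:
    app: cpuminer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cpuminer
  template:
    metadata:
      labels:
        app: cpuminer
    spec:
      containers:
      - image: alexellis2/cpu-opt:2018-1-2
        name: cpuminer
        command: ["./cpuminer"]
        # -o: your nearest stratum server; -u: your wallet address
        args: ["-a", "cryptonight", "-o", "stratum+tcp://xmr-eu1.nanopool.org:14444", "-u", "YOUR_WALLET_ADDRESS"]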

You must read alexellis’ documentation regarding each parameter. The most important one right now is -u: it is your wallet address.

Also read about -o, and find the address of your nearest stratum server.

Deploy it:
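
Assuming you saved the manifest above as miner.yml:

kubectl apply -f miner.yml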

Scale out:
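
For example, to run five miners (size this to your cluster; it will eat all the CPU you give it):

kubectl -n miner scale deployment cpuminer --replicas=5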

Testing

Double check if your pods are running and healthy:
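
kubectl -n miner get pods -o wide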

Access one of your nodes and make sure "cpuminer" is running and using your wallet address.
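
For example, once on the node:

ps aux | grep cpuminer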

Rollback

Delete all resources:
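
Assuming the manifest above (deleting it removes the namespace and everything in it):

kubectl delete -f miner.yml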

References

https://github.com/alexellis/mine-with-docker

Inter-process communications (IPC)

Image credit: https://www.mirantis.com

Create the manifest:
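
The original manifest lives in the bitbucket repo referenced below; here is a minimal sketch of the same idea. Both nginx containers live in one pod and share its network namespace, so the proxy reaches the HTTP server over 127.0.0.1. Names, ports and header values are assumptions chosen to match the test further down.

---
apiVersion: v1
kind: Namespace
metadata:
  name: multi-container-ipc
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-conf
  namespace: multi-container-ipc
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 80;
        location / {
          # Same pod, same network namespace: reach the HTTP container via localhost
          proxy_pass http://127.0.0.1:8080;
          add_header TestProxy true;
        }
      }
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: http-conf
  namespace: multi-container-ipc
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 8080;
        location / {
          add_header TestHTTP true;
          return 200 'Hello from the HTTP container\n';
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ipc-proxy
  namespace: multi-container-ipc
  labels:
    app: ipc-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ipc-proxy
  template:
    metadata:
      labels:
        app: ipc-proxy
    spec:
      containers:
      - name: proxy
        image: nginx:stable
        ports:
        - containerPort: 80
        volumeMounts:
        - name: proxy-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      - name: http
        image: nginx:stable
        volumeMounts:
        - name: http-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      volumes:
      - name: proxy-conf
        configMap:
          name: proxy-conf
      - name: http-conf
        configMap:
          name: http-conf
---
apiVersion: v1
kind: Service
metadata:
  name: ipc-proxy
  namespace: multi-container-ipc
spec:
  type: LoadBalancer
  selector:
    app: ipc-proxy
  ports:
  - port: 80
    targetPort: 80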

Deploy it:
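
Assuming you saved the manifest above as ipc-proxy.yml:

kubectl apply -f ipc-proxy.yml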

Testing

Get your pod’s endpoint:
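
kubectl -n multi-container-ipc get service ipc-proxy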

Access your Load Balancer endpoint in your browser:

Open your browser’s network inspector and check Response Headers. You should see TestProxy (which was added by nginx on the proxy container) and TestHTTP (which was added by nginx on the HTTP container).
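
You can also check the headers from the command line (the hostname is a placeholder for your own endpoint):

curl -I http://<your-load-balancer-endpoint>/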

References

https://bitbucket.org/devopsbuzz/devops/src/master/kubernetes/deploy/basic/ipc-proxy-000/

https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/

Shared volumes

Image credit: https://www.mirantis.com

Create the manifest:
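
The original manifest lives in the bitbucket repo referenced below; here is a minimal sketch of the same pattern. A debian container rewrites index.html every second while an nginx container serves it, both through the same emptyDir volume. Names and images are assumptions, except the namespace, which the testing step below expects.

---
apiVersion: v1
kind: Namespace
metadata:
  name: multi-container-shared-volume
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shared-volume
  namespace: multi-container-shared-volume
  labels:
    app: shared-volume
spec:
  replicas: 1
  selector:
    matchLabels:
      app: shared-volume
  template:
    metadata:
      labels:
        app: shared-volume
    spec:
      containers:
      - name: writer
        image: debian:stretch
        command: ["/bin/bash","-c"]
        # Overwrite index.html with the current date every second
        args: ["while true; do date > /html/index.html; sleep 1; done"]
        volumeMounts:
        - name: html
          mountPath: /html
      - name: web
        image: nginx:stable
        ports:
        - containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: shared-volume
  namespace: multi-container-shared-volume
spec:
  type: LoadBalancer
  selector:
    app: shared-volume
  ports:
  - port: 80
    targetPort: 80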

Deploy it:
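
Assuming you saved the manifest above as shared-volumes.yml:

kubectl apply -f shared-volumes.yml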

Testing

Check the index.html file being updated every second:
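
For example (run it a few times; the pod suffix is a placeholder, the container name comes from the manifest above):

kubectl -n multi-container-shared-volume exec shared-volume-<pod-id> -c web -- cat /usr/share/nginx/html/index.html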

You must have kubectl autocompletion enabled to autocomplete your pod’s name. Otherwise, get your pod’s name first by running kubectl -n multi-container-shared-volume get pods.

Get your pod’s endpoint:
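
kubectl -n multi-container-shared-volume get service shared-volume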

Access your Load Balancer endpoint in your browser; you should see something like this:

References

https://bitbucket.org/devopsbuzz/devops/src/master/kubernetes/deploy/basic/shared-volumes-000/

https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/

Spinnaker with Halyard

Keep in mind that there are several ways to deploy and use Spinnaker. For example, you can install it on your host server or run a docker image on any server. You are not obligated to deploy it on Kubernetes and deal with Halyard or Helm. If you are looking for a Quick Start, read this documentation: https://www.spinnaker.io/setup/quickstart/

I’m writing this post because this was the easiest, fastest and most reliable way I found. Also, I wanted an “all Kubernetes” solution, centralizing everything in my cluster.

Hardware requirements

  • At least 2 vCPU available;

  • Approximately 13GB of RAM available on the nodes (seriously, less than that is not enough and will result in a timeout during the deploy).

Create Spinnaker accounts

Create the manifest:
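
A minimal sketch: a service account with cluster-admin rights, which is what Halyard needs to deploy Spinnaker (the account name is an assumption, reused in the serviceAccountName step further down):

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinnaker-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spinnaker-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: spinnaker-service-account
  namespace: default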

Deploy it:
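
Assuming you saved the manifest above as spinnaker-accounts.yml:

kubectl apply -f spinnaker-accounts.yml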

Create tiller service account

The tiller account will be used later by Helm.

Create tiller service account:
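
kubectl -n kube-system create serviceaccount tiller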

Create tiller cluster role binding:
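
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller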

Create Spinnaker namespace

To create Spinnaker namespace, run:
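
kubectl create namespace spinnaker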

Create Spinnaker services

Create the manifest:
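
A sketch of the two LoadBalancer services: spin-deck-np fronts the dashboard on port 9000 and spin-gate-np fronts the API on port 8084. The selector labels are assumptions; verify them against the labels on the spin-deck and spin-gate pods once Spinnaker is deployed.

---
apiVersion: v1
kind: Service
metadata:
  name: spin-deck-np
  namespace: spinnaker
spec:
  type: LoadBalancer
  selector:
    app: spin
    cluster: spin-deck
  ports:
  - port: 9000
    targetPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: spin-gate-np
  namespace: spinnaker
spec:
  type: LoadBalancer
  selector:
    app: spin
    cluster: spin-gate
  ports:
  - port: 8084
    targetPort: 8084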

Deploy it:
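
Assuming you saved the manifest above as spinnaker-services.yml:

kubectl apply -f spinnaker-services.yml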

At this point Kubernetes will create Load Balancers and allocate IPs.

Deploy Halyard docker image

Create Halyard deployment:
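
One way to do it, assuming the official Halyard image (published under gcr.io/spinnaker-marketplace at the time of writing):

# On kubectl of this era, run creates a Deployment named "hal"
kubectl -n spinnaker run hal --image=gcr.io/spinnaker-marketplace/halyard:stable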

It will take a few minutes for Kubernetes to download the image and create the pod. You can see the progress by getting your deployments:
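
kubectl -n spinnaker get deployments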

After your Halyard deployment is completed, let’s edit the serviceAccountName:
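
Assuming the deployment name used above:

kubectl -n spinnaker edit deployment hal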

The configuration file will be opened in your text editor.

Add the serviceAccountName to the spec just above the containers:
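
Assuming the account created earlier:

    spec:
      serviceAccountName: spinnaker-service-account
      containers:
      # leave the existing containers section unchanged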

Save and close the file. Kubernetes will automatically update the deployment and start a new pod with the new credentials:
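
kubectl -n spinnaker get pods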

Wait until Kubernetes finishes Terminating the old pod and ContainerCreating the new one, so that all pods are Running.

Configure Halyard

Now you need root access to the Halyard container.

At the time of this writing (2018-05-25), the Halyard Docker container does not allow logging in as root.

So SSH to the node where Halyard was deployed, then connect to its container TTY. You need to follow the Workaround section of how to access a container TTY to use bash; a sketch follows.
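
From the node itself (the container ID is whatever docker ps reports for the Halyard container):

# Find the Halyard container on the node, then open a root shell in it
docker ps | grep halyard
docker exec -it -u root <container-id> bash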

The Halyard container already has kubectl installed; you only need to configure it and run kubectl from inside the container.

At this point you should have:

  • Spinnaker and Tiller accounts in your Kubernetes cluster.

  • Spinnaker namespace in your Kubernetes cluster.

  • Spinnaker services and Load Balancers endpoints in your Kubernetes cluster.

  • Halyard docker image deployed in a pod.

  • Root access to your Halyard docker image.

  • kubectl configured to manage your cluster.

Is everything OK? Let’s move on…

Connected as root in your Halyard container, temporarily allow the spinnaker user to access the root folder:
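
One blunt way to do it (undo it once Helm is installed):

chmod 777 /root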

Download and install Helm in your Halyard’s container:
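
For example, for Helm v2, the current series at the time of writing (the exact version is an assumption):

curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
tar -zxvf helm-v2.9.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm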

Init Helm using the tiller account we created earlier:
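
helm init --service-account tiller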

Configure Spinnaker

Configure Docker registry

I’m using Docker Hub, but Spinnaker supports different docker registries.

Export environment variables:
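
All values here are placeholders for your own account:

export ADDRESS=index.docker.io
export REPOSITORIES=your-dockerhub-user/your-repo
export USERNAME=your-dockerhub-user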

TIP: this config uses a custom Docker Hub account and repository. You can use any public one if you want to keep it simple for now, for example (no username or password required):
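
export ADDRESS=index.docker.io
export REPOSITORIES=library/nginx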

Add Docker Registry provider:
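
The account name my-docker-registry is a placeholder; --password with no value makes hal prompt for it:

hal config provider docker-registry enable
hal config provider docker-registry account add my-docker-registry \
  --address $ADDRESS \
  --repositories $REPOSITORIES \
  --username $USERNAME \
  --password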

Enter your password when prompted.

Check if everything is OK:
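
For example, list the registry accounts hal knows about:

hal config provider docker-registry account list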

Configure storage

I’m using AWS S3, but Spinnaker supports different storages.

Export your AWS credentials:
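
export YOUR_ACCESS_KEY_ID=<your-aws-access-key-id>
export YOUR_SECRET_ACCESS_KEY=<your-aws-secret-access-key>
export REGION=<your-aws-region>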

Replace all variables with your info.

Add storage:
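
hal config storage s3 edit \
  --access-key-id $YOUR_ACCESS_KEY_ID \
  --secret-access-key $YOUR_SECRET_ACCESS_KEY \
  --region $REGION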

Then apply your config:
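
hal config storage edit --type s3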

You can access your S3 and see that Halyard created a bucket with the following prefix: spin-

Configure Kubernetes provider

Set up Spinnaker to deploy into Kubernetes:
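
The account and registry names below are the placeholders used earlier in this section:

hal config provider kubernetes enable
hal config provider kubernetes account add my-k8s-account \
  --docker-registries my-docker-registry \
  --context $(kubectl config current-context)
hal config deploy edit \
  --account-name my-k8s-account \
  --type distributed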

Configure Spinnaker version

First, check which is the latest Spinnaker version available:
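
hal version list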

At the time of this writing (2018-05-25), the latest version is 1.7.4:
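
Pin your deployment to it:

hal config version edit --version 1.7.4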

Configure Spinnaker Dashboard access

You could deploy Spinnaker now, but do not do it yet. If you do, Spinnaker itself will work, but you would need to deal with boring SSH tunneling stuff to access its dashboard.

There is an easier way: use your Load Balancer endpoint to access Spinnaker dashboard.

To do so, first you need to know the endpoints of spin-deck-np and spin-gate-np services.

Describe your services:
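
kubectl -n spinnaker describe services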

If you have too many services, save the output of the command above in a file:
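
kubectl -n spinnaker describe services > /tmp/output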

From your services description output (either on the screen or inside /tmp/output), let’s search your endpoints.

Find the spin-deck-np section and get the LoadBalancer Ingress URL inside it.

Then find the spin-gate-np section and get the LoadBalancer Ingress URL inside it.

For example:
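
(Hypothetical endpoints, for illustration only.)

spin-deck-np:  LoadBalancer Ingress:  a0000000000aaaa-0000000000.us-east-1.elb.amazonaws.com
spin-gate-np:  LoadBalancer Ingress:  a1111111111bbbb-1111111111.us-east-1.elb.amazonaws.com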

Update Halyard spin-deck-np config using your spin-deck-np endpoint:
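
Replace the placeholder with your own endpoint:

hal config security ui edit --override-base-url http://<your-spin-deck-np-endpoint>:9000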

Do not forget to use port 9000 for spin-deck-np.

Update Halyard spin-gate-np config using your spin-gate-np endpoint:
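
Replace the placeholder with your own endpoint:

hal config security api edit --override-base-url http://<your-spin-gate-np-endpoint>:8084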

Do not forget to use port 8084 for spin-gate-np.

Deploy Spinnaker

Finally!

To deploy Spinnaker, run:
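
hal deploy apply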

Go grab a coffee (or tea, or water). It will run for quite some time (for me, on a 16GB RAM server, it took about 35 minutes).

Open another terminal where you can use kubectl to connect to your cluster (it doesn’t need to be from inside Halyard container) and monitor the progress.

Wait until all pods are READY and RUNNING:
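
# Re-run (or add --watch) until everything is READY and Running
kubectl -n spinnaker get pods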

Expose Spinnaker ports

Go back to your Halyard TTY (the one where you ran hal deploy apply earlier) and run:
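
hal deploy connect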

Now you can press CTRL+C to exit the command above (the deploy connect is already done).

Testing

At this point you should be fine (a little stressed, but alive).

In your browser, access the spin-deck-np endpoint on port 9000.

For example (scroll all the way right):
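
(A hypothetical endpoint, for illustration only.)

http://a0000000000aaaa-0000000000.us-east-1.elb.amazonaws.com:9000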

You should see the Spinnaker dashboard:

Click on Actions, then Create Application, to make sure everything is OK.

Extra tips

Back up the Halyard config in a safe place:
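
Halyard has a built-in backup command; run it inside the Halyard container, then copy the tarball out from your workstation (the pod name and path are placeholders):

hal backup create
# hal prints where it wrote the tarball; copy it out of the pod:
kubectl -n spinnaker cp <halyard-pod>:<backup-path> ./halyard-backup.tar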

When I say a “safe place”, I mean outside the Halyard container and outside your cluster. If, for any reason, you need to redeploy Spinnaker or rebuild your entire cluster from scratch, the Halyard config will be deleted.

You could restore everything by running all the steps in this post again, but believe me, backing up the Halyard config avoids headaches.

Troubleshooting

If you cannot see the Spinnaker Dashboard and/or your deployments and pods are not healthy, start all the steps from scratch (it can be complex if this is your first time).

If you can see Spinnaker Dashboard but can’t load any other screen or can’t perform any action, chances are you missed exposing Spinnaker ports.

If you need further troubleshooting, learn how to redeploy Spinnaker.

Rollback

Clean up everything:
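
From inside the Halyard container, hal can tear down everything it deployed; then remove the supporting resources created earlier (names as used throughout this section):

hal deploy clean
kubectl delete namespace spinnaker
kubectl delete clusterrolebinding spinnaker-role-binding
kubectl -n default delete serviceaccount spinnaker-service-account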

References

https://www.mirantis.com/blog/how-to-deploy-spinnaker-on-kubernetes-a-quick-and-dirty-guide/

https://blog.spinnaker.io/exposing-spinnaker-to-end-users-4808bc936698

Ubuntu with interface and NoVNC access

Create the deployment yml:
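
A minimal sketch, assuming the chenjr0719/ubuntu-unity-novnc image referenced below; 6080 is noVNC's usual web port, but verify it against the image documentation:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-novnc
  labels:
    app: ubuntu-novnc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu-novnc
  template:
    metadata:
      labels:
        app: ubuntu-novnc
    spec:
      containers:
      - image: chenjr0719/ubuntu-unity-novnc:latest
        name: ubuntu-novnc
        ports:
        - containerPort: 6080
---
apiVersion: v1
kind: Service
metadata:
  name: ubuntu-novnc
spec:
  type: LoadBalancer
  selector:
    app: ubuntu-novnc
  ports:
  - port: 6080
    targetPort: 6080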

Then run it:
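
Assuming you saved it as ubuntu-novnc.yml:

kubectl apply -f ubuntu-novnc.yml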

References

https://hub.docker.com/r/chenjr0719/ubuntu-unity-novnc/tags/

Unifi

Deploy Unifi controller

SSH to the node which will host the controller.

Create the unifi user:
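
For example (a system user with no login shell):

sudo useradd -r -s /usr/sbin/nologin unifi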

Create the folder to store files:
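
The path is an assumption; it just has to match the hostPath in the deployment below:

sudo mkdir -p /opt/unifi
sudo chown -R unifi:unifi /opt/unifi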

From your workstation, connect to the cluster with kubectl.

Create namespace:
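
kubectl create namespace unifi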

Deploy the controller:
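
A sketch, assuming the linuxserver/unifi image and the folder created above; the nodeSelector pins the pod to the prepared node (the hostname and the PUID/PGID values are placeholders — check id unifi on the node):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: unifi
  namespace: unifi
  labels:
    app: unifi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: unifi
  template:
    metadata:
      labels:
        app: unifi
    spec:
      nodeSelector:
        kubernetes.io/hostname: <your-node-name>
      containers:
      - image: linuxserver/unifi:latest
        name: unifi
        env:
        # Must match the uid/gid of the unifi user created on the node
        - name: PUID
          value: "999"
        - name: PGID
          value: "999"
        volumeMounts:
        - name: unifi-data
          mountPath: /config
      volumes:
      - name: unifi-data
        hostPath:
          path: /opt/unifi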

Expose Unifi ports
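
A sketch of a service for the controller's usual ports: 8443 for the web UI and 8080 for device inform (verify both against the Unifi documentation). STUN (3478/UDP) is left out because most cloud load balancers refuse services mixing TCP and UDP; expose it separately if you need it.

---
apiVersion: v1
kind: Service
metadata:
  name: unifi
  namespace: unifi
spec:
  type: LoadBalancer
  selector:
    app: unifi
  ports:
  - name: web-ui
    port: 8443
    targetPort: 8443
  - name: inform
    port: 8080
    targetPort: 8080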
