Kubernetes Tutorial for Swift on the Server
In this tutorial, you’ll learn how to use Kubernetes to deploy a Kitura server that’s resilient, with crash recovery and replicas. You’ll start by using the kubectl CLI, then use Helm to combine it all into one command. By David Okun.
Contents
Kubernetes Tutorial for Swift on the Server
35 mins
- Cloud Native Development and Swift
- Cloud Native Computing Foundation
- Swift and Kubernetes
- Getting Started
- Installing Docker and Kubernetes
- Enabling Kubernetes in Docker
- Creating Your RazeKube
- Installing the Kitura CLI
- Running RazeKube
- Crashing Your RazeKube
- Kubernetes and the State Machine
- Building and Running Your RazeKube Docker Image
- Tagging Your RazeKube Docker Image
- Deploying RazeKube to Kubernetes
- Creating a RazeKube Service
- Recovering From a Crash
- Deploying Replicas
- Cleaning Up
- Helm: The Kubernetes Package Manager
- What’s in a Chart?
- Setting Up Helm and Tiller
- Deploying RazeKube With Helm
- Behold Your Charted RazeKube!
- Where to Go From Here?
Deploying Replicas
Running/not-running isn’t the only state Kubernetes can manage. Consider a scenario in which a bunch of people have heard about the almighty Kube and want to check out its power. You’ll need more than one copy of your app running concurrently to handle all that traffic!
In Terminal, enter the following command:
kubectl scale --replicas=5 deployment razekube
Typically, with heavier apps, you could enter this command to watch the rollout happen in real time:
kubectl rollout status deployment razekube
But RazeKube is a fairly lightweight app, so the change happens almost immediately.
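The scale command is the imperative route; you can express the same desired state declaratively in a manifest and apply it with kubectl apply -f. Here's a minimal sketch of such a Deployment (the labels and selector are assumptions for illustration; the image name comes from the earlier build steps):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: razekube
spec:
  replicas: 5                  # desired state: five pods at all times
  selector:
    matchLabels:
      app: razekube            # hypothetical label; must match the pod template below
  template:
    metadata:
      labels:
        app: razekube
    spec:
      containers:
        - name: razekube
          image: razekube-swift-run:1.0.0
```

Either way, Kubernetes records the desired state and continuously works to make reality match it.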
Enter kubectl get pods and kubectl get deployments to check out the new app state:
➜ kubectl get pods
NAME READY STATUS RESTARTS AGE
razekube-6dfd6844f7-74j7f 1/1 Running 4 32m
razekube-6dfd6844f7-88wr7 1/1 Running 0 1m
razekube-6dfd6844f7-b4snx 1/1 Running 0 1m
razekube-6dfd6844f7-tn6mr 1/1 Running 0 1m
razekube-6dfd6844f7-vnr7w 1/1 Running 0 1m
➜ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
razekube 5 5 5 5 33m
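With more pods in play, it can help to filter this output rather than eyeball it. Here's a small sketch that counts the Running pods; it operates on a captured sample of the output above (hypothetical pod names) so it runs without a live cluster — against a real cluster you'd pipe kubectl get pods into the same awk filter:

```shell
# Sample of the `kubectl get pods` output above, captured as a string
# so this snippet runs without a live cluster (pod names are hypothetical)
pods='NAME                        READY   STATUS    RESTARTS   AGE
razekube-6dfd6844f7-74j7f   1/1     Running   4          32m
razekube-6dfd6844f7-88wr7   1/1     Running   0          1m
razekube-6dfd6844f7-b4snx   1/1     Running   0          1m
razekube-6dfd6844f7-tn6mr   1/1     Running   0          1m
razekube-6dfd6844f7-vnr7w   1/1     Running   0          1m'

# Count rows whose STATUS column (field 3) reads Running, skipping the header
running=$(echo "$pods" | awk 'NR > 1 && $3 == "Running"' | wc -l)
echo "$running"
```

If all five replicas are healthy, the count matches the DESIRED column of kubectl get deployments.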
In this case, you’ve told etcd that the desired state of your cluster is to have 5 replicas of your razekube deployment.
Hit your /uhoh route a couple of times, and repeatedly type kubectl get pods in Terminal to observe the state of your pods as they work to maintain their dictated state!
Kubernetes can manage so much more than just these two examples. You can do things like:
- Manage TLS certificate secrets for encrypted traffic.
- Create an Ingress controller to handle where certain traffic goes into your cluster.
- Handle a load balancer so that deployments inside your cluster receive equal amounts of traffic.
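For a taste of the Ingress idea from that list, here's a minimal sketch of a manifest that routes HTTP traffic for a hypothetical host to the razekube service you created earlier (the host name is an assumption; field names follow the standard networking.k8s.io/v1 Ingress API):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: razekube-ingress
spec:
  rules:
    - host: razekube.example.com   # hypothetical host name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: razekube     # the service you exposed earlier
                port:
                  number: 8080
```

Note that an Ingress only takes effect once an Ingress controller (such as ingress-nginx) is installed in the cluster.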
And because you worked with a Docker container this whole time, this approach isn’t limited to Swift: it works for any app that you can put into Docker ;].
Cleaning Up
Rather than dive deeper into more of those capabilities, you’re going to learn how to consolidate all of the steps you’ve run above with Helm! Before proceeding to work with Helm, use kubectl to clean up your cluster like so:
kubectl delete service razekube
kubectl delete deployment razekube
When this is done, type kubectl get pods to ensure that you have no resources in flight.
Helm: The Kubernetes Package Manager
Helm is a package manager that is designed to simplify deploying simple or complex apps to Kubernetes. Helm has two components that you need to know about:
- The client, referred to as helm on your command line, which dictates deployment commands to your Kubernetes cluster.
- The server, referred to as tiller, which takes commands from helm and forwards them to Kubernetes.
Helm uses YAML and JSON files to manage deployments to Kubernetes; these bundles of files are called Charts. One benefit of using the Kitura CLI is that the app generator makes these chart files for you!
What’s in a Chart?
In Terminal, make sure you are in the root directory of your RazeKube app, and type the following command:
cat chart/razekube/values.yaml
Notice the format of this document, particularly the top component:
replicaCount: 1
revisionHistoryLimit: 1
image:
repository: razekube-swift-run
tag: 1.0.0
pullPolicy: Always
resources:
requests:
cpu: 200m
memory: 300Mi
livenessProbe:
initialDelaySeconds: 30
periodSeconds: 10
service:
name: swift
type: NodePort
servicePort: 8080
In this one file, you are defining:
- The number of replicas you want to have for your deployment.
- The Docker image for the deployment you want to make.
- The service and port you want to create to expose the deployment.
Remember how you had to configure each of those things individually with kubectl commands? This file makes it possible to do all of that with one swift command!
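Under the hood, the chart's templates read these values when you deploy. Here's a simplified sketch of what a template like chart/razekube/templates/deployment.yaml might contain; the file the Kitura generator actually produces will differ in detail:

```yaml
# Sketch of how a chart template consumes values.yaml (simplified)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: razekube-deployment
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Values.service.name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.service.servicePort }}
```

This is why editing values.yaml is all it takes to change your deployment: the templates stay fixed while the values vary.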
Now you’re going to configure Helm to work with your Kubernetes cluster, and make quick work of your deployment commands!
Setting Up Helm and Tiller
Good news — Helm is already technically installed, thanks to the Kitura CLI! However, your Kubernetes cluster isn’t yet set up to receive commands from Helm, which means you need to set up Tiller.
In Terminal, enter the following command:
helm init
If you see output that ends with “Happy Helming!”, then you’re ready to go. Type helm version and make sure that your client and server versions match, like so:
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Deploying RazeKube With Helm
Next, you’re going to make two changes to your chart for RazeKube. Navigate to chart/razekube and open values.yaml in a text editor of your choice.
Update the replicaCount and pullPolicy values in this file so that it looks like so:
replicaCount: 5
revisionHistoryLimit: 1
image:
repository: razekube-swift-run
tag: 1.0.0
pullPolicy: IfNotPresent
Here’s what you just updated:
- Rather than deploy one replica of your app at first, then scaling to five, you are writing that your desired state should contain five replicas of your deployment.
- The IfNotPresent pull policy tells Kubernetes to pull the container image from a remote registry only if it isn’t already present in your local Docker file system. You could point this at any remote image you have access to, but since this image is available locally, you’re choosing to use the local copy.
Save this file, and navigate back to the root directory of your app in Terminal. Enter the following command to do everything at once:
helm install -n razekube-app chart/razekube/
Behold Your Charted RazeKube!
After you run this command, Helm gives you output that should look very similar to what you get when using kubectl to check your app status:
NAME: razekube-app
LAST DEPLOYED: Wed Jul 10 17:29:15 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
razekube-application-service NodePort 10.105.48.55 <none> 8080:32086/TCP 1s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
razekube-deployment 5 0 0 0 1s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
razekube-deployment-7f5694f847-9qnzc 0/1 Pending 0 0s
razekube-deployment-7f5694f847-9zfb8 0/1 Pending 0 0s
razekube-deployment-7f5694f847-dfp9v 0/1 ContainerCreating 0 0s
razekube-deployment-7f5694f847-pxn67 0/1 Pending 0 0s
razekube-deployment-7f5694f847-v5bq2 0/1 Pending 0 0s
Look at you! That was quite a bit easier than all those kubectl commands, wasn’t it? It’s important to know how kubectl works, but it’s equally important to know that you can combine all of the work those commands do into a Helm chart.
In my example, look at the port that was assigned to the service: 32086. This means that my app should be available at localhost:32086. Open a web browser and navigate to the app at the port open on your service:
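Rather than reading the port out of the table by eye, you can pull it out with a little text processing. This sketch operates on one captured line of the service output above so it runs without a cluster (your assigned NodePort will differ; against a real cluster you'd feed it the output of kubectl get service):

```shell
# One line of the `kubectl get service` output above, captured as a string
# so this snippet runs without a live cluster
svc='razekube-application-service   NodePort   10.105.48.55   <none>   8080:32086/TCP   1s'

# PORT(S) is field 5; the NodePort is the number between ':' and '/'
port=$(echo "$svc" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
echo "http://localhost:$port"
```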
Nice work! Now, just like before, access the /uhoh route on your port and notice how the app crashes. Then access your homepage or the /kubed?number=4 route again, and notice that your app is back up and running!
In Terminal, enter the command helm list; your output should look like this:
NAME          REVISION  UPDATED                   STATUS    CHART           APP VERSION  NAMESPACE
razekube-app  1         Wed Jul 10 17:29:15 2019  DEPLOYED  razekube-1.0.0               default
This shows you the status of your deployments with Helm.
Now, run kubectl get all to look at your output:
NAME READY STATUS RESTARTS AGE
pod/razekube-deployment-7f5694f847-9qnzc 1/1 Running 3 7m
pod/razekube-deployment-7f5694f847-9zfb8 1/1 Running 2 7m
pod/razekube-deployment-7f5694f847-dfp9v 1/1 Running 2 7m
pod/razekube-deployment-7f5694f847-pxn67 1/1 Running 2 7m
pod/razekube-deployment-7f5694f847-v5bq2 1/1 Running 3 7m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h
service/razekube-application-service NodePort 10.105.48.55 <none> 8080:32086/TCP 7m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/razekube-deployment 5 5 5 5 7m
NAME DESIRED CURRENT READY AGE
replicaset.apps/razekube-deployment-7f5694f847 5 5 5 7m
Helm gives you a powerful tool that makes deploying and managing your apps much easier than if you only had kubectl. Again, it’s still important to have a working understanding of kubectl so you can configure individual components of your app. More importantly, you can use these commands to learn how to automate your deployments with Helm too!
To clean up, type helm delete razekube-app, then use either helm list or kubectl get all to check the status of everything after it’s been cleaned up.