In my previous article, I demonstrated how to install Kubernetes using k3s on a Raspberry Pi cluster. After you complete the 10 steps in that article, you have a tiny Pi-based cloud entirely at your disposal. You can use it to learn about Kubernetes, cloud architecture, cloud-native development, and so on.

Now that you have a Kubernetes cluster running, you can start running applications in containers. That’s what Kubernetes does: it orchestrates and manages containers. There’s a specific sequence to launching containers within Kubernetes, because there are lots of moving parts, and those parts have to reference each other.

  • A namespace is a “project space” of Kubernetes.
  • A deployment lets you manage pods running containers.
  • A container runs within a namespace, and by default is unable to access a namespace other than its own.
  • A pod is a group of containers. By spawning new pods, you scale your application.
  • A service is the front-end to a deployment. A deployment can be running quietly in the background, and it’ll never see the light of day without a service pointing to it. And a service is only available within your cluster until you choose to expose it to the outside world.

1. Create a namespace

First, create a namespace called ktest for your test application.
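Using the kubectl bundled with k3s:

  k3s kubectl create namespace ktest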

2. Create a deployment

The Kubernetes project provides an example Nginx deployment definition:
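  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-deployment
  spec:
    selector:
      matchLabels:
        app: nginx
    replicas: 2
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
          - containerPort: 80

(This follows the stateless-application example in the Kubernetes documentation; the exact image tag and replica count are illustrative, so use whatever the current example specifies. Save it as deployment.yaml, a filename I've chosen for this walkthrough.)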

This gives the deployment the metadata name nginx-deployment. It also creates a label called app, set to nginx. This metadata is used by the selectors for pods and services later.

For now, create a deployment using the example:
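  k3s kubectl -n ktest create -f deployment.yaml

(This assumes the definition above was saved as deployment.yaml, and creates the deployment in the ktest namespace.)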

Confirm that the deployment has generated and started new pods:
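  k3s kubectl -n ktest get deployments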

See the pods labelled with app: nginx:
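  k3s kubectl -n ktest get pods -l app=nginx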

3. Create a service

Now you must connect the Nginx instance with a Kubernetes Service.
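Here's a minimal Service definition for that. I'm naming the service nginx-deployment, to match the endpoint objects discussed below:

  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-deployment
  spec:
    selector:
      app: nginx
    ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Save this as service.yaml (another filename of my choosing) and create it in your test namespace:

  k3s kubectl -n ktest create -f service.yaml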

The selector element is set to app: nginx to match pods running the Nginx application. Without this selector, there would be nothing correlating your service with the pods running the application you want to serve.

Verify that the service exists:
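  k3s kubectl -n ktest get service nginx-deployment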

A Service is backed by a group of pods, which are exposed through endpoints. Kubernetes continually POSTs updates to an Endpoints object, also named nginx-deployment, listing the pods that match the Service's selector. Should a pod die, it's removed from the endpoints, and new pods matching the same selector are added. This is how Kubernetes ensures your application's uptime.

To see more information:
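  k3s kubectl -n ktest describe service nginx-deployment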

Notice that the Endpoints value is set to a series of IP addresses. This confirms that instances of Nginx are accessible. The IP of the service is set to 10.43.251.104 in this example, and it’s running on port 80/TCP. That means you can log onto any of your nodes (“inside the cluster”) to interact with your Nginx app.
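For example, from a shell on one of your nodes:

  curl 10.43.251.104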

This doesn’t work from your control plane yet, only from a node.

Nginx is accessible, at least from inside your cluster.

The only thing left to do now is to route traffic from the outside world.

4. Expose the deployment

For a deployed application to be visible outside your cluster, you must route network traffic to it. There are many tools that provide that functionality, and one of them is MetalLB.

Install MetalLB on your control plane, replacing XX.YY with the latest version:
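This article configures MetalLB with a ConfigMap, which is how MetalLB's older (pre-v0.13) releases were configured, so the manifest URLs below assume that release layout:

  k3s kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/vXX.YY/manifests/namespace.yaml
  k3s kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/vXX.YY/manifests/metallb.yaml

On releases that use memberlist, you also need to create a secret for it:

  k3s kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"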

Determine what network range you want your cluster to use. This must not overlap with what your DHCP server is managing. Create a configuration for MetalLB:
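Here's a layer 2 configuration in the ConfigMap format those older MetalLB releases expect, using the address range from my example (the pool name default is arbitrary):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: metallb-system
    name: config
  data:
    config: |
      address-pools:
      - name: default
        protocol: layer2
        addresses:
        - 10.0.1.1/26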

Save this as metallb.yaml and apply the configuration:
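  k3s kubectl apply -f metallb.yaml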

You now have a ConfigMap for MetalLB, and MetalLB is running. Next, create a load balancer service mapping your deployment’s ports (port 80 in this case, which you can verify with k3s kubectl -n ktest get all). Save this as loadbalance.yaml:
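  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-loadbalancer
    namespace: ktest
  spec:
    type: LoadBalancer
    selector:
      app: nginx
    ports:
    - protocol: TCP
      port: 80
      targetPort: 80

(The service name nginx-loadbalancer is my choice; the selector and ports match the deployment created earlier.)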

This service selects pods in the ktest namespace labelled app: nginx, and maps the container’s port 80 to port 80 of an IP address within your address range (in my example, that’s 10.0.1.1/26, or 10.0.1.1 through 10.0.1.62).
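Apply it:

  k3s kubectl apply -f loadbalance.yaml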

Find out what external IP address it got:
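  k3s kubectl -n ktest get service nginx-loadbalancer

The address appears in the EXTERNAL-IP column.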

Open a web browser and navigate to the external IP address listed (in this example, 10.0.1.3).

It works!

This is the process for running applications and services on your cluster. All you have to do now is decide what you want to run, and start deploying pods.

Author

Klaatu is a Linux geek and podcaster.
