In my previous article, I demonstrated how to use Podman to create and test a YAML manifest from a running pod and its containers. In this tutorial, I’m going to show you how to deploy a YAML manifest for a Confluence Server pod on a Kubernetes-In-Docker (KIND) cluster.


To follow along with this tutorial, you need an x64 Linux instance (physical or virtual) with Podman installed and configured. You also need the YAML manifest created in the previous article. If you haven’t generated the YAML manifest yet, head over to that article and do it now!

Finally, you need a Kubernetes-In-Docker (KIND) cluster, or another Kubernetes cluster, to deploy to. If you need to prepare your host to deploy a KIND cluster, that’s covered in my Deploying a Kubernetes-In-Docker (KIND) Cluster Using Podman on Ubuntu Linux tutorial.

Deploying a Kubernetes-In-Docker (KIND) Cluster

If you’re already running a Kubernetes cluster, you’re ready to start and can skip this section. Assuming you’re running KIND, you need to create a cluster to deploy to. I deploy KIND using Podman, rootlessly, but you can use Docker if you prefer.

First, query for existing KIND clusters:
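A sketch of this step, assuming the kind CLI is on your PATH:

```shell
# List any KIND clusters already present on this host
kind get clusters
```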

Should you have an existing cluster, you can either deploy to it, or delete it and deploy a new KIND cluster. The choice is yours.

Assuming you have to create a new cluster:
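Creating a cluster might look like this; the KIND_EXPERIMENTAL_PROVIDER variable tells kind to drive Podman instead of Docker (omit it if you're using Docker):

```shell
# Create a default cluster (named "kind") using Podman as the provider
KIND_EXPERIMENTAL_PROVIDER=podman kind create cluster
```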

Verify that your new cluster has been deployed:
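Listing clusters again should now show the new one:

```shell
# The new cluster should appear in the list
kind get clusters
```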

Alternatively, you can verify your cluster is up and running with kubectl:
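For example (the context name kind-kind assumes the default cluster name):

```shell
# Show the API server and CoreDNS endpoints for the KIND cluster
kubectl cluster-info --context kind-kind
```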

We can see all the pieces that make up your Kubernetes cluster-in-a-container:
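Listing pods across every namespace shows the control-plane components running inside the KIND node container:

```shell
# List all pods in all namespaces, including kube-system components
kubectl get pods --all-namespaces
```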

You now have a Kubernetes cluster. Time to deploy your pod!

Deploy a Confluence Server pod to Kubernetes

With a Kubernetes cluster running, you can deploy your manifest to it. First, ensure you’ve got the YAML manifest from the previous article:
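Assuming the manifest was saved as confluence-pod.yaml (the filename here is an assumption; use whatever name you gave it in the previous article), review it:

```shell
# Display the manifest generated by podman generate kube
cat confluence-pod.yaml
```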

Looks good. Deploy the confluence-pod pod using kubectl:
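Again assuming the manifest is named confluence-pod.yaml:

```shell
# Create the pod from the generated manifest
kubectl create -f confluence-pod.yaml
```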

Check whether the pod is running using kubectl. Spoiler: it won’t be, and I’ll explain why shortly.
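```shell
# Check the pod's status
kubectl get pods
```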

Give it a minute to see whether it becomes ready. You can use the watch command to keep an eye on the pod:
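```shell
# Re-run 'kubectl get pods' every two seconds (watch's default interval)
watch kubectl get pods
```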

If you watch, you’ll notice that the pod stays in ContainerCreating status and never makes it to Running. What could be wrong?

Debugging a Kubernetes YAML manifest

A good way to get to the bottom of an issue is to check the logs. You can use kubectl to check the pod logs, as well as check the logs for the containers in the pod:
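A sketch of the pod-level check; because this pod has more than one container, kubectl logs needs either a container name or the --all-containers flag:

```shell
# Fetch logs from every container in the pod at once
kubectl logs confluence-pod --all-containers
```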

Informative, but not terribly helpful at the pod level. Now look at the container logs:
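Checking the Confluence server container specifically; the container name confluence is an assumption here, so substitute the name from your generated manifest:

```shell
# Logs for the Confluence server container (name is an assumption)
kubectl logs confluence-pod -c confluence
```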

Checking the logs for the confluence-postgres container:
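```shell
# Logs for the PostgreSQL container
kubectl logs confluence-pod -c confluence-postgres
```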

Again, informative, but not terribly helpful. So much for the logs.

Next, look at the output provided by the kubectl describe command. This command provides a single source of truth brimming with information about pods.
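```shell
# Dump the full state of the pod; the Events section at the bottom
# is usually where scheduling and volume errors show up
kubectl describe pod confluence-pod
```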

The Events section reveals that there are issues with the volumes. There are no /home/tdean/confluence/site1/data and /home/tdean/confluence/site1/database directories on the KIND container node.

It looks like the storage strategy for this pod needs to change!

Change the storage volumes on Kubernetes

First, delete the existing pod:
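```shell
# Remove the failed pod
kubectl delete pod confluence-pod
```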

It might take a minute, so let it run. When it returns your prompt, verify your work:
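```shell
# Confirm the pod is gone
kubectl get pods
```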

Next, edit the YAML manifest and make the volume configuration Kubernetes-friendly. There are a lot of volume types supported by Kubernetes, as detailed in the Kubernetes documentation. For this demonstration, go with the tried-and-true emptyDir method. Change the volume type to emptyDir, with a maximum size of 1Gi, and clean up the volume name for convenience. Here’s the changed section:
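The reworked volumes section might look like the sketch below; the volume names here are placeholders, so make them match the volumeMounts in your container specs:

```yaml
  volumes:
  - name: confluence-data
    emptyDir:
      sizeLimit: 1Gi
  - name: confluence-database
    emptyDir:
      sizeLimit: 1Gi
```

Note that emptyDir volumes are ephemeral: the data disappears when the pod is deleted, which is fine for a demonstration but not for a production Confluence server.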

The full file after the changes:

OK, changes have been made. Time to test them out. Create a new confluence-pod pod, using kubectl:
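Assuming the same confluence-pod.yaml filename as before:

```shell
# Create the pod from the updated manifest
kubectl create -f confluence-pod.yaml
```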

Get the status of the pod:
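```shell
kubectl get pods
```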

If you watch it, you’ll see the status change to Error:

See what kubectl describe reveals about the pod:
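```shell
kubectl describe pod confluence-pod
```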

It looks like the volumes are sorted:

Nothing else seems to be the issue here, so look at the pod logs:
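```shell
# The database container is the one in Error state, so start there
kubectl logs confluence-pod -c confluence-postgres
```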

We can see that PostgreSQL is complaining that the POSTGRES_PASSWORD environment variable is missing. That means this value must be supplied in the manifest. For more information on defining environment variables for a container, read the Kubernetes documentation.

Edit your YAML manifest again, this time adding the environment variables in the container spec for the confluence-postgres container:
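The added env block might look like this; the password value is an example only, so pick your own credentials (and consider a Kubernetes Secret rather than a plain value for anything beyond a demo):

```yaml
    env:
    - name: POSTGRES_PASSWORD
      value: changeme   # example value only
```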

Delete the old pod:
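```shell
kubectl delete pod confluence-pod
```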

Create a new confluence-pod pod using kubectl, and then verify:
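```shell
# Create the pod from the manifest with the env block added
kubectl create -f confluence-pod.yaml
```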

Verify your work:
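```shell
kubectl get pods
```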

Success! The Confluence pod is up and in the Running status.

You can get information about your confluence-pod pod:
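The -o wide output includes the pod IP, which you'll need for the curl test below:

```shell
# Show extended pod information, including its IP and node
kubectl get pod confluence-pod -o wide
```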

Because your confluence-pod pod is only reachable inside your KIND cluster, you must use a container to run a curl test. The curlimages/curl container image ships with the curl command included. Deploy it in a pod, open a shell in it, and run curl against the status URL for your Confluence server, which is http://<k8s_node_ip>:8290/status.

First, launch a curlpod pod, using the curlimages/curl container image:
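One way to do this is with kubectl run; the --rm flag removes the pod when you exit the shell:

```shell
# Launch an interactive, throwaway pod with curl available
kubectl run curlpod --image=curlimages/curl -it --rm --restart=Never -- sh
```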

Once your curl container is launched and you’re in the shell, try a curl command against the status URL:
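```shell
# Replace <k8s_node_ip> with the IP from 'kubectl get pod -o wide'
curl http://<k8s_node_ip>:8290/status
```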

Awesome! The status of {"state":"FIRST_RUN"} is exactly what you wanted to see!

Cleaning up

Time to clean up. Delete the pods, and then verify:
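```shell
# Delete the Confluence pod (the curlpod is already gone if you used --rm)
kubectl delete pod confluence-pod
kubectl get pods
```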

You can also delete your KIND cluster:
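```shell
# Tear down the default KIND cluster
kind delete cluster
```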

With a little tweaking and troubleshooting, you’ve deployed your Confluence pod in Kubernetes. How cool is that?


From a local pod, to a YAML manifest, to Kubernetes, in just three sessions. Deploying a Confluence server doesn’t have to be a pain. And when something goes wrong, you know how to troubleshoot your way to a resolution.

What about that systemd unit file you created for the confluence-pod pod in an earlier tutorial? When do you get to use that? In the final article in this series, I’m going to circle back around and show you how to deploy your confluence-pod pod and your systemd unit file so that your pod is managed by systemd.



Tom Dean

Just a guy, learning all the Kubernetes things and automating the world!

