Helm is a Kubernetes-native deployment mechanism that lets you script and template the deployment of an application on Kubernetes. In addition, Helm keeps a release history of your deployments so you can easily roll back as needed.

Install Helm

The official documentation details two different installation methods:
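The exact methods covered there may change over time, but as one common example, the Helm project publishes an installer script that fetches the latest release. A typical Linux or macOS installation looks like this:

    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3.sh
    chmod 700 get_helm.sh
    ./get_helm.sh

    # Confirm the client is installed and on your PATH
    helm version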

How Does Helm work?

Helm uses Go templates to make “charts” that are flexible and reusable. While not as common as, say, Jinja2 templates, Go templates are well documented and easy to understand. You don’t need any Go experience to create a chart, nor Helm experience to use one.

Helm version 2 required installing a component called Tiller in your cluster. While that worked, it added complexity and required a privileged pod running in your environment. Helm version 3 instead talks directly to the Kubernetes API server, and it is fully supported in OpenShift 4.

To create your first chart, Helm provides the helm create <project name> command, which creates all the required files:
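For example, running helm create my-chart (the name is arbitrary) produces a directory layout similar to the following; the exact file list depends on your Helm version:

    my-chart/
    ├── Chart.yaml
    ├── charts/
    ├── templates/
    │   ├── NOTES.txt
    │   ├── _helpers.tpl
    │   ├── deployment.yaml
    │   ├── hpa.yaml
    │   ├── ingress.yaml
    │   ├── service.yaml
    │   ├── serviceaccount.yaml
    │   └── tests/
    │       └── test-connection.yaml
    └── values.yaml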

It’s likely you’ll customize these files to make your life easier or to suit the application you’re deploying.

Chart.yaml

This file contains the API version, the name of the Helm chart, the type of deployment (application or library), the Helm chart version, and finally the version of the application you are deploying. This is metadata that allows for the helm upgrade command to work appropriately. The sample Chart.yaml looks like this:
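A freshly generated Chart.yaml (your chart name, description, and versions will differ) looks roughly like this:

    apiVersion: v2
    name: sample-helm-chart
    description: A Helm chart for Kubernetes
    type: application
    version: 0.1.0
    appVersion: "1.16.0"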

values.yaml

To make deployments repeatable, Helm lets you use variables in place of hard-coded values: a variable is declared in one location and then referenced throughout the chart. The values.yaml file holds the default values for your chart. It includes things like the image information, replica counts, ingress/service definitions, and so on. It’s likely that you’ll add to this file. Because this file is over 80 lines by default, it isn’t copied here in full, but you can refer to the end of this article to see a sample application.

Here are some areas of the values.yaml file that you may want to examine in more detail.

Service Accounts

A service account is an OpenShift Container Platform account that allows a component to directly access the API. Every aspect (including security components) inside OpenShift is controlled by an API call.

Every pod that runs inside a cluster requires security considerations. OpenShift has a more restrictive security stance by default than upstream Kubernetes. To help manage these security settings, a service account is associated with a pod or deployment, and a default service account is issued to each project upon creation. The service account object in an OpenShift cluster can have a wide variety of permissions applied to it. This is particularly useful when you have an image from an ISV that has specific requirements. For example, the official GitLab container image requires additional permissions that the default service account doesn’t have. Without going into the details of how you determine security constraints, it is very important when running an OpenShift cluster to apply the proper permissions to a service account so that an application can be deployed successfully.

Why is this important to the discussion of Helm Charts? Well, the default files for a Helm chart create a unique service account for your application. If you’re going to adjust security constraints with a Helm Chart, then you need to know which service account to use.

Left unconfigured, a Helm chart generates a service account named after the full name derived from values.yaml. Rather than defining this in a plain YAML file, Helm keeps a template for it in the _helpers.tpl file. Here’s the relevant section:
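In a freshly generated chart named sample-helm-chart, that section looks roughly like this (whitespace details vary slightly between Helm versions):

    {{/*
    Create the name of the service account to use
    */}}
    {{- define "sample-helm-chart.serviceAccountName" -}}
    {{- if .Values.serviceAccount.create }}
    {{- default (include "sample-helm-chart.fullname" .) .Values.serviceAccount.name }}
    {{- else }}
    {{- default "default" .Values.serviceAccount.name }}
    {{- end }}
    {{- end }}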

securityContext

There are sections for security in the values.yaml file. For sample deployments, podSecurityContext and securityContext can be left at their defaults. If your application or environment has special security requirements, you must adjust these sections.
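In the generated values.yaml both contexts start out empty, with commented-out examples you can enable; a typical default looks like this:

    podSecurityContext: {}
      # fsGroup: 2000

    securityContext: {}
      # capabilities:
      #   drop:
      #   - ALL
      # readOnlyRootFilesystem: true
      # runAsNonRoot: true
      # runAsUser: 1000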

Networking

This is the section that is most frequently adjusted. For example, by default the containerPort and containerPortName, along with the livenessProbePath and readinessProbePath, are not configured as variables in the built-in template. For the sample application deployed in this article, I have added the following variables to the values.yaml file:
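A sketch of what those additions might look like; the port and paths below are placeholders, so use whatever your application actually exposes:

    # Added to values.yaml so the deployment template can reference them
    containerPort: 8080
    containerPortName: http
    livenessProbePath: /
    readinessProbePath: /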

You also probably need to alter the service and ingress sections of the values.yaml file in order to set the ingresses, service ports, service type, TLS options, and so on.

Resources

This section handles resource requests and limits, and its comments include recommendations. You can set requests and limits just as you would during a normal application deployment, defining the resources your application needs in order to be deployed successfully in the cluster.
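For example, uncommenting and adjusting the generated block might look like this (the values are illustrative only):

    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 256Mi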

Application placement

Where and how your application is placed in the cluster is controlled by four sections:

  1. autoscaling
  2. nodeSelector
  3. tolerations
  4. affinity

Detailing the function of each of these is outside the scope of this article, but they’re familiar concepts in Kubernetes administration; if you don’t know them already, you’ll pick them up with a little time and practice.

Working with templates

Arguably, the most versatile use of Helm is the ability to create templates. Combined with the option to override some or all of the options in the values.yaml, Helm templates can be quite reusable. Let’s start by looking at the default deployment.yaml:
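An abridged version of the generated file is shown below (trimmed for length, and your Helm version may generate slightly different content; the original also includes probes, affinity, and tolerations):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ include "sample-helm-chart.fullname" . }}
      labels:
        {{- include "sample-helm-chart.labels" . | nindent 4 }}
    spec:
      {{- if not .Values.autoscaling.enabled }}
      replicas: {{ .Values.replicaCount }}
      {{- end }}
      selector:
        matchLabels:
          {{- include "sample-helm-chart.selectorLabels" . | nindent 6 }}
      template:
        metadata:
          labels:
            {{- include "sample-helm-chart.selectorLabels" . | nindent 8 }}
        spec:
          serviceAccountName: {{ include "sample-helm-chart.serviceAccountName" . }}
          securityContext:
            {{- toYaml .Values.podSecurityContext | nindent 8 }}
          containers:
            - name: {{ .Chart.Name }}
              securityContext:
                {{- toYaml .Values.securityContext | nindent 12 }}
              image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              ports:
                - name: http
                  containerPort: 80
                  protocol: TCP
              resources:
                {{- toYaml .Values.resources | nindent 12 }}
          {{- with .Values.nodeSelector }}
          nodeSelector:
            {{- toYaml . | nindent 8 }}
          {{- end }}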

There’s a lot of Go templating magic here. Some highlights:

.: Any time you see a dot by itself (such as toYaml .), it’s shorthand for “the current scope.” You can have nested templates with a different scope than the main template; you might want to change values in a template you’re calling but not in the deployment.yaml template (or the other way around). The dot indicates which scope you’re acting on. This is similar to Unix-like operating systems, where a single dot (.) refers to the current directory.

nindent #: This tells the templating language to indent the rendered value by # spaces, prepending a newline first so the indented block starts on its own line below the key.

include: An include statement is used when calling another template inside a template. For example, sample-helm-chart.selectorLabels is a template defined in the _helpers.tpl file. An include statement calls the template and renders its contents inside the deployment.yaml template. An include statement allows for smaller, more modular templates.

with: This keyword does two things. First, it acts as an if statement: if the object is not empty, the block is rendered. Second, it rescopes the dot: inside the block, . refers to that object, so piping it through toYaml renders all of its entries, much as a for loop would. Look at this section from the template:
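Here is that section as it appears in the sketch above:

    {{- with .Values.nodeSelector }}
    nodeSelector:
      {{- toYaml . | nindent 8 }}
    {{- end }}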

This snippet says that if .Values.nodeSelector is not empty, the template renders the nodeSelector: key and then renders every nodeSelector entry defined in the values.yaml file.

A sample application, end-to-end

Now that you have a basic understanding of how Helm works in theory, it’s time to deploy a series of simple applications based on my Kubernetes network policy demo.

For this, you need two namespaces and four instances of the same application. Create the two namespaces before moving forward:
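Only the simpson project is named explicitly later in this article, so I’ll use bouvier as a stand-in name for the sisters’ project; substitute your own names if you prefer:

    oc new-project simpson
    oc new-project bouvier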

At the end of this process, you’ll have a traffic flow that looks like this:

Essentially, you’re trying to prevent Patty and Selma from talking to Homer. Marge is able to talk to her sisters (Patty and Selma), and Homer is able to initiate a conversation with Patty or Selma, but they can’t talk to him first.

Create a new helm chart:
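The chart name below matches the sample-helm-chart templates referenced earlier:

    helm create sample-helm-chart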

The default values.yaml that ships with the chart has the following contents (comments removed for brevity):
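The original listing isn’t reproduced here, but based on the variables referenced throughout this article it looks roughly like the following sketch. The image repository, ports, and the network-policy toggle names (denyAll, allowIngress, allowSameNS, allowFrom, allowFromNamespace) are assumptions; adjust them to match your own chart:

    replicaCount: 1

    image:
      repository: quay.io/example/sample-app   # assumption: your application image
      pullPolicy: IfNotPresent
      tag: ""

    imagePullSecrets: []
    nameOverride: ""
    fullnameOverride: ""

    serviceAccount:
      create: true
      annotations: {}
      name: ""

    podAnnotations: {}
    podSecurityContext: {}
    securityContext: {}

    # Networking variables added earlier in this article
    containerPort: 8080
    containerPortName: http
    livenessProbePath: /
    readinessProbePath: /

    service:
      type: ClusterIP
      port: 8080

    ingress:
      enabled: false
      annotations: {}
      hosts:
        - host: chart-example.local
          paths:
            - path: /
              pathType: ImplementationSpecific
      tls: []

    # Toggles that control which NetworkPolicy objects are rendered (names are assumptions)
    denyAll: false
    allowIngress: false
    allowSameNS: false
    allowFrom: false
    allowFromNamespace: ""

    resources: {}

    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 100
      targetCPUUtilizationPercentage: 80

    nodeSelector: {}
    tolerations: []
    affinity: {}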

The only other file I’ve edited in this project is templates/deployment.yaml, which has the following familiar content:
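The full file isn’t repeated here; the interesting part is the container section, where the hard-coded port and probe paths are replaced with the variables added to values.yaml. A sketch:

    containers:
      - name: {{ .Chart.Name }}
        securityContext:
          {{- toYaml .Values.securityContext | nindent 12 }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
          - name: {{ .Values.containerPortName }}
            containerPort: {{ .Values.containerPort }}
            protocol: TCP
        livenessProbe:
          httpGet:
            path: {{ .Values.livenessProbePath }}
            port: {{ .Values.containerPortName }}
        readinessProbe:
          httpGet:
            path: {{ .Values.readinessProbePath }}
            port: {{ .Values.containerPortName }}
        resources:
          {{- toYaml .Values.resources | nindent 12 }}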

To add the network policy bits, this project includes four new network_policy_*.yaml files. The network_policy_allow_from_ns.yaml file has the following content:
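The original file isn’t reproduced here, but a minimal sketch matching the description could look like the following. The selectors and the allowFromNamespace variable are assumptions: this version selects the release’s own pods and allows in every pod from the named peer project.

    {{- if .Values.allowFrom }}
    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: {{ include "sample-helm-chart.fullname" . }}-allow-from-ns
    spec:
      podSelector:
        matchLabels:
          {{- include "sample-helm-chart.selectorLabels" . | nindent 6 }}
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  # assumption: match the peer project by its name label
                  kubernetes.io/metadata.name: {{ .Values.allowFromNamespace }}
    {{- end }}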

The definition is wrapped in an if statement, which prevents the policy from being created when the allowFrom variable is set to false. This policy allows pods from one OpenShift project to reach a different project in its entirety. Essentially, it allows Marge to talk to her sisters.

The next file, network_policy_allow_same_ns.yaml, allows Homer and Marge to talk to each other because they are in the same project:
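A minimal sketch of that policy, guarded by the allowSameNS toggle:

    {{- if .Values.allowSameNS }}
    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: {{ include "sample-helm-chart.fullname" . }}-allow-same-ns
    spec:
      podSelector: {}
      ingress:
        - from:
            - podSelector: {}
    {{- end }}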

This policy is only created if the allowSameNS variable is set to true.

The next policy, network_policy_deny_all.yaml, blocks all communication into a given project. A full treatment of network policies is outside the scope of this document, but the pattern is a common one: start with a deny-all rule, then add exceptions. This rule is what prevents the applications in a project from communicating with anything else.

The contents of the deny-all policy look like this:
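A sketch, guarded by the denyAll toggle:

    {{- if .Values.denyAll }}
    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: {{ include "sample-helm-chart.fullname" . }}-deny-all
    spec:
      podSelector: {}
      ingress: []
    {{- end }}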

The final policy allows the end user to talk to Homer and Marge. If you tried to use the application without this policy, you wouldn’t be able to curl anything in the simpson project, because all traffic is denied apart from the two exceptions defined earlier. To allow the ingress controller to talk to the project, you need to specifically allow ingress. Put the following contents in network_policy_allow_ingress.yaml:
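A sketch of that policy; the network.openshift.io/policy-group: ingress namespace label is the one OpenShift’s documentation uses to identify the default ingress controller’s namespace, while the allowIngress toggle name is an assumption:

    {{- if .Values.allowIngress }}
    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: {{ include "sample-helm-chart.fullname" . }}-allow-from-openshift-ingress
    spec:
      podSelector: {}
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  network.openshift.io/policy-group: ingress
    {{- end }}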

The final folder structure looks like this:
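Approximately (auxiliary files such as NOTES.txt and the tests/ directory are left as generated):

    sample-helm-chart/
    ├── Chart.yaml
    ├── charts/
    ├── templates/
    │   ├── NOTES.txt
    │   ├── _helpers.tpl
    │   ├── deployment.yaml
    │   ├── hpa.yaml
    │   ├── ingress.yaml
    │   ├── network_policy_allow_from_ns.yaml
    │   ├── network_policy_allow_ingress.yaml
    │   ├── network_policy_allow_same_ns.yaml
    │   ├── network_policy_deny_all.yaml
    │   ├── service.yaml
    │   ├── serviceaccount.yaml
    │   └── tests/
    │       └── test-connection.yaml
    └── values.yaml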

Using an override file

You can override the default values.yaml by redefining some or all of its variables in a separate file. Patty’s and Selma’s override files are almost identical:
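The original override files aren’t reproduced here; a hypothetical sketch of Patty’s (Selma’s differs only in the name) might be as small as this:

    # patty-override.yaml (hypothetical; variable names must match your values.yaml)
    fullnameOverride: patty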

This overrides only the variables listed in the override file, while leaving the rest of the variables defined in the values.yaml intact.

Do a dry run to make sure everything is set up correctly. The syntax is:
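For example (the release name, chart path, and override file name are placeholders):

    helm install patty ./sample-helm-chart -f patty-override.yaml -n bouvier --dry-run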

Once you’re happy that everything is configured without error, deploy Patty and Selma:
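For example, assuming the bouvier project and the override file names used above:

    helm install patty ./sample-helm-chart -f patty-override.yaml -n bouvier
    helm install selma ./sample-helm-chart -f selma-override.yaml -n bouvier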

You see output similar to this:
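The exact values differ, but Helm prints a release summary along these lines:

    NAME: selma
    LAST DEPLOYED: <timestamp>
    NAMESPACE: bouvier
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None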

Homer and Marge’s overrides are a little more complicated. The simpson namespace is where most of the network policies reside. Homer’s override file looks like this:
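A hypothetical sketch, reflecting the toggles described below:

    # homer-override.yaml (hypothetical; toggle names must match your values.yaml)
    fullnameOverride: homer
    denyAll: true        # block all traffic into the simpson project by default
    allowIngress: true   # except traffic from the OpenShift ingress controller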

Because the network policies are created at the project level, it becomes problematic to have Helm create policies for both Homer and Marge. In addition, if you want to reuse this pattern, it’s helpful to understand how the “toggles” work. In Homer’s case, you’re going to apply the allowIngress policy as well as the denyAll policy. In reality, it doesn’t matter whether Marge had all the policies applied when creating her objects. However, for the sake of showing complex options, I split the policies between Homer and Marge.

Marge’s override file is a bit more involved, because it requires some labels to identify how traffic can flow:
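Another hypothetical sketch; the allowFromNamespace label value is a stand-in for whatever your network policies actually match on:

    # marge-override.yaml (hypothetical)
    fullnameOverride: marge
    allowSameNS: true            # Homer and Marge share the simpson project
    allowFrom: true              # let the sisters' project reach Marge
    allowFromNamespace: bouvier  # label value used by network_policy_allow_from_ns.yaml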

After the override files are created, run the helm install command with the -f flag to use your override files:
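For example:

    helm install homer ./sample-helm-chart -f homer-override.yaml -n simpson
    helm install marge ./sample-helm-chart -f marge-override.yaml -n simpson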

You can now curl the applications to confirm that Patty and Selma can talk to Marge and to each other, but not to Homer:
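A sketch of the check, run from inside one of the sisters’ pods; the service names and port are assumptions based on the overrides above, and curl must be available in the container image:

    oc -n bouvier rsh deployment/patty
    curl http://selma.bouvier.svc.cluster.local:8080   # succeeds
    curl http://marge.simpson.svc.cluster.local:8080   # succeeds
    curl http://homer.simpson.svc.cluster.local:8080   # blocked (times out)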

Homer can talk to all of the applications you just deployed:
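And from Homer’s pod, under the same assumptions:

    oc -n simpson rsh deployment/homer
    curl http://marge.simpson.svc.cluster.local:8080   # succeeds
    curl http://patty.bouvier.svc.cluster.local:8080   # succeeds
    curl http://selma.bouvier.svc.cluster.local:8080   # succeeds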

Get the code

This sample application was written in Python using CherryPy. All of the source code for this walkthrough can be found in my Git repository. There are endpoints for /index/demo, and /metrics, and tracing has been enabled using the OpenTelemetry project.

The application itself is a fairly simple concept, but it has a lot of moving parts to get working properly in OpenShift. There are several alternative paths that could have been used to accomplish the same outcome, so don’t get hung up on the specifics. Instead, use this walkthrough as a jumping-off point on your journey to learn more about Helm and the benefits it offers when dealing with deployments in the Kubernetes and OpenShift ecosystems.
