
Deploy a Service

Welcome to the Holos Deploy a Service guide. In this guide we'll explore how Holos helps the platform team work efficiently with a migration team at the fictional Bank of Holos. The migration team is responsible for migrating a service from an acquired company onto the bank's platform. The platform team supports the migration team by providing safe and consistent methods to add the service to the bank's platform.

We'll build on the concepts we learned in the Quickstart guide and explore how the migration team safely integrates a Helm chart from the acquired company into the bank's platform. We'll also explore how the platform team uses Holos and CUE to define consistent and safe structures for Namespaces, AppProjects, and HTTPRoutes for the benefit of other teams. The migration team uses these structures to integrate the Helm chart into the bank's platform safely and consistently in a self-service way, without filing tickets or interrupting the platform team.

What you'll need

Like our other guides, this guide is intended to be useful without needing to run the commands. If you'd like to render the platform and apply the manifests to a real Cluster, complete the Local Cluster Guide before this guide.

important

This guide relies on the concepts we covered in the Quickstart guide.

You'll need the following tools installed to run the commands in this guide.

  1. holos - to build the Platform.
  2. helm - to render Holos Components that wrap Helm charts.
  3. kubectl - to render Holos Components that render with Kustomize.

Fork the Git Repository

If you haven't already done so, fork the Bank of Holos then clone the repository to your local machine.

# Change YourName
git clone https://github.com/YourName/bank-of-holos
cd bank-of-holos

Run the rest of the commands in this guide from the root of the repository.

Component Kinds

As we explored in the Quickstart guide, the Bank of Holos is organized as a collection of software components. There are three kinds of components we work with day to day:

  1. Helm wraps a Helm chart.
  2. Kubernetes uses CUE to produce Kubernetes resources.
  3. Kustomize wraps a Kustomize Kustomization base.

Holos offers common functionality to every kind of component. We can:

  • Mix-in additional resources. For example, an ExternalSecret to fetch a Secret for a Helm or Kustomize component.
  • Write resource files to accompany the rendered Kubernetes manifest. For example, an ArgoCD Application or Flux Kustomization.
  • Post-process the rendered manifest with Kustomize, for example to consistently add common labels.
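As a sketch, a mix-in uses the Resources field we'll see later in this guide. The ExternalSecret name and spec below are hypothetical, made up purely for illustration:

```cue
// Hypothetical sketch: mixing an ExternalSecret into a Helm component.
// The secret name and store are invented for illustration.
Resources: ExternalSecret: "app-credentials": {
	apiVersion: "external-secrets.io/v1beta1"
	kind:       "ExternalSecret"
	metadata: name:      "app-credentials"
	metadata: namespace: "migration"
	spec: secretStoreRef: name: "default"
}
```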
tip

ComponentFields in the Author API describes the fields common to all kinds of components.

We'll start with a Helm component to deploy the service, then compare it to a Kubernetes component that deploys the same service.

Namespaces

Let's imagine the Bank of Holos is working on a project named migration to migrate services from a smaller company they've acquired onto the Bank of Holos platform. One of the teams at the bank will own this project during the migration. Once migrated, a second team will take over ownership and maintenance.

When we start a new project, one of the first things we need is a Kubernetes Namespace so we have a place to deploy the service, set security policies, and keep track of resource usage with labels.

The platform team owns the namespaces component that manages these Namespace resources. The bank uses ArgoCD to deploy services with GitOps, so the migration team also needs an ArgoCD AppProject managed for them.

The platform team makes it easy for other teams to register the Namespaces and AppProjects they need by adding a new file to the repository in a self-service way. Imagine we're on the software development team performing the migration. First, we'll create projects/migration.cue to configure the Namespace and AppProject we need.

package holos

// Platform wide definitions
#Migration: Namespace: "migration"

// Register namespaces
#Namespaces: (#Migration.Namespace): _

// Register projects
#AppProjects: migration: _

Each of the highlighted lines has a specific purpose.

  • Line 4 defines the #Migration CUE struct. The team that currently owns the migration project defines this struct.
  • Line 7 registers the namespace with the namespaces component owned by the platform team. The _ value indicates the value is defined elsewhere in CUE. In this case, the platform team defines what a Namespace is.
  • Line 10 registers the project similar to the namespace. The platform team is responsible for defining the value of an ArgoCD AppProject resource.
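To make the `_` registration work, the platform team's side of the contract can use a CUE pattern constraint so that any registered name produces a complete resource. A minimal sketch, assuming a definition the actual bank-of-holos repository may elaborate further:

```cue
// Hypothetical sketch of the platform team's side of the contract.
// Registering a name with `_` unifies with this pattern constraint
// to produce a complete Namespace resource.
#Namespaces: [Name=string]: {
	apiVersion: "v1"
	kind:       "Namespace"
	metadata: name: Name
}
```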

Render the platform to see how adding this file changes the platform as a whole.

holos render platform ./platform

We can see the changes clearly with git.

git status
git diff deploy

We can see how adding a new file with a couple of lines created the Namespace and AppProject resources the development team needs to start the migration. The development team didn't need to think about the details of what goes into a Namespace or an AppProject resource; they simply added a file expressing that they need these resources.

Because all configuration in CUE is unified, both the platform team and development team can work safely together. The platform team defines the shape of the Namespace and AppProject, the development team registers them.
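Unification also means conflicting definitions fail fast. As a generic CUE illustration (not taken from the bank's repository), two files that disagree on a concrete value cannot be unified:

```cue
// If one team writes:
#Migration: Namespace: "migration"

// and another file elsewhere writes a different concrete value:
#Migration: Namespace: "migrations"

// then evaluation fails with a conflict, along the lines of:
//   #Migration.Namespace: conflicting values "migrations" and "migration"
```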

At the bank, a code owners file can further optimize self-service and collaboration. Files added to the projects/ directory can automatically request approval from the platform team and block merges from other teams.

Let's add and commit these changes.

git add projects/migration.cue deploy
git commit -m 'manage a namespace for the migration project'

Now that we have a Namespace, we're ready to add a component to migrate the podinfo service to the platform.

Helm Component

Let's imagine the service we're migrating was deployed with a Helm chart. We'll use the upstream podinfo Helm chart as a stand-in for the chart we're migrating to the bank's platform. We'll wrap the Helm chart in a Helm component to migrate it onto the bank's platform.

We'll start by creating a directory for the component.

mkdir -p projects/migration/components/podinfo

We use projects/migration so we have a place to add CUE files that affect all migration project components. CUE files for components are easily moved into sub-directories, for example a web tier and a database tier. Starting a project with one components/ sub-directory is a good way to get going and easy to change later.

tip

Components are usually organized into a file system tree reflecting the owners of groups of components. We do this to support code owners and limit the scope of configuration changes.

Next, create the projects/migration/components/podinfo/podinfo.cue file with the following content.

package holos

import ks "sigs.k8s.io/kustomize/api/types"

// Produce a helm chart build plan.
(#Helm & Chart).BuildPlan

let Chart = {
	Name:      "podinfo"
	Version:   "6.6.2"
	Namespace: #Migration.Namespace

	// Necessary to ensure the resources go to the correct namespace.
	EnableKustomizePostProcessor: true
	KustomizeFiles: "kustomization.yaml": ks.#Kustomization & {
		namespace: Namespace
	}

	Repo: name: "podinfo"
	Repo: url:  "https://stefanprodan.github.io/podinfo"

	// Allow the platform team to route traffic into our namespace.
	Resources: ReferenceGrant: grant: #ReferenceGrant & {
		metadata: namespace: Namespace
	}
}

Line 3: We import the type definitions for a Kustomization from the kubernetes project to type check the file we write out on line 15. Type definitions have already been imported into the bank-of-holos repository. When we work with Kubernetes resources we often need to import their type definitions using cue get go or timoni mod vendor crds.

Line 6: This component produces a BuildPlan that wraps a Helm Chart.

Line 9: The name of the component is podinfo. Holos uses the Component's Name as the sub-directory name when it writes the rendered manifest into deploy/. Normally this name also matches the directory and file name of the component, podinfo/podinfo.cue, but holos doesn't enforce this convention.

Line 11: We use the same namespace we registered with the namespaces component as the value we pass to Helm. This is a good example of Holos offering safety and consistency with CUE: if we change the value of #Migration.Namespace, multiple components stay consistent.

Lines 14-15: Unfortunately, the Helm chart doesn't set the metadata.namespace field on the resources it generates, so they would be created in the wrong namespace, which is a security problem. We don't want to modify the upstream chart because that creates a maintenance burden. Instead, we have Holos post-process the Helm output with Kustomize, which ensures all resources go into the correct namespace.

Line 23: The migration team grants the platform team permission to route traffic into the migration Namespace using a ReferenceGrant.

note

Notice this is also the first time we've seen CUE's & unification operator used to type-check a struct.

ks.#Kustomization & { namespace: Namespace }
important

Holos makes it easy for the migration team to mix-in the ReferenceGrant to the Helm chart from the company the bank acquired.

This is a good example of how Holos enables multiple teams to work together efficiently. In this example the migration team adds a resource granting the platform team the access it needs to integrate a service from an acquired company into the bank's platform.

Unification ensures holos render platform fails quickly if the value does not pass type checking against the official Kustomization API spec. We'll see this often in Holos; unification with type definitions makes changes safer and easier.

tip

Quite a few upstream vendor Helm charts don't set the metadata.namespace, creating problems like this. Keep the EnableKustomizePostProcessor feature in mind if you've run into this problem before.
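With this setting, Holos writes the kustomization.yaml we declared in KustomizeFiles next to the rendered Helm output. Conceptually it contains little more than the namespace transformer; this sketch omits the resource entries Holos manages itself:

```yaml
# Sketch of the post-processing kustomization.yaml. Kustomize's
# namespace transformer sets metadata.namespace on every resource.
namespace: migration
```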

Our new podinfo component needs to be registered so it will be rendered with the platform. We could render the new component directly with holos render component, but it's usually faster and easier to register it and render the whole platform. This way we get an early indicator of how it's going to integrate with the whole. If you've ever spent considerable time building something only to have it take weeks to integrate with the rest of your organization, you've probably felt the pain, and then the relief, that integrating early and often brings.

Register the new component by creating platform/migration-podinfo.cue with the following content.

package holos

// Manage on workload clusters only
for Cluster in #Fleets.workload.clusters {
	#Platform: Components: "\(Cluster.name)/podinfo": {
		path:    "projects/migration/components/podinfo"
		cluster: Cluster.name
	}
}
tip
tip

The behavior of files in the platform/ directory is covered in detail in the how platform rendering works section of the Quickstart guide.

Before we render the platform, we want to make sure our podinfo component, and all future migration project components, are automatically associated with the ArgoCD AppProject we managed alongside the Namespace for the project.

Create projects/migration/app-project.cue with the following content.

package holos

// Assign ArgoCD Applications to the migration AppProject
#ArgoConfig: AppProject: #AppProjects.migration.metadata.name

This file provides consistency and safety in a number of ways:

  1. All components under projects/migration/ will automatically have their ArgoCD Application assigned to the migration AppProject.
  2. holos render platform errors out if #AppProjects.migration is not defined; we defined it in projects/migration.cue.
  3. The platform team is responsible for managing the AppProject resource itself; the team doing the migration refers to the metadata.name field defined by the platform team.

Let's render the platform and see if our migrated service works.

holos render platform ./platform

The platform renders without error, and we see a line like "rendered podinfo for cluster workload in 743.459292ms". Let's take a look at the output manifests.

git add .
git status

Adding the platform and component CUE files and rendering the platform results in a new manifest for the Helm output along with an ArgoCD Application for GitOps. Here's what they look like:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: podinfo
  namespace: argocd
spec:
  destination:
    server: https://kubernetes.default.svc
  project: migration
  source:
    path: ./deploy/clusters/workload/components/podinfo
    repoURL: https://github.com/holos-run/bank-of-holos
    targetRevision: main

Let's commit these changes and explore how we can manage Helm values more safely now that the chart is managed with Holos.

git add .
git commit -m 'register the migration project podinfo component with the platform'

And push the changes.

git push
important

We often need to import the chart's values.yaml file when we manage a Helm chart with Holos.

We'll cover this in depth in another guide. For now, the important thing to know is that we use cue import to import a values.yaml file into CUE. Holos caches charts in the vendor/ directory of the component, so you can import values.yaml for your own chart before that guide is published.
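As a rough sketch, cue import turns a chart's values.yaml into plain CUE fields that unify with the rest of the configuration. The fields and values below are hypothetical, stand-ins for whatever the chart actually defines:

```cue
// Hypothetical result of running `cue import` on a chart's values.yaml:
// YAML keys become CUE fields, which we can then constrain or override.
replicaCount: 1
logLevel:     "info"
```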

Expose the Service

We've migrated the Deployment and Service using the podinfo Helm chart we got from the company the bank acquired, but we still haven't exposed the service. Let's see how Holos makes it easier and safer to integrate components into the platform as a whole. The Bank of Holos uses HTTPRoute resources from the newer Gateway API. The company the bank acquired uses older Ingress resources from earlier Kubernetes versions.

The platform team at the bank defines a #HTTPRoutes struct for other teams at the bank to register with. The #HTTPRoutes struct is similar to the #Namespaces and #AppProjects structs we've already seen.

As a member of the migration team, we'll add the file projects/migration-routes.cue to expose the service we're migrating.

Go ahead and create this file with the following content.

package holos

let Podinfo = {
	podinfo: {
		port:      9898
		namespace: #Migration.Namespace
	}
}

// Route migration-podinfo.example.com to port 9898 of Service podinfo in the
// migration namespace.
#HTTPRoutes: "migration-podinfo": _backendRefs: Podinfo

In this file we're adding a field to the #HTTPRoutes struct the platform team defined for us.

You might be wondering how we knew which fields to put into this file. Often the platform team provides instructions like this guide, or we can look directly at how they defined the struct in the projects/httproutes.cue file.

There are a few new concepts to notice in the projects/httproutes.cue file.

The most important things the migration team takes away from this file are:

  1. The platform team requires a gateway.networking.k8s.io/httproute/v1 HTTPRoute.
  2. Line 13 uses a hidden field so we can provide backend references as a struct instead of a list.
  3. Line 30 uses a comprehension to convert the struct to a list.

We can look up the spec for the fields we need to provide in the Gateway API reference documentation for HTTPRoute.

important

Lists are more difficult to work with in CUE than structs because they're ordered. Prefer structs to lists so fields can be unified across many files owned by many teams.

You'll see this pattern again and again. The pattern is to use a hidden field to take input and pair it with a comprehension to produce output for an API that expects a list.
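The pattern looks roughly like this as a generic CUE sketch; the field names here are hypothetical, not the exact ones in projects/httproutes.cue:

```cue
// Teams register backends into a hidden struct keyed by name. Struct
// fields unify across files, so many teams can add entries safely.
_backends: [Name=string]: {port: int}
_backends: podinfo: port: 9898

// A comprehension converts the struct into the list the API expects.
backendRefs: [
	for Name, B in _backends {
		name: Name
		port: B.port
	}
]
```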

Let's render the platform to see the fully rendered HTTPRoute.

holos render platform ./platform

Git diff shows us what changed.

git diff

We see the HTTPRoute rendered by the httproutes component owned by the platform team will expose our service at https://migration-podinfo.holos.localhost and route requests to port 9898 of the podinfo Service in our migration Namespace.
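The rendered route looks roughly like this. The hostname, backend name, namespace, and port come from our registration; the surrounding fields are managed by the platform team and may differ in detail:

```yaml
# Sketch of the rendered HTTPRoute; platform-managed fields abbreviated.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: migration-podinfo
spec:
  hostnames:
    - migration-podinfo.holos.localhost
  rules:
    - backendRefs:
        - name: podinfo
          namespace: migration
          port: 9898
```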

tip

At this point the migration team might submit a pull request, which could trigger a code review from the platform team.

The platform team is likely very busy. Holos and CUE perform strong type checking on this HTTPRoute, so the platform team may automate this approval with a pull request check, or may not need a cross-functional review at all.

Cross-functional changes are safer and easier because the migration team cannot render the platform unless the HTTPRoute is valid as defined by the platform team.

This all looks good, so let's commit the changes and try it out.

git add .
git commit -m 'add httproute for migration-podinfo.holos.localhost'

And push the changes for GitOps.

git push

Apply the Manifests

Now that we've integrated the podinfo service from the acquired company into the bank's platform, let's apply the configuration and deploy the service. We'll apply the manifests in a specific order to get the whole cluster up quickly.

Let's get the Bank of Holos platform up and running so we can see if migrating the podinfo Helm chart from the acquired company works. Run the apply script in the bank-of-holos repository after resetting your cluster following the Local Cluster Guide.

./scripts/apply
tip

Browse to https://argocd.holos.localhost/applications/argocd/podinfo and we'll see our podinfo component is missing in the cluster.

Podinfo Application Missing

warning

If ArgoCD doesn't look like this for you, double check you've configured ArgoCD to use your RepoURL in projects/argocd-config.cue as we did in the Quickstart.

Sync the changes to deploy the service.

Sync the Podinfo Application

The Deployment and Service become healthy.

Podinfo is deployed

tip

Once deployed, browse to https://migration-podinfo.holos.localhost/ to see the podinfo service migrated onto the Bank of Holos platform.

Podinfo Greetings

The migration team has successfully taken the Helm chart from the acquired company and integrated the podinfo Service into the bank's software development platform!

Next steps

Now that we've seen how to bring a Helm chart into Holos, let's move on to day 2 platform management tasks. The Change a Service guide walks through a typical configuration change a team needs to roll out after a service has been deployed for some time.