On-boarding your custom application to Keptn on GKE — Part 1 of 2

Rob Jahn
11 min read · Jul 17, 2019

I recently took on the challenge of sharing my experience on-boarding a sample application into the Keptn continuous delivery and automated operations platform on Google Kubernetes Engine (GKE). This challenge is a call to action from my colleague, Andi Grabner, in his recent blog: "What is keptn, how it works and how to get started!"

In this two-part blog series, I first share how I set up my Google GKE Kubernetes cluster and installed Keptn. In the second blog, I share the steps I followed to on-board my sample application into Keptn and provide a guide for how you can prepare and on-board your own application.

This is the outline of what I cover in each blog.

Part 1: Setting up a Cluster and installing keptn

  1. Pre-requisites setup
  2. Create Kubernetes cluster
  3. Install keptn
  4. Install Dynatrace Keptn service and OneAgent Operator

Part 2: On-boarding an application to keptn

  1. Create Keptn project files
  2. Create application service files
  3. Create JMeter test scripts
  4. Create Pitometer performance validation files
  5. Send new Keptn artifact events to run pipelines

First a Keptn Refresher

Recall that Keptn (pronounced "Captain") is a platform that orchestrates Continuous Delivery (CD) stages and tasks using Keptn services, Keptn events, and a control plane, as depicted below.

Keptn control plane and Keptn services

Keptn services provide actions like deployment, testing, and chat client notifications, and act upon Keptn events such as configuration changed, problem detected, and new artifact. Keptn services are intended to be loosely coupled and mixed and matched depending on your requirements, and the set will continue to grow as the Keptn community develops more. Check out the "Continuous Delivery without pipelines — How it works and why you need it" blog for a more in-depth overview of the Keptn internals.

Although this blog covers how I set up Keptn on Google GKE, note that Keptn 0.3.0, which was just released, supports these Kubernetes implementations:

  • Google Kubernetes Engine (GKE)
  • OpenShift 3.11
  • Azure Kubernetes Service (AKS)

Step 1: Pre-requisites setup

The Keptn documentation is the best place for up-to-date instructions, but below are the key areas and how I set them up.

# 1 Application Docker containers

Keptn isn't responsible for the application build or Docker image creation, so you must create Docker images for your application and push them to a registry from which Keptn can pull them.

I cover this in more detail in my part 2 blog, but Keptn defaults to generating Helm deployment files that specify port 8080 for the service and a "/health" endpoint for the Kubernetes readiness probe. One can use custom deployment files, but I found it easier to just update my application to meet these requirements.
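
If you want to confirm an image satisfies these defaults before on-boarding, a quick local check like the sketch below works. The image name is my front-end example, and the port mapping assumes your service listens on 8080 and exposes "/health"; substitute your own image.

# run the image locally (example image; substitute your own)
docker run -d --name keptn-precheck -p 8080:8080 robjahn/front-end:1
# check the readiness probe path Keptn's generated Helm files expect
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/health
# clean up the test container
docker rm -f keptn-precheck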

I used an order processing application that is composed of four services: a front-end static web UI and three backend Java Spring Boot services, each with an embedded database.

Sample application home page and the services topology

I built and pushed each service image into my personal Docker Hub account, which you can use too. I used Docker CLI commands to push each service, as shown below for my front-end service.

# build version 1
docker build -t robjahn/front-end:1 .
# push to docker
docker login
docker push robjahn/front-end:1

All the source code for these services can be found here.

# 2 Google Cloud Account and CLI

You will need a Google cloud account with admin permissions to provision resources, and Google's Cloud SDK to automate resource provisioning using the "gcloud" command line tool.

I signed up for a free Google cloud trial and then downloaded and configured the Google Cloud SDK on my MacOS laptop following their instructions.
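
Once the SDK is installed, a quick check that the CLI is available and that you are logged in with the right account looks like this:

# confirm the gcloud CLI is installed and see which account is active
gcloud version
gcloud auth list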

# 3 GitHub

More options are being planned for where Keptn project files can reside, but currently Keptn stores them in GitHub. So you will need to create a new GitHub organization and a GitHub personal access token for Keptn to use during project on-boarding and for managing deployment files.

For this requirement, simply do these two steps:

  1. Create a new GitHub organization using your GitHub account and selecting the “open source” free option. Here is a guide to this step.
  2. Create a new Personal Access token with full “repo” scope as shown in the screenshot below. Here is a guide to this step.
GitHub Personal Access Token
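
Before moving on, you can sanity-check that the token and organization are set up correctly with a quick call to the GitHub API. This assumes you export the token and organization name as the shell variables shown below; a successful response returns the organization's details as JSON.

# verify the personal access token can see the new organization
export GITHUB_TOKEN=<your personal access token>
export GITHUB_ORG=<your new organization name>
curl -s -H "Authorization: token $GITHUB_TOKEN" "https://api.github.com/orgs/$GITHUB_ORG"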

# 4 Dynatrace

Keptn ships with two services that integrate with the Dynatrace application intelligence platform.

  1. Keptn Dynatrace service: sends information to Dynatrace about the current state of a pipeline run, such as deployments and test executions.
  2. Keptn Pitometer service: qualifies performance and architectural requirements using a declarative specification file, or "perfspec" for short. A "perfspec" file defines which metrics you want to pay attention to, the sources to collect them from, and how to grade/interpret the results. Pitometer parses this file, calls the backend sources defined in the file, and evaluates the results. Currently, the Keptn Pitometer service supports Prometheus and Dynatrace, but the backend provider list will grow over time with community support. Read more about "perfspec" files in the blog "Automated Deployment and Architectural Validation with Pitometer and keptn!".

You can use your existing Dynatrace tenant or just sign up for a free Dynatrace SaaS Trial. Once you have your Dynatrace account, you will need to set up two Dynatrace tokens: an API token for the Keptn Dynatrace and Keptn Pitometer services, and a PaaS token for Kubernetes cluster monitoring.

Dynatrace API Token for the Keptn services

Log into your Dynatrace tenant and go to “Settings > Integration > Dynatrace API” and create a new API token with the following permissions:

  • Access problem and event feed, metrics and topology
  • Access logs
  • Configure maintenance windows
  • Read configuration
  • Write configuration
  • Capture request data
  • Real user monitoring JavaScript tag management

Dynatrace PaaS Token for Kubernetes monitoring

Log into your Dynatrace tenant and go to "Settings > Integration > Platform as a Service" and create a new PaaS token.
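
If you want to confirm the API token works before wiring it into Keptn, one simple check is a request against the Dynatrace v1 problem feed API, which the "Access problem and event feed, metrics and topology" permission covers. This sketch assumes your tenant host and token are exported as the variables shown:

# sanity-check the API token against the v1 problem feed endpoint
export DT_TENANT=<your tenant host, e.g. abc12345.live.dynatrace.com>
export DT_API_TOKEN=<your API token>
curl -s -H "Authorization: Api-Token $DT_API_TOKEN" "https://$DT_TENANT/api/v1/problem/feed?relativeTime=hour"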

# 5 Supporting Tools

The Keptn documentation lists a few prerequisite tools, but I ended up needing the set of tools below. Just follow the installation instructions for each tool using the link I provided for each.

  • git — GIT source code management CLI
  • hub — git utility to support command line forking
  • jq — JSON query utility to support parsing
  • yq — YAML query utility to support parsing
  • keptn — CLI used to manage keptn projects
  • kubectl — CLI to manage the cluster. See the Google Documentation for their preferred way.
  • gcloud — Google cloud command line tool
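
Once everything is installed, a small loop like the following (just a sketch) reports any of these CLIs that is not yet on your PATH:

# report any prerequisite tool that is not yet installed
for tool in git hub jq yq keptn kubectl gcloud; do
  command -v "$tool" > /dev/null || echo "missing: $tool"
done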

Automate this step !

If you're constantly trying new tools like I am, take my approach and create a Google Compute Engine instance (i.e. virtual machine) that you can create and tear down as needed. From this virtual machine, run scripts that automate the supporting tool installation. You can do this with tools like Terraform or just use these "gcloud" commands.

# initialize gcloud CLI 
# Prompts for login and default project, region and zone
gcloud init
# view gcloud setup
gcloud config list
# adjust these variables
export GKE_PROJECT=<your google project name>
export GKE_CLUSTER_ZONE=<example: us-east1-c>

# provision the virtual machine
gcloud compute instances create "keptn-orders-bastion" \
--project $GKE_PROJECT \
--zone $GKE_CLUSTER_ZONE \
--image-project="ubuntu-os-cloud" \
--image-family="ubuntu-1604-lts" \
--machine-type="g1-small"
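
Once the instance is up, you can copy your tools script onto it and run it remotely. The script name below is a placeholder for whatever you call your own setup script:

# copy the tools installation script to the VM and run it (script name is hypothetical)
gcloud compute scp ./install-tools.sh keptn-orders-bastion:~/ --zone $GKE_CLUSTER_ZONE
gcloud compute ssh keptn-orders-bastion --zone $GKE_CLUSTER_ZONE --command "bash ~/install-tools.sh"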

I wrote a single bash script that installs all the supporting tools, first checking whether the tool is already installed and then using the installation commands for Ubuntu. Here is how I install "jq"; you can view my whole tools setup script here.

# Installation of jq
# https://github.com/stedolan/jq/releases
if ! [ -x "$(command -v jq)" ]; then
  echo "Installing 'jq' utility"
  sudo apt-get update
  sudo apt-get --assume-yes install jq
fi

Step 2: Create Kubernetes cluster

The core Keptn team is working to optimize the resource requirements, but I used the recommended minimum of one "n1-standard-16" node (16 vCPUs and 60 GB of memory) for my cluster.

You can just use the Google web portal to provision the cluster by navigating to “Kubernetes Services” and clicking the “Add” button.

Create GKE Cluster page in Google Portal

On the Create cluster page, just adjust these values, referring to the arrows above:

  1. Kubernetes Cluster name = name like keptn-orders-cluster
  2. Master version = 1.12.7-gke-xx (latest)
  3. Number of Nodes = 1
  4. Node size = n1-standard-16 (16 vCPU + 60 GB)
  5. Click the “More options” button
  6. Change Image type to “Ubuntu”

Finally click the “create” button.

After the cluster is created, just click the "connect" button to get the command to update your kubectl connection. Here is that command from my example cluster setup above.

gcloud container clusters get-credentials keptn-orders-cluster --zone us-central1-a --project gke-keptn-orders

Automate this step !

These are the “gcloud” commands to automate the provisioning.

1. Enable the cluster API

gcloud services enable container.googleapis.com

2. Provision the cluster with a specified name, size, and Kubernetes GKE version

# set environment variables
PROJECT=<your google project name>
CLUSTER_NAME=<cluster name. example: yourname-keptn-orders-cluster>
ZONE=<example: us-central1-a>
REGION=<example: us-central1>
GKE_VERSION="1.12.7-gke.10"
# provision the cluster
gcloud beta container \
--project $PROJECT clusters create $CLUSTER_NAME \
--zone $ZONE \
--no-enable-basic-auth \
--cluster-version $GKE_VERSION \
--machine-type "n1-standard-16" \
--image-type "UBUNTU" \
--disk-type "pd-standard" \
--disk-size "100" \
--metadata disable-legacy-endpoints=true \
--scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
--num-nodes "1" \
--enable-cloud-logging \
--enable-cloud-monitoring \
--no-enable-ip-alias \
--network "projects/$PROJECT/global/networks/default" \
--subnetwork "projects/$PROJECT/regions/$REGION/subnetworks/default" \
--addons HorizontalPodAutoscaling,HttpLoadBalancing \
--no-enable-autoupgrade

3. Update kubectl with the credentials to connect to the new cluster

gcloud container clusters get-credentials $CLUSTER_NAME \
--zone $ZONE \
--project $PROJECT
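
A quick way to confirm kubectl is now pointed at the new cluster and that the single node is ready:

# confirm the kubectl context and node status
kubectl config current-context
kubectl get nodes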

You can review the bash script I developed to automate these actions here.

Step 3: Install Keptn

This is the easiest step in the process. Running the command "keptn install" provisions the Keptn installer service as a pod in your cluster; its image contains all the supporting tools and commands it needs to add the Keptn queues and the other Keptn services. The installer monitors installation progress, and when it is done the pod is terminated.

Run Keptn installer

For Google GKE, we pass in “gke” for the platform argument as follows:

keptn install --platform=gke

The keptn CLI will prompt for the input values it requires such as the GitHub organization and personal access token we created earlier.

Automate this step !

Alternatively, one can make a JSON file with all these values and pass that file name in as a keptn CLI argument. NOTE that this format may change with new versions of Keptn, but for this approach, first make a JSON file with these attributes using a file name of "creds.json":

{
  "githubUserName": "",
  "githubPersonalAccessToken": "",
  "githubUserEmail": "",
  "githubOrg": "",
  "clusterName": "",
  "clusterZone": "",
  "gkeProject": ""
}

Then run the keptn install command as follows

keptn install -c=creds.json --platform=gke

Review Keptn setup

Once Keptn is installed, several "kubectl" commands can be used to monitor the status of the Keptn services and Istio resources.

kubectl -n keptn get pods
kubectl -n keptn get configmaps
kubectl get routes -n keptn
kubectl get channels -n keptn
kubectl get subscription -n keptn
kubectl get svc istio-ingressgateway -n istio-system
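
The last command above shows the Istio ingress gateway service that fronts Keptn. If you just want its public IP, a jsonpath query like this pulls it out:

# extract just the external IP of the Istio ingress gateway
kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'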

You can check out my helper bash script that quickly outputs this same Keptn status. You can refer to the Keptn setup documentation where these steps came from.

Step 4: Install Dynatrace Keptn service and OneAgent Operator

Kubernetes monitoring is done using the Dynatrace OneAgent Operator, which is based on the Operator SDK and uses the Operator framework for interacting with Kubernetes environments. The nodes are monitored by a DaemonSet, and the Dynatrace OneAgent Operator controls its lifecycle, keeps track of new versions, and triggers updates if required.

Run install script

In order to install the Dynatrace OneAgent Operator and the Keptn Dynatrace service, one must first clone the Keptn Dynatrace service repository and run the appropriate platform bash script. Here are the commands to run for Google GKE:

# clone keptn Dynatrace repo
git clone --branch 0.1.2 https://github.com/keptn/dynatrace-service --single-branch
cd dynatrace-service/deploy/scripts
# run script that prompts for your Dynatrace PaaS token and tenant
# and saves it to a dt_creds.json local file
./defineDynatraceCredentials.sh
# run the deployment script
./deployDynatraceOnGKE.sh

There are subtle differences for each platform, but the deployment script will do the following:

  • Create Dynatrace secret used by the Operator
  • Install the Dynatrace OneAgent Operator
  • Apply auto tagging rules within Dynatrace
  • Setup problem notification within Dynatrace
  • Create secrets to be used by Keptn Dynatrace Service
  • Create the Keptn Dynatrace Service

Review the setup

  1. Kubernetes resources

Here are a few kubectl commands to get the Dynatrace resource status.

# Dynatrace OneAgent Operator
kubectl -n dynatrace get pods
kubectl get ksvc dynatrace-service -n keptn
kubectl get secret dynatrace -n keptn -o yaml
# Keptn Dynatrace service deployment
kubectl -n keptn get deployments

2. Dynatrace Tags

In your Dynatrace tenant, navigate to "Settings > Tags > Automatically applied tags" to view the new tagging rules for environments and services. In my part 2 blog, we will see how the services deployed by Keptn match these rules so that the tags are applied automatically.

Dynatrace Auto tagging rules summary page

Here is a screenshot of the "environment" and "service" tags for a deployed catalog service that uses these rules.

Catalog Service within Dynatrace

3. Dynatrace Problem Notification

In your tenant, navigate to "Settings > Integration > Problem notifications" to view the problem rule that calls the public Keptn event broker webhook. The IP will be filled in automatically for your environment. This rule defaults to the "default" Dynatrace alerting profile, so you can adjust it to limit the scope, for example to the "Production" namespace only.

Dynatrace Problem Notification to Keptn

Give it a try

Everything I outlined in this blog can be found in this open source GitHub repository. Just follow the README file and run the bash scripts I developed to demonstrate the whole setup and on-boarding process. I plan to keep this repository updated as Keptn continues to deliver new releases, and I welcome contributors to this effort!

You can now move on to my Part 2 blog to continue with preparing and on-boarding the service to Keptn.

Join the Keptn Community

The Keptn team's goal is to contribute keptn to the CNCF (Cloud Native Computing Foundation), and for that we need to build a strong community. Help us with feedback, join the Keptn Slack channel, build your own services, and tell us about them so we can put them into our keptn-contrib GitHub org.
