On-boarding your custom application to Keptn— Part 2 of 2

Rob Jahn
13 min read · Jul 17, 2019

In my previous blogs “On-boarding your custom application to Keptn — Part 1 of 2” for Google GKE and Azure AKS, I described the steps to provision a Kubernetes cluster on these cloud platforms and install the continuous delivery and automated operations platform Keptn.

This blog picks up with an overview of the next set of steps I followed to on-board my sample application into Keptn and tips for how you can prepare and on-board your application.

Here is a summary of the Part 2 steps; they are the same whether you are using AKS or GKE.

  1. Create Keptn project files
  2. Create application service files
  3. Create JMeter test scripts
  4. Create Pitometer performance validation files
  5. Send new Keptn artifact events to run pipelines

Step 1: Create Keptn project

A Keptn project defines the environments where your application will run, such as “development”, “staging”, or “production”, as well as the Keptn deployment and test strategy for each. Simply create a Keptn project file, referred to as a shipyard file, and pass it as an argument to the Keptn CLI as follows:

keptn create project <your project name> shipyard.yaml

The Keptn CLI will generate the necessary files that automate the pipelines and deployments. That is one goal of Keptn: declare what you want and let Keptn do the rest!

My application

In my example, my project is named “orders-project”, so I run this command to create my Keptn project.

keptn create project orders-project shipyard.yaml

My “shipyard” file specifies three environments (DEV, STAGING, and PRODUCTION) and different deployment and test strategies for each.
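For reference, my shipyard file follows the shape of the Keptn 0.3 examples. The strategy values below mirror what I describe in this post, but treat this as a sketch and check the Keptn docs for the exact keywords your Keptn version expects:

```yaml
stages:
  - name: "dev"
    deployment_strategy: "direct"
    test_strategy: "functional"
  - name: "staging"
    deployment_strategy: "blue_green_service"
    test_strategy: "performance"
  - name: "production"
    deployment_strategy: "blue_green_service"
```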

A few more details:

1. Behind the scenes, the Keptn CLI “create project” command calls the Keptn GitHub service, which creates a new GitHub repo within the GitHub organization specified when the Keptn CLI “config” command was run as part of the Keptn installation. Below is a screenshot of what the new Keptn repo, “orders-project”, looks like in GitHub following project creation.

GitHub “orders-project” repository created by Keptn installer

2. Currently, Keptn supports two deployment strategies: “direct” and “blue-green”

> “direct” deploys an image directly to the running pods within an environment. Since there can be a short Kubernetes pod startup delay during updates, this strategy is not recommended for production environments.

> “blue-green” deploys an image to either a “blue” or “green” set of pods and updates an Istio VirtualService to direct all the traffic to either the “green” or the “blue” set.

3. Currently, Keptn supports two test strategies, “functional” and “performance”, both of which call the Keptn JMeter service. A test strategy is not required, but if one is specified, the Keptn JMeter service clones and runs JMeter scripts checked into the GIT repo of the service being deployed. More details of the JMeter setup are described below in the “Create JMeter test scripts” section.

Try it

First ensure you have the Keptn CLI installed and configured with your GIT organization and GIT personal access token. Refer to the Keptn docs if not.

Second, create a “shipyard.yaml” file with your desired stages and strategies, and then call the Keptn CLI with your project name as follows:

keptn create project <your project name> shipyard.yaml

Verify you have a new GitHub repo with “your project name” within your GitHub organization.

Step 2: Create application service files

With the Keptn project now in place, we need to on-board our services. On-boarding requires creating a “values” file with the values Keptn uses when services are deployed via Helm. Like Keptn project creation, we use the Keptn CLI, running “keptn onboard service” as follows:

keptn onboard service --project=<your project name> --values=<your service values file name>

Behind the scenes, Keptn will generate the Helm and Istio files needed for deployments. The goal of this automation is to shift developers’ and DevOps engineers’ time from building and maintaining the “plumbing” to delivering new features for the business.

My application

Here is the service “values” file for the catalog service, which I adjusted from the Keptn example carts service values file. I changed the service name and container name to “catalog-service” and set the pullPolicy to “Always” to support my iterative testing. Below is my complete file.

replicaCount: 1
image:
  repository: null
  tag: null
  pullPolicy: Always
service:
  name: catalog-service
  externalPort: 80
  internalPort: 8080
container:
  name: catalog-service
resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi

To on-board my service, I used this command, passing in the project name “orders-project”:

keptn onboard service --project=orders-project --values=values_catalog-service.yaml

A few more details:

1. Behind the scenes, Keptn will update the project repo with branches that map to the three project stages, along with various Helm and Istio files. Below is a summary of the files within each branch (DEV, STAGING, and PRODUCTION) that I defined in my shipyard file.

Folder structure in the GitHub “orders-project” repository after a service is onboarded

2. Keptn defaults to generating Helm deployment files that specify port 8080 for the service and a “/health” endpoint for the Kubernetes readiness probe. So, you need to ensure your application has both, or if it cannot, define a custom deployment YAML file that specifies the application port and readiness endpoint and pass it to the Keptn CLI using the “deployment_file” argument. For example:

keptn onboard service --project=orders-project --values=values_catalog-service.yaml --deployment_file=catalog_deployment.yaml

3. Keptn also configures an Istio gateway that gives each service a public address using the xip.io DNS service. So, in addition to Keptn services like the Keptn bridge, each service will have a DNS name in this format:

http://<service name>.<project name>-<environment>.<ingress gateway public IP>.xip.io

For my catalog service in the “dev” environment, the DNS name is:

http://catalog-service.orders-project-dev.xxx.xxx.xxx.xxx.xip.io
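This URL pattern can be composed in shell. The snippet below is a trivial sketch using my sample names and a placeholder ingress IP; in a real cluster you would look up the external IP of the istio-ingressgateway service instead:

```shell
# Compose the public xip.io URL Keptn exposes for a service (sketch)
service="catalog-service"
stage="orders-project-dev"    # <project name>-<environment>
ingress_ip="203.0.113.10"     # placeholder; use your real istio-ingressgateway IP
url="http://${service}.${stage}.${ingress_ip}.xip.io"
echo "$url"
```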

Try it

First, ensure your service listens on port 8080 and has a “/health” endpoint. Then, make a “values.yaml” file for each of your services and run the Keptn CLI with your project name as follows:

keptn onboard service --project=<your project name> --values=<your service values file name>

Review the files that Keptn generated in your project GitHub repo. Also, refer to the Keptn docs for more details on on-boarding a service.
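If your service cannot meet the port-8080 and “/health” defaults, a custom deployment file passed with the “deployment_file” argument might look roughly like the sketch below. Everything here (the port 9090, the “/status” endpoint, the names, and the templated image values) is a hypothetical placeholder, not Keptn’s actual generated output:

```yaml
# Hypothetical sketch of a custom deployment file for a service that
# listens on port 9090 and exposes /status for its readiness probe
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: catalog-service
  template:
    metadata:
      labels:
        app: catalog-service
    spec:
      containers:
        - name: catalog-service
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 9090
          readinessProbe:
            httpGet:
              path: /status
              port: 9090
```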

Step 3: Create JMeter test scripts

Keptn is designed to support pluggable tooling, and there are plans for services such as Selenium and other third-party load testing tools. However, Keptn currently defaults to the Keptn JMeter service for deployment “health” checks and for both “functional” and “performance” test strategies.

This JMeter service will become more feature rich with community support in the future, but for now it has rigid requirements for script filenames and non-configurable behavior and parameters, as summarized in the table below.

A few more details:

1. The Keptn JMeter service sets a number of JMeter properties when it calls a JMeter test script. Some values are hard-coded and some are based on the test type being run. Refer to the “JMeter Properties values” column in the table above for values that vary by test type.

VUCount = varies by the test type
LoopCount = varies by the test type
SERVER_URL = varies by the test type
SERVER_PORT = varies by the test type
ThinkTime = 250
CHECK_PATH = /health
DT_LTN = a unique Load Test Name value that can be used for Dynatrace request attributes

2. We will review this in the next section, but both the Keptn JMeter and Keptn Pitometer services expect files in a GIT repo that has the same name as the service. This repo must be in the same GIT organization that was specified during Keptn installation, and each service must have its own repo. For example, my sample application is comprised of four services, a front-end and three back-end services, so I need four repos as shown below. NOTE: the first repo, “orders-project”, was created when we created the Keptn project with the Keptn CLI.

GitHub organization with the “orders-project” repository created by Keptn installer and the four sample application services source code repositories

3. Keptn JMeter service test script files must be in a sub-folder named “jmeter/” in the GIT repo with the application. For example, my sample catalog-service needs to have these files: “jmeter/basiccheck.jmx” and “jmeter/order-catalog_load.jmx”, since I defined both a “functional” and a “performance” test strategy.

My application

As mentioned above, the Keptn JMeter service sets a number of JMeter properties based on the test type being run. These properties can optionally be used within the scripts; I say optionally because, although they are always passed in, you do not have to reference them within the various JMeter components. In my JMeter scripts, I used the “VUCount” and “LoopCount” properties in the thread group as shown below.

JMeter script thread group

And I used the “SERVER_URL” and “SERVER_PORT” properties for each HTTP request sampler, including in my “basiccheck.jmx” script. I also used the “CHECK_PATH” property for the basic check HTTP request sampler as shown below.

JMeter HTTP sampler
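As an illustration, samplers and thread groups like the ones above read the Keptn-supplied values through JMeter’s __P property function. The field values below are a sketch of how mine are wired up; the defaults after each comma are my own fallbacks for local runs, not values Keptn sets:

```
Number of Threads (users): ${__P(VUCount,1)}
Loop Count:                ${__P(LoopCount,1)}
Server Name or IP:         ${__P(SERVER_URL,localhost)}
Port Number:               ${__P(SERVER_PORT,8080)}
Path (basic check):        ${__P(CHECK_PATH,/health)}
```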

I developed my JMeter scripts locally by starting up my service on my laptop and using http://localhost:8080 as the “SERVER_URL”.

Keptn creates a public Istio VirtualService name made up of the service name (“front-end” in my case) and the Keptn project plus environment name (“orders-project-dev” in my case for the DEV environment).

So after my application was deployed, I also tested the JMeter script using the external DNS name so that I could also validate Dynatrace monitoring. The bash script below is how I get the URLs to the application. It has some hardcoding, since I know my “front-end” service runs in “orders-project”.

#!/bin/bash
export INGRESS_IP=$(kubectl get svc istio-ingressgateway -n istio-system -o=json | jq -r .status.loadBalancer.ingress[].ip)
echo "Dev running @ http://front-end.orders-project-dev.$INGRESS_IP.xip.io/"
echo "Staging running @ http://front-end.orders-project-staging.$INGRESS_IP.xip.io/"
echo "Production running @ http://front-end.orders-project-production.$INGRESS_IP.xip.io/"

Try it

First, create a repo for each service in the GIT organization you installed Keptn with. Then make a sub-folder called “jmeter/” with the “basiccheck.jmx” script and the optional “<service>_load.jmx” script. You can also use my JMeter scripts as a reference.

Step 4 of 5: Create Pitometer performance validation files

Pitometer is a Keptn service that provides automated validation of a software deployment based on a list of indicators, which produce an overall deployment score that the deployment must achieve in order to be considered passed. I will just describe the setup; for an overview of Keptn Pitometer, here is a good blog: “Automated Deployment and Architectural Validation with Pitometer and keptn!”.

Currently, Pitometer is only called during a Keptn “performance” test strategy, once the application is deployed and a test has run. Pitometer currently supports Dynatrace and Prometheus back-end data sources, but the service is designed to support others that the Keptn community builds.

As mentioned in the JMeter section above, the Keptn Pitometer service expects a file in a GIT repo named after the service, within the GIT organization that was specified during Keptn installation. This file must be named “perfspec.json” and be in a “perfspec/” sub-folder. If there is no “perfspec/perfspec.json” file, the pipeline will simply skip the Pitometer quality check step.

Below is a screenshot from VS Code of my sample service, showing the structure of both the JMeter and perfspec files.

VS code folder view

My application

For my environment, I installed the optional Keptn Dynatrace service to enable pushing Keptn events to Dynatrace and to enable Pitometer to gather metrics from Dynatrace for quality gates. My sample application “perfspec.json” file has a single response time metric, called an “indicator”, that will be evaluated. If the response time exceeds 2000 milliseconds, the response from Pitometer will be “fail”; if it is under 1000 milliseconds, the response will be “pass”; and it will be “warn” if in between 1000 and 2000 milliseconds.

{
  "spec_version": "1.0",
  "indicators": [
    {
      "id": "ResponseTime_AVG",
      "source": "Dynatrace",
      "query": {
        "timeseriesId": "com.dynatrace.builtin:service.responsetime",
        "aggregation": "AVG",
        "startTimestamp": "",
        "endTimestamp": ""
      },
      "grading": {
        "type": "Threshold",
        "thresholds": {
          "upperSevere": 2000,
          "upperWarning": 1000
        },
        "metricScore": 100
      }
    }
  ],
  "objectives": {
    "pass": 100,
    "warning": 50
  }
}

A few more details:

The Keptn Dynatrace installation script will automatically create auto-tagging rules within Dynatrace for the “service” name and “environment” of the service being deployed. Once deployed, the service with these tags looks like the screenshot below.

Catalog Service within Dynatrace

When the Keptn Pitometer service calls the Dynatrace API to retrieve metrics, it adds these tags as filters so that it only retrieves data for the service and environment being evaluated.

Try it

Within the GIT organization you are using for Keptn, create or update the GIT repository for your service with a sub-folder called “perfspec/” containing a file named “perfspec.json”, and define the metrics you want to collect.

  1. If you want to use Dynatrace as a back-end data source, then follow the Keptn Dynatrace monitoring instructions.
  2. If you want to use Prometheus as a back-end data source, then follow the Keptn Prometheus monitoring instructions.
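For orientation only, a Prometheus-backed indicator follows the same general shape as the Dynatrace indicator shown earlier, with the query expressed as a Prometheus query string. The snippet below is a sketch, not a tested Pitometer configuration; the indicator id, the query, and the threshold values are all placeholders of my own:

```json
{
  "id": "ResponseTime_P90",
  "source": "Prometheus",
  "query": "histogram_quantile(0.90, sum(rate(http_request_duration_seconds_bucket[3m])) by (le))",
  "grading": {
    "type": "Threshold",
    "thresholds": {
      "upperSevere": 2.0,
      "upperWarning": 1.0
    },
    "metricScore": 100
  }
}
```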

Step 5 of 5: Send new Keptn artifact events to run pipelines

Now that we have everything in place, it is time to deploy and test our code. Recall that Keptn is a messaging control plane with subscribed services that listen for Keptn events. Each Keptn service listens, acts, and pushes new events to orchestrate the steps in a pipeline.

Starting a pipeline begins with sending a Keptn “new artifact” event. An artifact is a Docker image, which can be located in Docker Hub, Quay, or any other registry storing Docker images. For my sample app, I used Docker Hub.

The easiest way to send the event is to use the Keptn CLI send event command with the required arguments of project, service, image name, and tag.

keptn send event new-artifact --project=<your project name> --service=<your service name> --image=<your image name> --tag=<your image tag name>

Behind the scenes, each pipeline execution step is saved in a log that has a unique “Keptn context” GUID for that run. You can monitor a pipeline run using the Keptn Bridge Web UI that groups these logs by the “Keptn context” GUID as shown below.

Keptn Bridge WebUI

My application

Recall that my example “shipyard.yaml” file defined three environments of DEV, STAGING, and PRODUCTION. So, after running this Keptn CLI “send event new-artifact” command to deploy my catalog service…

keptn send event new-artifact --project=orders-project --service=catalog-service --image=robjahn/keptn-orders-catalog-service --tag=1

…here are the steps that Keptn performs to deploy the service to all three environments:

  1. New artifact event received by Keptn control plane
  2. Keptn Helm service deploys the service to the “Development” environment using the “direct” deployment strategy
  3. Keptn JMeter service runs a basic health check in the “Development” environment
  4. Keptn JMeter service runs a functional test in the “Development” environment as directed by the “functional” test strategy
  5. Keptn Gatekeeper service validates the deployed service is OK to promote. If yes, continue. If no, revert to the previous image and stop the pipeline.
  6. Keptn Dynatrace service pushes “deployment” meta-data as an event into Dynatrace
  7. Keptn Helm service deploys the service to the “Staging” environment using the “blue-green” deployment strategy
  8. Keptn JMeter service runs a basic health check in the “Staging” environment
  9. Keptn JMeter service runs a performance test in the “Staging” environment as directed by the “performance” test strategy
  10. Keptn Pitometer service uses the “perfspec.json” file to validate the service
  11. Keptn Gatekeeper service validates the deployed service is OK to promote. If yes, it updates Istio to redirect traffic to either the “blue” or “green” pods. If no, stop the pipeline.
  12. Keptn Dynatrace service pushes “deployment” and “performance test” meta-data as events into Dynatrace
  13. Keptn Helm service deploys the service to the “Production” environment
  14. Keptn JMeter service runs a basic health check in the “Production” environment
  15. Keptn Gatekeeper service validates the deployed service is OK. If yes, it updates Istio to redirect traffic to either the “blue” or “green” pods. If no, stop the pipeline.
  16. Pipeline complete!

Try it

First, get the Keptn bridge running by using the “kubectl port-forward” command as described in the Keptn Bridge docs.

kubectl port-forward svc/$(kubectl get ksvc bridge -n keptn -ojsonpath={.status.latestReadyRevisionName})-service -n keptn 9000:80

Running this command requires that you first install kubectl on your laptop and configure it for your cluster. After running the command above, you can access the Keptn bridge web UI at http://localhost:9000

Alternatively, you can try what I did, which was to add HAProxy to map localhost port 9000 to port 80 for public access on the virtual machine we used to install Keptn. This also requires opening up port 80 for inbound access on the virtual machine and configuring basic authorization on HAProxy so that it is not wide open. Once set up, you can access the Keptn bridge web UI at

http://<ip address>/#/

You can review my bridge setup script for details of this approach.

Second, push your application container image to a registry like Docker Hub, run the Keptn CLI to send your event, and monitor using the Keptn Bridge.

keptn send event new-artifact --project=<your project name> --service=<your service name> --image=<your image> --tag=<your tag>

Lastly, verify that your application is now deployed, and use these commands as needed to review the Keptn service logs for errors:

# get the name of keptn service pods
kubectl -n keptn get pods

# review pod logs
kubectl -n keptn logs <pod-name> user-container

# SSH into a running pod
kubectl -n keptn exec -it jmeter-service-<id>-deployment-<id> --container user-container -- bash

Take a look in Dynatrace too. You should see the Deployment and other events that the Keptn Dynatrace service pushed.

Keptn Deployment event details within Dynatrace

Give my setup and sample application a try

Everything I outlined in this blog can be found in this open source GitHub repository. Just follow the README file and run the bash scripts I developed to demonstrate the whole setup and on-boarding process.

This blog is based on Keptn 0.3.0, which supports Azure AKS, Google GKE, and OpenShift 3.11 as reflected in the picture below, so be sure to check out the Keptn release notes for additional Kubernetes support and new Keptn features and services.

Keptn control plane and Keptn services

I plan to keep this repository updated as Keptn continues to deliver new releases, and I welcome contributors to this effort!

Join the Keptn Community

Last but not least: Our goal is to contribute Keptn to CNCF (Cloud Native Computing Foundation) and for that we need to build a strong community. Help us with feedback, join the Keptn Slack channel, build your own services and tell us about them so we can put them into our keptn-contrib GitHub org. Also, make sure to check out our community meetings as the team is regularly giving a LIVE update from the keptn headquarters!


Rob Jahn

Tech Partner Solutions Advocate at Dynatrace software