Writing Jenkins Pipeline For OpenShift Deployment: Part One


A pipeline is a set of instructions that are executed in a given sequence to produce an output. A Jenkins Pipeline is simply these instructions written in Jenkins, and it can be written in Groovy.

In the previous post, we defined the deployment structure and made the necessary preparations for deploying a Python + Gunicorn + NGINX + Jenkins based project. In this post, we will discuss the most important part of our CI/CD project: writing the Jenkins pipeline. This pipeline will provide continuous delivery (CD) for our deployment.

The whole deployment is done in 11 steps. Steps 1-5 are executed in Jenkins, and steps 6-11 deploy the code to OpenShift. If these stages feel confusing, don't worry; we will explain them one by one. Keep in mind that each stage comes with a description of the possible outcome of its execution, but in this article our task is only to create the pipeline, not to execute it. Here we will cover the steps that run in Jenkins.
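As a rough map of the Jenkins-side half, the pipeline we are going to build can be sketched as below. This is an outline only; the stage names match the steps that follow, and the bodies are filled in as we go:

```groovy
// Sketch only: a declarative Jenkinsfile outline for steps 1-5.
pipeline {
    agent { node { label 'python' } }        // Step 1: agent
    environment { /* constants, Step 1 */ }
    stages {
        stage('Get Latest Code')      { steps { /* Step 2 */ } }
        stage('Install Dependencies') { steps { /* Step 3 */ } }
        stage('Run Tests')            { steps { /* Step 4 */ } }
        stage('Store Artifact')       { steps { /* Step 5 */ } }
        // Steps 6-11 (the OpenShift stages) are covered in part two.
    }
}
```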

Step 1: declaring agent and ENV variables

We will declare the agent and environment variables in this step. In Jenkins, we need to specify an agent on which the pipeline will execute. We will be using the Python Jenkins slave to execute our pipeline; we will discuss the Jenkins Python slave further in the next article. The Python agent declaration looks like this:

pipeline {
    agent {
        node {
            label 'python'
        }
    }
    // environment and stages follow
}

We can define our constant values as environment variables. For example:

environment {
    APPLICATION_NAME = 'python-nginx'
    STAGE_TAG = "promoteToQA"
    DEV_PROJECT = "dev"
    STAGE_PROJECT = "stage"
    TEMPLATE_NAME = "python-nginx"
    ARTIFACT_FOLDER = "target"
    PORT = 8081
}

Some of these variable declarations may seem confusing now, but we will use them in the next steps.
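To see how these constants are used later, here is a small sketch (the stage and its echo lines are illustrative, not part of our pipeline). Values from the environment block are available both in Groovy string interpolation and as shell environment variables:

```groovy
// Illustrative only: referencing environment-block constants in a stage.
stage('Show Config') {
    steps {
        echo "Deploying ${APPLICATION_NAME} to ${DEV_PROJECT} on port ${PORT}"
        // The same values are exported to shell steps as environment variables.
        sh 'echo "Template: $TEMPLATE_NAME, artifacts go to: $ARTIFACT_FOLDER"'
    }
}
```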

Step 2: get latest code

Jenkins Pipeline execution is done in stages. All the stages of the pipeline live inside one stages block, like this:

stages {
    // Do Something
    // Do Something
}

In the first stage, we will be using the Git plugin, which comes by default with OpenShift Jenkins. We will use the following code to pull the latest code:

stage('Get Latest Code') {
    steps {
        git branch: "${GIT_BRANCH}", url: "${GIT_REPO}" // declared in environment
    }
}

Step 3: install dependencies

In this stage, we will install the Python dependencies inside a virtual environment. For that, we will first install virtualenv using pip install virtualenv. After activating the virtualenv, we will install the dependencies defined in app/requirements.pip using pip install -r app/requirements.pip.

stage("Install Dependencies") {
    steps {
        sh """
        pip install virtualenv
        virtualenv --no-site-packages .
        source bin/activate
        pip install -r app/requirements.pip
        """
    }
}

Here we use the sh step to run the shell commands that install the dependencies.
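As a side note, the sh step can also capture a command's output for use in Groovy via its returnStdout parameter. A minimal sketch (the version-check stage is illustrative, not part of our pipeline):

```groovy
// Illustrative: capturing shell output inside a stage.
stage('Check Python') {
    steps {
        script {
            // returnStdout: true makes sh return the output instead of just printing it
            def pyVersion = sh(script: 'python --version 2>&1', returnStdout: true).trim()
            echo "Building with ${pyVersion}"
        }
    }
}
```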

Step 4: run tests

In this stage, we will run the tests so that, if any test fails, the pipeline does not execute any further. We will also store the test results using the JUnit plugin.

First, we will activate our virtualenv (again!), then run the tests in the app directory using nosetests, which exports the test results in XML format. Then we will store the results using the junit step.

stage('Run Tests') {
    steps {
        sh '''
        source bin/activate
        nosetests app --with-xunit
        '''
        junit "nosetests.xml"
    }
}
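One caveat: if nosetests fails, any step written after it in the same steps sequence may never run, so the report would not be recorded. A common alternative pattern (a sketch, not the pipeline we build here) is to publish the report from a post block, which runs whether the stage passed or failed:

```groovy
stage('Run Tests') {
    steps {
        sh '''
        source bin/activate
        nosetests app --with-xunit
        '''
    }
    post {
        always {
            // Record the JUnit report whether the tests passed or failed.
            junit "nosetests.xml"
        }
    }
}
```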

Step 5: storing artifacts

The word artifact may sound odd unless you are familiar with Java, because we are working with Python, not Java builds. Python does not require a build step, but we still store a compressed file consisting of the Python code along with our Dockerfile and NGINX configuration (app, config, Dockerfile), and we will use this compressed file to deploy our application to OpenShift. You might think storing that file is unnecessary, but it lets you find out later exactly what was pushed to OpenShift, and whether there is any discrepancy between what you wanted to deploy and what you actually deployed.

For this stage, we first construct a safe name for the compressed file. BUILD_NUMBER is an environment variable available in every pipeline; it holds the current build number, which is unique per build. We combine APPLICATION_NAME and BUILD_NUMBER to make a safe build name. We store the artifacts in a dedicated directory; for now, let's use a target folder inside the Jenkins workspace (the workspace is the path where the whole pipeline execution happens).

stage('Store Artifact') {
    steps {
        script {
            def safeBuildName  = "${APPLICATION_NAME}_${BUILD_NUMBER}",
                artifactFolder = "${ARTIFACT_FOLDER}",
                fullFileName   = "${safeBuildName}.tar.gz",
                applicationZip = "${artifactFolder}/${fullFileName}",
                applicationDir = ["app",
                                  "config",
                                  "Dockerfile"].join(" ")
            def needTargetPath = !fileExists("${artifactFolder}")
            if (needTargetPath) {
                sh "mkdir ${artifactFolder}"
            }
            sh "tar -czvf ${applicationZip} ${applicationDir}"
            archiveArtifacts artifacts: "${applicationZip}", excludes: null, onlyIfSuccessful: true
        }
    }
}

We use a script block to execute these instructions. A script block lets you run arbitrary Groovy code (variable definitions, conditionals) inside a declarative pipeline stage, which the declarative steps alone cannot express.

In conclusion

In this article, we covered the steps that run in Jenkins. In the next article, we will cover the remaining steps, which do the work in OpenShift.

Last updated: May 22, 2024

