topics/cicd/README.md
| Name | Topic | Objective & Instructions | Solution | Comments |
|---|---|---|---|---|
| Set up a CI pipeline | CI | Exercise | | |
| Deploy to Kubernetes | Deployment | Exercise | Solution | |
| Jenkins - Remove Jobs | Jenkins Scripts | Exercise | Solution | |
| Jenkins - Remove Builds | Jenkins Scripts | Exercise | Solution | |
<details> <summary>What is Continuous Integration?</summary> <b>A development practice where developers integrate code into a shared repository frequently. It can range from a couple of changes a day or a week to a couple of changes an hour at larger scales.
Each piece of code (change/patch) is verified to make sure that the change is safe to merge. Today, it's a common practice to test the change using an automated build that makes sure the code can be integrated. It can be one build which runs several tests at different levels (unit, functional, etc.) or several separate builds, all or some of which have to pass in order for the change to be merged into the repository. </b></details>
<details> <summary>What is Continuous Deployment?</summary> <b>A development strategy used by developers to release software automatically into production, where any code commit must pass through an automated testing phase. Only when this is successful is the release considered production worthy. This eliminates any human interaction and should be implemented only after production-ready pipelines have been set up with real-time monitoring and reporting of deployed assets. If any issues are detected in production, it should be easy to roll back to the previous working state.
For more info please read here </b></details>
<details> <summary>Can you describe an example of a CI (and/or CD) process starting the moment a developer submitted a change/PR to a repository?</summary> <b>There are many answers to such a question, as CI processes vary depending on the technologies used and the type of project to which the change was submitted. Such processes can include one or more of the following stages:
An example of one possible answer:
A developer submitted a pull request to a project. The PR (pull request) triggered two jobs (or one combined job): one job for running a lint test on the change, and a second job for building a package which includes the submitted change and running multiple API/scenario tests using that package. Once all tests passed and the change was approved by a maintainer/core contributor, it's merged/pushed to the repository. If some of the tests failed, the change will not be allowed to be merged/pushed to the repository.
A completely different answer or CI process can describe how a developer pushes code to a repository, a workflow is then triggered to build a container image and push it to the registry. Once the image is in the registry, the new changes are applied to the Kubernetes cluster. </b></details>
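The first flow above (a PR triggering a lint job plus a build-and-test job) can be sketched as a GitHub Actions workflow. Everything here — the file name, job names, and `make` targets — is an illustrative assumption:

```yaml
# Hypothetical .github/workflows/ci.yml -- names and commands are examples only
name: CI
on:
  pull_request:

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make lint   # assumed lint target

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build  # build a package that includes the submitted change
      - run: make test   # run API/scenario tests using that package
```

Branch protection on the repository would then require both jobs to pass before a maintainer can merge the PR.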
<details> <summary>What is Continuous Delivery?</summary> <b>A development strategy used to frequently deliver code to QA and Ops for testing. This entails having a staging area that has production-like features, where changes can only be accepted for production after a manual review. Because of this human involvement there is usually a time lag between release and review, making it slower and more error-prone as compared to continuous deployment.
For more info please read here </b></details>
<details> <summary>What is the difference between Continuous Delivery and Continuous Deployment?</summary> <b>Both encapsulate the same process of deploying the changes which were compiled and/or tested in the CI pipelines.
The difference between the two is that Continuous Delivery isn't a fully automated process, as opposed to Continuous Deployment, where every change that is tested in the process is eventually deployed to production. In Continuous Delivery, someone either approves the deployment process, or the deployment process is based on constraints and conditions (like a time constraint of deploying every week/month/...). </b></details>
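A minimal Jenkins pipeline sketch of that distinction: the manual `input` gate below is what makes this Continuous Delivery; removing that stage would turn it into Continuous Deployment. Stage names, commands, and the deploy script are assumptions:

```groovy
// Hypothetical Jenkinsfile illustrating Delivery vs. Deployment
pipeline {
    agent any
    stages {
        stage('Build & Test') {
            steps {
                sh 'make build test'   // assumed build/test commands
            }
        }
        stage('Approve') {
            steps {
                // Human gate: delete this stage and every green build would go
                // straight to production (Continuous Deployment)
                input message: 'Deploy this build to production?'
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh production'   // hypothetical deploy script
            }
        }
    }
}
```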
<details> <summary>How would you decide which type of worker (virtual machine, bare-metal, or container) to use for running a pipeline?</summary> <b>The decision on which type of worker (virtual machine, bare-metal, or container) to use for running a pipeline would depend on several factors, including the nature of the pipeline, the requirements of the software being built, the available resources, and the specific goals and constraints of the development and deployment process. Here are some considerations that can help in making the decision:
Based on these considerations, the appropriate choice of worker (virtual machine, bare-metal, or container) for running the pipeline would be determined by weighing the pros and cons of each option and aligning with the specific requirements, resources, and goals of the development and deployment process. It may also be useful to consult with relevant stakeholders, such as developers, operations, and infrastructure teams, to gather input and make an informed decision. </b></details>
<details> <summary>Where do you store CI/CD pipelines? Why?</summary> <b>There are multiple approaches as to where to store the CI/CD pipeline definitions: together with the application code in the same repository, in a separate repository dedicated to pipelines, or in the CI/CD system itself. Storing the pipeline next to the code it builds is a common choice, since the pipeline is then versioned, reviewed, and branched together with the application. </b></details>
<details> <summary>How do you perform capacity planning for your CI/CD resources?</summary> <b>Capacity planning for CI/CD resources involves estimating the resources required to support the CI/CD pipeline and ensuring that the infrastructure has enough capacity to meet the demands of the pipeline. Here are some steps to perform capacity planning for CI/CD resources:
By following these steps, you can effectively plan the capacity for your CI/CD resources, ensuring that your pipeline has sufficient resources to operate efficiently and meet the demands of your development process. </b></details>
<details> <summary>How would you structure/implement CD for an application which depends on several other applications?</summary> <b>Implementing Continuous Deployment (CD) for an application that depends on several other applications requires careful planning and coordination to ensure smooth and efficient deployment of changes across the entire ecosystem. Here are some general steps to structure/implement CD for an application with dependencies:
Implementing CD for an application with dependencies requires careful planning, coordination, and automation to ensure efficient and reliable deployments. By following best practices such as automation, version control, testing, monitoring, rollback strategies, and effective communication, you can ensure a smooth and successful CD process for your application ecosystem. </b></details>
<details> <summary>How do you measure your CI/CD quality? Are there any metrics or KPIs you are using for measuring the quality?</summary> <b>Measuring the quality of CI/CD processes is crucial to identify areas for improvement, ensure efficient and reliable software delivery, and achieve continuous improvement. Here are some commonly used metrics and KPIs (Key Performance Indicators) to measure CI/CD quality:
These are just some examples of metrics and KPIs that can be used to measure the quality of CI/CD processes. It's important to choose metrics that align with the goals and objectives of your organization and regularly track and analyze them to continuously improve the CI/CD process and ensure high-quality software delivery. </b></details>
<details> <summary>What is Jenkins?</summary> <b>Jenkins is an open source automation tool written in Java, with plugins built for Continuous Integration purposes. Jenkins is used to build and test your software projects continuously, making it easier for developers to integrate changes to the project and for users to obtain a fresh build. It also allows you to continuously deliver your software by integrating with a large number of testing and deployment technologies.
Jenkins integrates development life-cycle processes of all kinds, including build, document, test, package, stage, deploy, static analysis and much more.
</b></details>
<details> <summary>What are the advantages of Jenkins over its competitors? Can you compare it to one of the following systems: Travis, Bamboo, TeamCity, or CircleCI?</summary> <b>Jenkins has several advantages over its competitors, including Travis, Bamboo, TeamCity, and CircleCI. Here are some of the key advantages:
When comparing Jenkins to its competitors, there are some key differences in terms of features and capabilities. For example:
</b></details>
<details> <summary>Which Jenkins plugins do you consider the most useful?</summary> <b>This might be considered to be an opinionated answer:
Jenkins has a vast library of plugins, and the most commonly used plugins depend on the specific needs and requirements of each organization. However, here are some of the most popular and widely used plugins in Jenkins:
Pipeline: This plugin allows users to create and manage complex, multi-stage pipelines using a simple and easy-to-use scripting language. It provides a powerful and flexible way to automate the entire software delivery process, from code commit to deployment.
Git: This plugin provides integration with Git, one of the most popular version control systems used today. It allows users to pull code from Git repositories, trigger builds based on code changes, and push code changes back to Git.
Docker: This plugin provides integration with Docker, a popular platform for building, shipping, and running distributed applications. It allows users to build and run Docker containers as part of their build process, enabling easy and repeatable deployment of applications.
JUnit: This plugin provides integration with JUnit, a popular unit testing framework for Java applications. It allows users to run JUnit tests as part of their build process and generates reports and statistics on test results.
Cobertura: This plugin provides code coverage reporting for Java applications. It allows users to measure the code coverage of their tests and generate reports on which parts of the code are covered by tests.
Email Extension: This plugin provides advanced email notification capabilities for Jenkins. It allows users to customize the content and format of email notifications, including attachments, and send notifications to specific users or groups based on build results.
Artifactory: This plugin provides integration with Artifactory, a popular artifact repository for storing and managing binaries and dependencies. It allows users to publish and retrieve artifacts from Artifactory as part of their build process.
SonarQube: This plugin provides integration with SonarQube, a popular code quality analysis tool. It allows users to run code quality checks and generate reports on code quality metrics such as code complexity, code duplication, and code coverage.
</b></details>
<details> <summary>Have you used Jenkins for CI or CD processes? Can you describe them?</summary> <b>Let's assume we have a web application built using Node.js, and we want to automate its build and deployment process using Jenkins. Here is how we can set up a simple CI/CD pipeline using Jenkins:
This is just a simple example of a CI/CD pipeline using Jenkins, and the specific implementation details may vary depending on the requirements of the project. </b></details>
<details> <summary>What types of jobs are there? Which types have you used?</summary> <b>In Jenkins, there are various types of jobs, including: Freestyle jobs, Pipeline jobs, Multibranch Pipeline jobs, and Multi-configuration (matrix) jobs. </b></details>
<details> <summary>How can you report the result of a build to the relevant people?</summary> <b>You can report via:
email, instant-messaging notifications, dashboards, etc.
Each has its own disadvantages and advantages. Emails, for example, if sent too often, can eventually be disregarded or ignored. </b></details>
<details> <summary>You need to run unit tests every time a change is submitted to a given project. Describe in detail how your pipeline would look and what will be executed in each stage</summary> <b>The pipeline will have multiple stages: checking out the submitted change, installing dependencies, running the unit tests, and publishing the results. </b></details>
<details> <summary>How do you secure your Jenkins server?</summary> <b>Jenkins documentation provides some basic intro for securing your Jenkins server. </b></details>
<details> <summary>Describe how you add new nodes (agents) to Jenkins</summary> <b>You can describe the UI way to add new nodes, but it's better to explain how to do it in a way that scales, like a script or using a dynamic source of nodes such as one of the existing cloud plugins. </b></details>
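As a sketch of the scripted approach, an agent can be created from the Jenkins script console (or a Groovy init script). This assumes an inbound (JNLP) agent; the node name, filesystem path, executor count, and labels are placeholders:

```groovy
// Hypothetical script-console snippet for registering a Jenkins agent in code
import hudson.slaves.DumbSlave
import hudson.slaves.JNLPLauncher
import jenkins.model.Jenkins

// Define an inbound agent that will connect back to the controller
def agent = new DumbSlave(
    'linux-worker-1',          // node name (placeholder)
    '/home/jenkins/agent',     // remote filesystem root on the agent
    new JNLPLauncher()         // inbound (JNLP) connection method
)
agent.setNumExecutors(2)
agent.setLabelString('linux docker')

Jenkins.get().addNode(agent)   // register the node with the controller
```

Running this for a list of hosts, or generating it from an inventory, is what makes the approach scale compared to clicking through the UI.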
<details> <summary>How to acquire multiple nodes for one specific build?</summary> <b>To acquire multiple nodes for a specific build in Jenkins, you can use the "Parallel" feature in the pipeline script. The "Parallel" feature allows you to run multiple stages in parallel, and each stage can run on a different node.
Here is an example pipeline script that demonstrates how to acquire multiple nodes for a specific build:
```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            parallel {
                stage('Node 1') {
                    agent { label 'node1' }
                    steps {
                        echo 'Running build commands on Node 1'
                    }
                }
                stage('Node 2') {
                    agent { label 'node2' }
                    steps {
                        echo 'Running build commands on Node 2'
                    }
                }
                stage('Node 3') {
                    agent { label 'node3' }
                    steps {
                        echo 'Running build commands on Node 3'
                    }
                }
            }
        }
        stage('Deploy') {
            agent any
            steps {
                echo 'Deploying the built artifacts'
            }
        }
    }
}
```
In this example, the "Build" stage has three parallel stages, each running on a different node labeled as "node1", "node2", and "node3". The "Deploy" stage runs after the build is complete and runs on any available node.
To use this pipeline script, you will need to have the three nodes (node1, node2, and node3) configured in Jenkins. You will also need to ensure that the necessary build commands and dependencies are installed on each node. </b></details>
<details> <summary>Whenever a build fails, you would like to notify the team owning the job regarding the failure and provide failure reason. How would you do that?</summary> <b>In Jenkins, you can use the "Email Notification" plugin to notify a team when a build fails. Here are the steps to set up email notifications for failed builds:
With this setup, Jenkins will send an email notification to the specified recipients whenever a build fails, providing them with the failure reason and any other relevant information. </b></details>
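A pipeline-level alternative, assuming the Email Extension plugin is installed, is a `post { failure { ... } }` block with the `emailext` step. The recipient address and build command are placeholders:

```groovy
// Hypothetical Jenkinsfile: notify the owning team only when the build fails
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'   // assumed build command
            }
        }
    }
    post {
        failure {
            // Runs only on failure; BUILD_URL links to the failing run's logs
            emailext(
                to: 'team-x@example.com',
                subject: "FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                body: "The build failed. See ${env.BUILD_URL}console for the failure reason."
            )
        }
    }
}
```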
<details> <summary>There are four teams in your organization. How to prioritize the builds of each team? So the jobs of team x will always run before team y for example</summary> <b>In Jenkins, you can prioritize the builds of each team by using the "Priority Sorter" plugin. Here are the steps to set up build prioritization:
With this setup, Jenkins will prioritize the builds of each team based on the priority value set in the job configuration. Jobs owned by Team X will have a higher priority than jobs owned by Team Y, ensuring that they are executed first. </b></details>
<details> <summary>If you are managing a dozen jobs, you can probably use the Jenkins UI. But how do you manage the creation and deletion of hundreds of jobs every week/month?</summary> <b>Managing the creation and deletion of hundreds of jobs every week/month in Jenkins can be a daunting task if done manually through the UI. Here are some approaches to managing large numbers of jobs efficiently:
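One common approach is the Job DSL plugin, where a single "seed" job generates and updates all jobs from code kept in version control. A sketch, with hypothetical service names and repository URLs:

```groovy
// Hypothetical Job DSL seed script: one job definition, many generated jobs
def services = ['billing', 'inventory', 'auth']   // assumed service list

services.each { svc ->
    pipelineJob("${svc}-ci") {
        description("CI pipeline for the ${svc} service (generated; do not edit by hand)")
        definition {
            cpsScm {
                scm {
                    git {
                        remote { url("https://example.com/git/${svc}.git") }  // placeholder URL
                        branch('main')
                    }
                }
                scriptPath('Jenkinsfile')
            }
        }
    }
}
```

Adding or removing a service then becomes a one-line change in the seed script instead of manual job creation or deletion in the UI.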
</b></details>
<details> <summary>What types of pipelines are there in Jenkins? What is the difference between scripted and declarative pipelines?</summary> <b>Jenkins supports two types of pipelines: Scripted pipelines and Declarative pipelines.
Scripted pipelines use Groovy syntax and provide a high degree of flexibility and control over the build process. Scripted pipelines allow developers to write custom code to handle complex scenarios, but can be complex and hard to maintain.
Declarative pipelines are a newer feature and provide a simpler way to define pipelines using a structured, opinionated Groovy-based DSL (not YAML). Declarative pipelines make it easier to get started with pipelines and reduce the risk of errors.
Some key differences between the two types of pipelines are:
I am familiar with both types of pipelines, but generally prefer declarative pipelines for their ease of use and simplicity. </b></details>
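As a sketch, the same two-stage build could look like this in each style. These are two separate Jenkinsfiles shown in one block; the agent label and `make` commands are illustrative assumptions:

```groovy
// --- Scripted style: plain Groovy, arbitrary control flow ---
node('linux') {
    stage('Build') {
        sh 'make build'              // assumed build command
    }
    stage('Test') {
        try {
            sh 'make test'           // assumed test command
        } finally {
            junit 'reports/**/*.xml' // publish results even if tests fail
        }
    }
}

// --- Declarative style: fixed structure, post section instead of try/finally ---
pipeline {
    agent { label 'linux' }
    stages {
        stage('Build') { steps { sh 'make build' } }
        stage('Test')  { steps { sh 'make test' } }
    }
    post {
        always { junit 'reports/**/*.xml' }
    }
}
```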
<details> <summary>How would you implement an option of starting a build from a certain stage and not from the beginning?</summary> <b>To implement an option of starting a build from a certain stage in a Jenkins pipeline, you can add a parameter that names the starting stage and use the when directive to skip every stage that comes before it. Note that declarative pipelines don't expose the current or previous stage as a built-in variable, so the stage order has to be encoded explicitly:
```groovy
pipeline {
    agent any
    parameters {
        choice(name: 'START_STAGE', choices: ['Build', 'Test', 'Deploy'], description: 'The name of the stage to start the build from')
    }
    stages {
        stage('Build') {
            // Runs only when the build starts from the beginning
            when { expression { params.START_STAGE == 'Build' } }
            steps {
                echo 'Build steps go here'
            }
        }
        stage('Test') {
            // Skipped when the build starts from the Deploy stage
            when { expression { params.START_STAGE in ['Build', 'Test'] } }
            steps {
                echo 'Test steps go here'
            }
        }
        stage('Deploy') {
            // The last stage runs regardless of the chosen starting stage
            steps {
                echo 'Deploy steps go here'
            }
        }
    }
}
```
When triggering the pipeline (for example, via "Build with Parameters" in the UI), set the START_STAGE parameter to 'Test' to skip the Build stage and start directly from the Test stage.
</b></details>
<details> <summary>How do you develop a Jenkins plugin?</summary> <b>Developing a Jenkins plugin requires knowledge of Java and familiarity with the Jenkins API. The process typically involves setting up a development environment, creating a new plugin project, defining the plugin's extension points, and implementing the desired functionality using Java code. Once the plugin is developed, it can be packaged and deployed to Jenkins.
The Jenkins plugin ecosystem is extensive, and there are many resources available to assist with plugin development, including documentation, forums, and online communities. Additionally, Jenkins provides tools such as Jenkins Plugin POM Generator and Jenkins Plugin Manager to help with plugin development and management. </b></details>
<details> <summary>Have you written Jenkins scripts? If yes, what for and how do they work?</summary> <b> </b></details>
<details> <summary>What is a Workflow in GitHub Actions?</summary> <b>A YAML file that defines the automation actions and instructions to execute upon a specific event.
The file is placed in the repository itself.
A Workflow can be anything - running tests, compiling code, building packages, ... </b></details>
<details> <summary>What is a Runner in GitHub Actions?</summary> <b>A workflow has to be executed somewhere. The environment where the workflow is executed is called Runner.
A Runner can be an on-premise host or a GitHub-hosted one. </b></details>
<details> <summary>What is a Job in GitHub Actions?</summary> <b>A job is a series of steps which are executed on the same runner/environment.
A workflow must include at least one job. </b></details>
<details> <summary>What is an Action in GitHub Actions?</summary> <b>An action is the smallest unit in a workflow. It includes the commands to execute as part of the job. </b></details>
<details> <summary>In a GitHub Actions workflow, what is the 'on' attribute/directive used for?</summary> <b>It specifies upon which events the workflow will be triggered.
For example, you might configure the workflow to trigger every time a change is pushed to the repository. </b></details>
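A minimal workflow tying these concepts together — trigger (`on`), job, steps, and an action; the file name, branch, and commands are illustrative:

```yaml
# Hypothetical .github/workflows/greet.yml
name: Greet
on:
  push:                # trigger: every push...
    branches: [main]   # ...to the main branch

jobs:
  greet:               # a job: a series of steps on the same runner
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # an action: the smallest reusable unit
      - run: echo "Hello from GitHub Actions"
```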
<details> <summary>True or False? In Github Actions, jobs are executed in parallel by default</summary> <b>True </b></details>
<details> <summary>How to create dependencies between jobs so one job runs after another?</summary> <b>Using the "needs" attribute/directive:
```yaml
jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - run: echo "first"
  job2:
    needs: job1
    runs-on: ubuntu-latest
    steps:
      - run: echo "second"
```
In the above example, job1 must complete successfully before job2 runs. </b></details>
<details> <summary>How to add a Workflow to a repository?</summary> <b>CLI: add a YAML workflow file under the .github/workflows directory in the repository, then commit and push it.
UI: in the repository's "Actions" tab, choose a workflow template (or set up a workflow yourself) and commit the generated file. </b></details>
<details> <summary>In Zuul, what are the <code>check</code> pipelines?</summary> <b>check pipelines are triggered when a patch is uploaded to a code review system (e.g. Gerrit).
</b></details>
<details> <summary>In Zuul, what are the <code>gate</code> pipelines?</summary> <b>gate pipelines are triggered when a code reviewer approves the change in a code review system (e.g. Gerrit).
</b></details>
<details> <summary>True or False? In Zuul, the <code>check</code> pipelines run before the <code>gate</code> pipelines</summary> <b>True. check pipelines run when the change is uploaded, while the gate pipelines run when the change is approved by a reviewer.
</b></details>