
Getting Started with Azure DevOps Pipelines (Part 1)

May 19, 2020
By Steve Danner

Continuous Integration/Continuous Delivery, often abbreviated CI/CD, has become a hot-button topic in the last few years alongside the rise of "DevOps" methodologies and practices. But what exactly are these things, and why should you care? With technology, there are almost always two angles from which to answer that question: through the eyes of the IT staff and through the eyes of the business stakeholders. This post covers the why and how from the IT perspective; we will follow up on the business side in a later post.

Continuous Integration and Continuous Delivery, while often mentioned and implemented together, are two separate practices. Azure DevOps facilitates both of them with what it simply calls "Pipelines". These are broken up into what are, again, just called "Pipelines", as well as "Releases". Build Pipelines script out their tasks in YAML (YAML Ain't Markup Language), and each of the two is referred to as a "pipeline" of sorts, which can get confusing and leave you wondering what the difference is. Someone on Reddit put it perfectly: Pipelines don't output anything and are only meant to ensure builds and unit tests run, whereas Releases DO emit output, often deploying or emitting deployment packages in the process. That's a helpful way to keep straight what each should be doing. It's also helpful to remember that, most often, your Release Pipeline is going to consume the output of your Continuous Integration "Pipeline".

Setting up your initial Pipeline for a simple .NET project is fairly straightforward. However, I imagine some builds could get complex if you require a lot of unusual steps or generate a lot of artifacts. When you go to create your first pipeline, you'll be presented with the screen below. We'll click that "Create Pipeline" button to get started!

Your first pipeline

Once we click that button, we're first asked where our source code is stored. For demo purposes, we're just using Git via Azure Repos, but I've used Atlassian Bitbucket as well, and Microsoft has made it dead simple to integrate it and several other third-party source control providers.

Where is your code?

Next, we will be presented with a screen to select our repository from our chosen provider. Note that here it was smart enough to filter down to the "PipelinesDemo" DevOps project repos.

Select your repository

After that, we need to select our project type. We'll be doing a simple ASP.NET Core application for our demo.

Select project type

Finally, we're given a YAML template based on our selected project type. This is the template for .NET Core:

YAML default template with addition
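
In text form, the generated template looks roughly like the sketch below, with our PublishPipelineArtifact addition tacked onto the end. Treat this as a sketch rather than the exact generated file: the task versions, the msbuildArgs, and the artifact name ("drop") are typical defaults and may differ slightly in your project.

trigger:
- master

pool:
  vmImage: 'windows-latest'

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'

steps:
# Restore NuGet packages for every project in the solution
- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'

# Compile the solution; the msbuildArgs do the equivalent of Visual Studio's "Publish"
- task: VSBuild@1
  inputs:
    solution: '$(solution)'
    msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(Build.ArtifactStagingDirectory)"'
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

# Run unit tests; a failed test fails the build
- task: VSTest@2
  inputs:
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

# Our addition: publish the packaged output so a Release pipeline can consume it
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)'
    artifactName: 'drop'   # the name is our choice; 'drop' is just a common convention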

Let's go down the list and see what the template is doing for us.

  • The "trigger" at the top is what branch of your repository is going to trigger the build and how. Generally speaking, any time your source code repository changes, you want a build to get triggered! This can vary based on your branching strategy. I like to use an environment branching strategy, so normally we'll have a "staging" and "Production" environment to deploy to. For this simplified example, we're going to just have our master branch represent "Production".
  • The "pool" is simply the type of environment the build will run on within Azure. In this case, it'll run on the latest version of Windows.
  • The "variables" section simply defines a set of variables to be used later in the YAML file.
  • The "steps" section will contain all of our tasks to be performed on each build.
  • The "NuGetCommand" task will restore all of our projects' NuGet packages. Note that it makes use of the solution file variable defined earlier to obtain the projects that require NuGet restore commands.
  • The "VSBuild" task is your normal build/compile. It will execute all build tasks in the solution including pre and post-build events! It's important to remember that you'll be on a "clean" machine here, there's certain things like NodeJS you can expect, but each VM is thrown out after each use. Here is some more info on what you can expect on the target machine: https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&tabs=browser. It is important to note that the "msbuildArgs" input item is actually doing the equivalent of Visual Studio's "Publish"! This output will be used later in our Release Pipeline. Additionally, we are going to modify this file to add one final task at the end : PublishPipelineArtifact, which will make the published output available for other pipelines downstream.
  • Finally, the "VSTest" task is your normal unit test execution engine. In a normal CI environment, failed unit tests should fail the build, right? That's exactly what happens here.
  • There is also a host of built-in tasks that can be utilized in your YAML file for more complicated scenarios.
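
As an aside on the trigger: if you follow an environment branching strategy like the one mentioned above, the trigger section can simply list multiple branches. The branch names below ("staging" and "Production") are just the hypothetical ones from that example.

# Hypothetical trigger for an environment-branching strategy:
# a push to either branch kicks off a build.
trigger:
  branches:
    include:
    - staging
    - Production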

This represents a fairly normal Continuous Integration build, albeit a simple one. Most builds will, at the very least, have NPM installs to run, Angular CLI commands, or file-copy post-build events (a sketch of what that might look like follows below). When you're working with the more complicated aspects of a build, remember that it will run on a CLEAN virtual machine on every pass! You can replicate the build environment locally by setting up a clean directory, re-cloning your repository into it, and attempting to build there. If you're running on the free version of DevOps, you'll be constrained by build minutes, so it's important to get things as solid as possible in your local environment before attempting to debug builds in the Azure DevOps cloud.
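
For instance, if your solution included an Angular front end, the NPM install and Angular CLI build might be added as extra steps ahead of the VSBuild task. This is only a sketch under assumptions: the "ClientApp" folder name and the ng build arguments are placeholders for whatever your repository actually uses.

# Hypothetical additions for a solution with an Angular front end.
# "ClientApp" is an assumed folder name; adjust it to your repository layout.
- task: Npm@1
  inputs:
    command: 'install'
    workingDir: 'ClientApp'

- script: npx ng build --prod
  workingDirectory: ClientApp
  displayName: 'Build the Angular app'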

Once we save and run our build, you can watch the build's progress in the built-in log viewer!

A running DevOps Pipeline build

This log is invaluable for debugging and resolving issues. More than likely, someone else has run into your problem and posted a solution to it.

Once you do have working builds, your team will be emailed every time a build completes, indicating whether it succeeded and, if it failed, what errors occurred. So now, if Bob checks in some code and breaks the build, Jim and Bill know to immediately inform Bob so he can correct it. No more sifting through 20 check-ins to see which one broke the build! You'll be able to match up exactly which check-in broke the build and when, which can cut troubleshooting time by hours, if not days, each time a build breaks. While the initial setup can sometimes be a challenge, if you get your builds going early, keeping them running should be a straightforward ongoing task as your application grows and the build gets more complicated. If your boss is torn on paying for DevOps, the troubleshooting time saved alone should be an easy sell to get him or her on board! Additionally, all team members AND project stakeholders can always go to the Pipelines web page and check out build statuses.

Recently run Pipeline builds

This post has gotten a little long-winded, so we'll follow up with Release Pipelines in our next post! We'll show how to connect the output of this build to the Release Pipeline, set up deployments to Azure, and set up management approvals for each Production release. Stay tuned!


About the author

Steve Danner is the Director of Technology at Software Design Partners. He specializes in software development implementations, but also manages the company's network infrastructure, provides technology and application architecture direction for the development team, and engages clients to provide technology solutions for their business challenges.