
How to find the best solution for Azure DevOps Pipeline automation?

SoftwareOne blog editorial team

The power of container portability makes them an attractive option when building and architecting cloud solutions. That’s why for most of us, containers are associated with building cloud-native applications and microservices platforms. However, they can be really helpful in other scenarios too – for example, DevOps automation. In this article, we will talk about how containers together with Azure DevOps can support DevOps automation when it comes to Microsoft Azure solutions.

Why use containers with Azure DevOps?

The answer here is twofold. First, containers may be a more efficient technical option for your solution, given the limitations of Virtual Machines (we will get to that in a moment). Second, it is good FinOps practice. What is FinOps? In short, it's an approach whereby you make the most of your cloud resources by optimising costs and not spending more than necessary.

So, how do containers come into play? Let us set the stage a little. Assume that we use Azure DevOps to keep our solution source code together with our Azure infrastructure code. The application consists of many different Azure services operating within an Azure Virtual Network. To keep the environment secure, the Function Apps are integrated with the VNET using Private Links. You might already know this, but with such an architecture and configuration, using Microsoft-Hosted Azure DevOps Agents can be problematic. Why? Because by default they will not be able to reach the Azure Functions to deploy your code: as the Azure resources are located inside the Azure Virtual Network, direct access to them is blocked.

What are other options for deploying with Azure DevOps pipelines?

We have three additional options to still be able to deploy our code using an Azure pipeline.

Azure DevOps service tag

The Azure DevOps service tag enables access to the Virtual Network from Azure DevOps. This is not a perfect solution because we cannot enable access for a specific Microsoft-Hosted Agent; instead, we have to open a whole range of IP addresses. Sometimes service tags are not an option due to security restrictions: in this scenario, our Virtual Network is opened to all Microsoft-Hosted Agents in a given region, none of which are owned by us.
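As a rough illustration of what enabling a service tag could look like, here is a minimal sketch using the azure-mgmt-network Python SDK. The resource names, rule settings and even the service tag name used here are assumptions for illustration, not values from this article; check the current Microsoft documentation for the exact tag that covers Microsoft-Hosted Agents in your region.

```python
# Hypothetical sketch: open the VNET's NSG to an Azure DevOps-related service tag.
# All names, the priority and the tag itself are placeholders / assumptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

rule = SecurityRule(
    name="allow-azure-devops",
    priority=200,
    direction="Inbound",
    access="Allow",
    protocol="Tcp",
    source_address_prefix="AzureDevOps",        # a service tag, i.e. a whole IP range
    source_port_range="*",
    destination_address_prefix="VirtualNetwork",
    destination_port_range="443",
)

client.security_rules.begin_create_or_update(
    "<resource-group>", "<nsg-name>", "allow-azure-devops", rule
).result()
```

Note how the rule can only reference the tag as a whole, which is exactly the drawback described above: you open the network to an IP range you do not own, not to a single agent.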

Azure Virtual Machine

We can use an Azure Virtual Machine integrated with our VNET and install a Self-Hosted Azure DevOps Agent on it. This solves the network access problem, but introduces challenges around VM cost and maintenance.

Self-Hosted Agents with containers

We can also run Self-Hosted Agents as containers. In this case, we have three ways to run them: Azure Container Instances, Azure Container Apps, and Azure Kubernetes Service (AKS). The key benefit of this approach is reduced cost in comparison to Virtual Machines. So, let's review the two recommended approaches for running Self-Hosted Agents in the Azure cloud to deploy our code to resources within an Azure Virtual Network.

Using Virtual Machines for DevOps automation

The most popular way to use Azure DevOps Self-Hosted Agents is to host them using an Azure Virtual Machine integrated with Azure Virtual Network. With this method, we can easily deploy code to resources like Azure Web Apps and Function Apps in the same Azure Virtual Network. Unfortunately, this approach is not without certain problems.

Cost of Azure VMs

Azure Virtual Machines are quite expensive. Take, as an example, a small Linux machine with low usage: the estimated monthly cost is around $13.19. Of course, it can be lower or higher depending on usage. At $13.19 per month, the yearly cost comes to $158.28. If you decide to use a Windows machine, the cost will be higher still.
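The arithmetic is simple enough to sanity-check yourself; a two-line sketch using the example rate quoted above:

```python
# Yearly cost of the example Linux VM, based on the $13.19/month estimate above.
monthly_cost = 13.19
print(f"Yearly cost: ${monthly_cost * 12:.2f}")   # -> Yearly cost: $158.28
```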

Maintaining Azure VMs

As you probably know, Azure Virtual Machines are under the IaaS (Infrastructure as a Service) cloud service model. It means that there are many things we have to take care of, such as operating system updates, installation of proper applications, and security.

The upside of this approach is that we have full control over the VM and the tools installed on it. This can be helpful when we have a complex CI/CD process.

Using containers for DevOps automation

Even though containers are mostly associated with building cloud-native applications, they are also a perfect match for running Self-Hosted Azure DevOps Agents. The Azure cloud offers many container services; here we will cover the three most popular and useful options when it comes to DevOps automation. Note: in this article we do not focus on creating Docker images with Azure DevOps Agents; Microsoft's documentation provides a step-by-step guide on how to create a Docker image with a Self-Hosted Agent.

Azure Container Instances

Azure Container Instances let you run a container in Azure without managing Virtual Machines and without adopting a higher-level service. They are useful for scenarios that can operate in isolated containers, including simple applications, task automation, and the build jobs used in our pipelines. If we look at the pricing, the cost is quite low compared to Azure Virtual Machines: memory for a container group is billed at around $0.0000012 per GB per second, so running a single container group with 1 GB of memory for 60 seconds costs a tiny fraction of a cent.

What's important here is that when we create Azure Container Instances, we can choose to run a single container or a group of containers. A container group is a collection of containers that get scheduled on the same host machine; containers in a container group share a lifecycle, resources, local network, and storage volumes. The consequence is that they cannot be scaled dynamically: once a Container Instance is created, there is no option to change the number of groups (or pods, in Kubernetes terms).

Cost estimate

To make the pricing easier to understand, let's do a simple calculation using the Microsoft pricing calculator. We create a Linux container group with 1.3 vCPU and 2.15 GB of memory, running 50 times daily for a month (30 days). Each container group runs for 150 seconds. In this example, the vCPU and memory usage must be rounded up (to 2 vCPU and 2.2 GB) to calculate the total cost.

Memory cost = number of container groups * duration (seconds) * GB * price per GB-second * number of days
50 container groups * 150 seconds * 2.2 GB * $0.00000124 per GB-second * 30 days = $0.612

vCPU cost = number of container groups * duration (seconds) * vCPUs * price per vCPU-second * number of days
50 container groups * 150 seconds * 2 vCPU * $0.00001125 per vCPU-second * 30 days = $5.063

Total cost = memory cost + vCPU cost
In our scenario, this amounts to: $0.612 + $5.063 = $5.675

It is worth mentioning that the price varies based on the operating system used for running the containers (Windows or Linux). There is an additional charge of $0.000012 per vCPU-second for Windows software duration on Windows container groups.
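For readers who prefer code to prose, here is the same estimate as a small Python sketch; the unit prices are the example Linux rates quoted above and will vary by region:

```python
# Reproduce the Azure Container Instances cost estimate from the text above.
runs_per_day = 50          # container groups started per day
duration_s   = 150         # seconds each container group runs
days         = 30
vcpu         = 2           # 1.3 vCPU rounded up
memory_gb    = 2.2         # 2.15 GB rounded up
price_per_gb_second   = 0.00000124   # USD, Linux memory price
price_per_vcpu_second = 0.00001125   # USD, Linux vCPU price

memory_cost = runs_per_day * duration_s * memory_gb * price_per_gb_second * days
vcpu_cost   = runs_per_day * duration_s * vcpu * price_per_vcpu_second * days

print(f"Memory: ${memory_cost:.3f}  vCPU: ${vcpu_cost:.3f}  Total: ${memory_cost + vcpu_cost:.3f}")
# Roughly $0.61 + $5.06, about $5.68 in total; tiny differences from the calculator are just rounding.
```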

Example use case

We can use Azure Container Instances to run Azure DevOps Self-Hosted Agents. Once the Container Instance is created, we can schedule an Azure DevOps Pipeline run. In a typical case, we run one Self-Hosted Agent in one Azure Container Instance. It means that if we want more Self-Hosted Agents to handle scheduled pipeline runs, we have to create more Azure Container Instances; there is no dynamic or event-driven scaling available. If we have three separate jobs scheduled in our Azure DevOps Pipelines, each job will be queued and run one after another.
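To make the "one agent per instance" model more concrete, here is a minimal sketch that starts a single agent container group with the azure-mgmt-containerinstance Python SDK. The image, pool, sizing and resource names are placeholders, and we assume the image was built following Microsoft's self-hosted-agent-in-Docker guide, which configures the agent through the AZP_URL, AZP_TOKEN and AZP_POOL environment variables.

```python
# Hypothetical sketch: run one Azure DevOps Self-Hosted Agent in one container group.
# Image, names and sizing are placeholders, not values from this article.
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, EnvironmentVariable,
    ResourceRequests, ResourceRequirements,
)

client = ContainerInstanceManagementClient(DefaultAzureCredential(), "<subscription-id>")

agent = Container(
    name="azp-agent",
    image="myregistry.azurecr.io/azp-agent:latest",   # image with the agent baked in
    resources=ResourceRequirements(
        requests=ResourceRequests(cpu=2.0, memory_in_gb=4.0)
    ),
    environment_variables=[
        EnvironmentVariable(name="AZP_URL", value="https://dev.azure.com/<organisation>"),
        EnvironmentVariable(name="AZP_POOL", value="aci-agents"),
        # The PAT is passed as a secure value so it is not readable in the portal.
        EnvironmentVariable(name="AZP_TOKEN", secure_value=os.environ["AZP_TOKEN"]),
    ],
)

group = ContainerGroup(
    location="westeurope",
    os_type="Linux",
    restart_policy="Never",     # the agent picks up the queued job, then the group stops
    containers=[agent],
    # VNET/subnet integration is omitted here for brevity.
)

client.container_groups.begin_create_or_update(
    "<resource-group>", "azp-agent-1", group
).result()
```

Creating a second or third agent means repeating the call with a different container group name, which is exactly the manual scaling limitation discussed in the next section.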

Considerations

There are, of course, some disadvantages to this approach. For example, it is not possible to scale up a specific ACI instance: if you want more CPU or memory, you need to redeploy the container. There is also no option to scale Azure Container Instances horizontally based on the number of pipeline runs pending in a given agent pool; to get five agents, you create five distinct container instances. It means that we have to architect our pipelines so that they first create the Azure Container Instance and then execute the actual job. But there are, of course, also some benefits to using Azure Container Instances:

  • It does not use public IPs. It doesn’t need one, as the Azure DevOps agent initiates the communication to the service.
  • It does not have any exposed ports. There’s no need for publishing anything.
  • It can be provisioned very quickly. Fully configuring a container instance with the required components takes 5-10 minutes.

To summarise, Azure Container Instances are a perfect match when we need to run Azure DevOps Self-Hosted Agents, we do not want to maintain Azure Virtual Machines, and we want to reduce cost. It is worth noting that Azure Container Instances can be deployed into an Azure Virtual Network, so the agent can reach other resources in that network.

Azure Container Apps

This is our favourite container service in the Azure cloud. Azure Container Apps enables you to run microservices and containerised applications on a serverless platform, so you can forget about managing complex Kubernetes clusters but still benefit from Kubernetes concepts. One of the biggest advantages here is autoscaling. Applications built on Azure Container Apps can dynamically scale based on the following characteristics:

  • HTTP traffic
  • Event-driven processing
  • CPU or memory load
  • Any KEDA-supported scaler.

Azure Container Apps manages automatic horizontal scaling through a set of declarative scaling rules. As a container app scales out, new instances of it are created on demand; these instances are known as replicas. When you first create a container app, the minimum replica count is set to zero, and no charges are incurred while an application is scaled to zero. Can we use Azure Container Apps to run Azure DevOps Self-Hosted Agents? Of course we can! Here is how.

Scaling

The biggest advantage over Azure Container Instances is the fact that we can automatically scale the number of containers running our Self-Hosted Agents.

Note: This is a good place to mention that Azure Container Apps supports KEDA ScaledObjects and all of the available KEDA scalers. KEDA is a Kubernetes-based Event Driven Autoscaler: with KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed. The Azure Container Apps documentation includes a section explaining how to enable KEDA scaling. With KEDA, we can scale Azure Container Apps based on agent pool queues for Azure Pipelines.

Where is the tricky part? You can run the agents with KEDA as a Deployment or a Job and scale them accordingly with a ScaledObject or a ScaledJob. At the time of writing this article, Azure Container Apps with KEDA supports only ScaledObject. The problem is that when container apps scale down, they can stop any instance, including one running a long pipeline job; when running your agents as a Deployment, you have no control over which pod gets killed when scaling down. Using a ScaledJob is the preferred way to autoscale your Azure Pipelines agents if you have long-running jobs. We recommend you read more details in this article. Additionally, as we're writing this, there is also an issue opened by Jeff Hollan on GitHub to support the KEDA ScaledJobs / Jobs pattern.

Anyway, we can still host multiple Azure DevOps Self-Hosted Agents with Azure Container Apps and use them to deploy to resources in the Azure Virtual Network, because a Container Apps Environment can be created in an existing Azure Virtual Network.
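To make the "scale on the agent pool queue" idea more tangible, here is a small sketch that counts pending jobs for an agent pool via the Azure DevOps REST API, which is conceptually what KEDA's azure-pipelines scaler checks for you. The organisation URL, pool ID and API version are assumptions for illustration.

```python
# Hypothetical sketch: count queued (not yet assigned) jobs for an agent pool.
# KEDA's azure-pipelines scaler performs a similar check to decide how far to scale.
import os
import requests

ORG_URL = "https://dev.azure.com/<organisation>"   # placeholder
POOL_ID = 12                                       # placeholder agent pool id
PAT     = os.environ["AZP_TOKEN"]                  # personal access token

resp = requests.get(
    f"{ORG_URL}/_apis/distributedtask/pools/{POOL_ID}/jobrequests",
    params={"api-version": "7.1"},                 # assumed API version
    auth=("", PAT),                                # basic auth with an empty username
)
resp.raise_for_status()

# A job request that has not been assigned to an agent yet is still waiting in the queue.
pending = [r for r in resp.json()["value"] if "assignTime" not in r and "result" not in r]
print(f"Pending pipeline jobs: {len(pending)}")
```

The more pending jobs there are, the more agent replicas (or jobs) the scaler asks for, which is why the ScaledObject-versus-ScaledJob distinction above matters for long-running pipelines.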

Cost estimates

The pricing for Azure Container Apps is quite attractive. Let us start with the important information that the following resources are free during each calendar month, per subscription:

  • The first 180,000 vCPU-seconds
  • The first 360,000 GiB-seconds
  • The first 2 million HTTP requests.

Of course, in the case of Azure DevOps Agents, the last item above does not apply. Why? Because the Self-Hosted Agent communicates with Azure Pipelines or Azure DevOps Server to determine which job it needs to run and to report the logs and job status, and this communication is always initiated by the agent.
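To get a rough feel for how far the free grant stretches, we can reuse the workload from the Container Instances estimate (50 runs a day, 150 seconds each) and assume each agent replica is allocated 2 vCPU and about 2 GiB of memory; this back-of-the-envelope sketch ignores Container Apps' active/idle billing split and allocation rounding:

```python
# Rough comparison of the example agent workload against the monthly free grant.
runs_per_day, duration_s, days = 50, 150, 30
vcpu, memory_gib = 2, 2          # assumed per-replica allocation

vcpu_seconds = runs_per_day * duration_s * vcpu * days        # 450,000
gib_seconds  = runs_per_day * duration_s * memory_gib * days  # 450,000

print(f"vCPU-seconds used: {vcpu_seconds:,} (first 180,000 are free)")
print(f"GiB-seconds used:  {gib_seconds:,} (first 360,000 are free)")
# Only consumption above the free grant is billed, so a light agent workload stays very cheap.
```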
