Tag: Azure DevOps

  • Migrating to Github Packages

    I have been running a free version of Proget locally for years now. It served as a home for Nuget packages, Docker images, and Helm charts for my home lab projects. But, in an effort to slim down the apps that are running in my home lab, I took a look at some alternatives.

    Where can I put my stuff?

    When I logged in to my Proget instance and looked around, it occurred to me that I only had 3 types of feeds: Nuget packages, Docker images, and Helm charts. So to move off of Proget, I needed to find replacements for each of these.

    Helm Charts

    Back in the heady days of using Octopus Deploy for my home lab, I used published Helm charts to deploy my applications. However, since I switched to a Gitops workflow with ArgoCD, I haven’t published a Helm chart in a few years. I deleted that feed in Proget. One down, two to go.

    Nuget Packages

    I have made a few different attempts to create Nuget packages for public consumption. A number of years ago, I tried publishing a data layer that was designed to be used across platforms (think APIs and mobile applications), but even I stopped using that in favor of Entity Framework Core and good old-fashioned data models. More recently, I created some “platform” libraries to encapsulate some of the common code that I use in my APIs and other projects. They serve as utility libraries as well as a reference architecture for my professional work.

    There are a number of options for hosting Nuget feeds, with varying costs depending on structure. I considered the following options:

    • Azure DevOps Artifacts
    • Github Packages
    • Nuget.org

    I use Azure DevOps for my builds, and briefly considered using the artifacts feeds. However, none of my libraries are private. Everything I am writing is a public repository in Github. With that in mind, it seemed that the free offerings from Github and Nuget were more appropriate.

    I published the data layer packages to Nuget previously, so I have some experience with that. However, with these platform libraries, while they are public, I do not expect them to be heavily used. For that reason, I decided that publishing the packages to Github Packages made a little more sense. If these platform libraries get to the point where they are heavily used, I can always publish stable packages to Nuget.org.

    Container Images

    In terms of storage, container images take up the bulk of my Proget usage. Now, I only have 5 container images, but I never clean anything up, so those 5 images are taking up about 7 GB of data. When I was investigating alternatives, I wanted to make sure I had some way to clean up old pre-release tags and manifests to keep my usage down.

    I considered two alternatives:

    • Azure Container Registry
    • Github Container Registry

    An Azure Container Registry instance would cost me about $5 a month and provide me with 10 GB of storage. Github Container Registry provides 500 MB of storage and 1 GB of data transfer per month, but those limits only apply to private repositories.

    As with my Nuget packages, nothing that I have is private. Github packages are free for public packages. Additionally, I found a Github task that will clean up the images. As this was one of my “new” requirements, I decided to take a run at Github packages.

    Making the switch

    With my current setup, the switch was fairly simple. Nuget publishing is controlled by my Azure DevOps service connections, so I created a new service connection for my Github feed. The biggest change was some housekeeping to add appropriate information to the Nuget package itself. This included adding the RepositoryUrl property to the .csproj files, which tells Github which repository to associate the package with.
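    For reference, that housekeeping is just a couple of metadata properties in a PropertyGroup; a minimal sketch, with a placeholder repository URL:

    <PropertyGroup>
      <!-- Tells Github which repository to associate the package with -->
      <RepositoryUrl>https://github.com/myuser/my-platform-library</RepositoryUrl>
      <RepositoryType>git</RepositoryType>
    </PropertyGroup>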

    The container registry move wasn’t much different: again, some housekeeping to add the appropriate labels to the images. From there, a few template changes and the images were in the Github container registry.
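    Github links a container image to a repository via the org.opencontainers.image.source label, so that housekeeping amounts to a single line in each Dockerfile (the URL here is a placeholder):

    LABEL org.opencontainers.image.source=https://github.com/myuser/my-image-repo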

    Overall, the changes were pretty minimal. I have a few projects left to convert, and once that is done, I can decommission my Proget instance.

    Next on the chopping block…

    I am in the beginning stages of evaluating Azure Key Vault as a replacement for my Hashicorp Vault instance. Although it comes at a cost, for my usage it is most likely under $3 a month, and getting away from self-hosted secrets management would make me a whole lot happier.

  • Supporting a GitHub Release Flow with Azure DevOps Builds

    It has been a busy few months, and with the weather changing, I have a little more time in front of the computer for hobby work. Some of my public projects were in need of a few package updates, so I started down that road. Most of the updates were pretty simple: a few package updates and some Azure DevOps step template updates and I was ready to go. However, I had been delaying my upgrade to GitVersion 6, and in taking that leap, I changed my deployment process slightly.

    Original State

    My current development process supports three environments: test, stage, and production. Commits to feature/* branches are automatically deployed to the test environment, and any builds from main are first deployed to stage and then can be deployed to production.

    For me, this works: I am usually only working on one branch at a time, so publishing feature branches to the test environment works. When I am done with a branch, I merge it into main and get it deployed.

    New State

    As I have been working through some processes at work, it occurred to me that versions are about releases, not necessarily commits. While commits can help us number releases, they shouldn’t be the driving force. GitVersion 6 and its new workflow defaults drive this home.

    So my new state would be pretty similar: feature/* branches get deployed to the test environment automatically. The difference lies in main: I no longer want to release with every commit to main. I want to be able to control releases through the use of tags (and GitHub releases, which generate tags).

    So I flipped over to GitVersion 6 and modified my GitVersion.yml file:

    workflow: GitHubFlow/v1
    merge-message-formats:
      pull-request: 'Merge pull request \#(?<PullRequestNumber>\d+) from'
    branches:
      feature:
        mode: ContinuousDelivery

    I modified my build pipeline to always build, but only trigger code release for feature/* branch builds and builds from a tag. I figured this would work fine, but Azure DevOps threw me a curve ball.
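    For the curious, that gating boils down to a runtime condition on the release steps, something along these lines (a sketch, not my actual template; the task name is a stand-in):

    - task: MyReleaseTask@1
      displayName: 'Trigger code release'
      condition: or(startsWith(variables['Build.SourceBranch'], 'refs/heads/feature/'), startsWith(variables['Build.SourceBranch'], 'refs/tags/'))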

    Azure DevOps Checkouts

    When you build from a tag, Azure DevOps checks that tag out directly, using the refs/tags/<tagname> reference. When I tried to run GitVersion on this, I got a weird version number: a build on tag 1.3.0 resulted in 1.3.1-tags-1-3-0.1.

    I dug into GitVersion’s default configuration and noticed this corresponded with the unknown branch configuration. To get around the Azure DevOps behavior, I had to configure the tags/ branches explicitly:

    workflow: GitHubFlow/v1
    merge-message-formats:
      pull-request: 'Merge pull request \#(?<PullRequestNumber>\d+) from'
    branches:
      feature:
        mode: ContinuousDelivery
      tags:
        mode: ManualDeployment
        label: ''
        increment: Inherit
        prevent-increment:
          when-current-commit-tagged: true
        source-branches:
        - main
        track-merge-message: true
        regex: ^tags?[/-](?<BranchName>.+)
        is-main-branch: true

    This treats tags as main branches when calculating the version.

    Caveat Emptor

    This works if you ONLY tag your main branch. If you are in the habit of tagging other branches, this will not work for you. However, I only ever release from main branches, and I am in a fix-forward scenario, so this works for me. If you use release/* branches and need builds from there, you may need additional GitVersion configuration to get the correct version numbers to generate.

  • Terraform Azure AD

    Over the last week or so, I realized that while I bang the drum of infrastructure as code very loudly, I have not been practicing it at home. I took some steps to remedy that over the weekend.

    The Goal

    I have a fairly meager home presence in Azure. Primarily, I use a free version of Azure Active Directory (now Entra ID) to allow for some single sign-on capabilities in external applications like Grafana, MinIO, and ArgoCD. The setup for this differs greatly among the applications, but common to all of these is the need to create applications in Azure AD.

    My goal is simple: automate provisioning of this Azure AD account so that I can manage these applications in code. My stretch goal was to get any secrets created as part of this process into my Hashicorp Vault instance.

    Getting Started

    The plan, in one word, is Terraform. Terraform has a number of providers, including both the azuread and vault providers. Additionally, since I have some experience in Terraform, I figured it would be a quick trip.

    I started by installing all the necessary tools (specifically, the Vault CLI, the Azure CLI, and the Terraform CLI) in my WSL instance of Ubuntu. Why there instead of Powershell? Most of the tutorials lean towards bash syntax, so it was a bit easier to roll through them without having to convert bash into Powershell.

    I used my ops-automation repository as the source for this, and started by creating a new folder structure to hold my projects. As I anticipated more Terraform projects to come up, I created a base terraform directory, and then an azuread directory under that.

    Picking a Backend

    Terraform relies on state storage, and it uses the term backend to describe this storage. By default, Terraform uses a local file backend. This is great for development, but knowing that I wanted to get things running in Azure DevOps immediately, I decided that I should configure a backend that I can use from my machine as well as from my pipelines.

    As I have been using MinIO pretty heavily for storage, it made the most sense to configure MinIO as the backend, using the S3 backend to do this. It was “fairly” straightforward, as soon as I turned off all the nonsense:

    terraform {
      backend "s3" {
        skip_requesting_account_id  = true
        skip_credentials_validation = true
        skip_metadata_api_check     = true
        skip_region_validation      = true
        use_path_style              = true
        bucket                      = "terraform"
        key                         = "azuread/terraform.tfstate"
        region                      = "us-east-1"
      }
    }

    There are some obvious things missing: I set environment variables for the values I would like to treat as secret, or at least not public:

    • MinIO Endpoint -> AWS_ENDPOINT_URL_S3 environment variable instead of endpoints.s3
    • Access Key -> AWS_ACCESS_KEY_ID environment variable instead of access_key
    • Secret Key -> AWS_SECRET_ACCESS_KEY environment variable instead of secret_key

    These settings allow me to use the same storage for both my local machine and the Azure Pipeline.
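    Locally, that means exporting the values before running terraform init; in the pipeline, they come from variables. A sketch with placeholder values:

    export AWS_ENDPOINT_URL_S3="https://minio.example.com"
    export AWS_ACCESS_KEY_ID="<access-key>"
    export AWS_SECRET_ACCESS_KEY="<secret-key>"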

    Configuring Azure AD

    Likewise, I needed to configure the azuread provider. I followed the steps in the documentation, choosing the environment variable route again. I configured a service principal in Azure and gave it the necessary access to manage my directory.

    Using environment variables allows me to set these from variables in Azure DevOps, meaning my secrets are stored in ADO (or Vault, or both…. more on that in another post).
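    For service principal authentication, the azuread provider reads a handful of ARM_ environment variables; again, a sketch with placeholder values:

    export ARM_TENANT_ID="<tenant-id>"
    export ARM_CLIENT_ID="<service-principal-client-id>"
    export ARM_CLIENT_SECRET="<service-principal-secret>"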

    Importing Existing Resources

    I have a few resources that already exist in my Azure AD instance, enough that I didn’t want to re-create them and then re-configure everything which uses them. Luckily, most Terraform providers allow for importing existing resources, and most of the resources I have support this feature.

    Importing is fairly simple: you create the simplest definition of a resource that you can, and then run a terraform import variant to import that resource into your project’s state. Importing an Azure AD Application, for example, looks like this:

    terraform import azuread_application.myapp /applications/<object-id>

    It is worth noting that the provider is looking for the object-id, not the client ID. The provider documentation has information as to which ID each resource uses for import.

    More importantly, Applications and Service Principals are different resources in Azure AD, even though they are pretty much one-to-one. To import a Service Principal, you run a similar command:

    terraform import azuread_service_principal.myprincipal <sp-id>

    But where is the service principal’s ID? I had to go to the Azure CLI to get that info:

    az ad sp list --display-name myappname

    From this JSON, I grabbed the id value and used that to import.
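    If you would rather not eyeball the JSON, the Azure CLI can pull the ID directly with a JMESPath query (the display name is a placeholder):

    az ad sp list --display-name myappname --query "[0].id" --output tsv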

    From here, I ran a terraform plan to see what was going to be changed. I took a look at the differences, and even added some properties to the terraform files to maintain consistency between the app and the existing state. I ended up with a solid project full of Terraform files that reflected my current state.

    Automating with Azure DevOps

    There are a few extensions available to add Terraform tasks to Azure DevOps. Sadly, most rely on “standard” configurations for authentication against the backends. Since I’m using an S3-compatible backend, but not S3 itself, I had difficulty getting those extensions to function correctly.

    As the Terraform CLI is installed on my build agent, though, I only needed to run my commands from a script. I created an ADO pipeline template (planning for expansion) and extended it to create this pipeline.

    All of the environment variables in the template are reflected in the variable groups defined in the extending pipeline. If a variable is not defined, it’s simply blank. That’s why you will see the AZDO_ environment variables in the template, but not in the variable groups for the Azure AD provisioning.
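    The heart of the template is just a script step that maps those variables into the environment and runs the Terraform CLI; a trimmed sketch (the variable and directory names are illustrative, not my exact setup):

    - script: |
        terraform init
        terraform plan -out=tfplan
        terraform apply tfplan
      displayName: 'Terraform plan and apply'
      workingDirectory: 'terraform/azuread'
      env:
        AWS_ENDPOINT_URL_S3: $(minio-endpoint)
        AWS_ACCESS_KEY_ID: $(minio-access-key)
        AWS_SECRET_ACCESS_KEY: $(minio-secret-key)
        ARM_TENANT_ID: $(azure-tenant-id)
        ARM_CLIENT_ID: $(azure-client-id)
        ARM_CLIENT_SECRET: $(azure-client-secret)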

    Stretch: Adding Hashicorp Vault

    Adding HC Vault support was somewhat trivial, but another exercise in authentication. I wanted to use AppRole authentication for this, so I followed the vault provider’s instructions and added additional configuration to my provider. Note that this setup requires additional variables that now need to be set whenever I do a plan or import.
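    The provider block ends up looking something like this (a sketch following the vault provider documentation; the variable names are mine):

    provider "vault" {
      address = var.vault_address

      auth_login {
        path = "auth/approle/login"

        parameters = {
          role_id   = var.vault_role_id
          secret_id = var.vault_secret_id
        }
      }
    }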

    Once that was done, I had access to read and write values in Vault. I started by storing my application passwords in a new key-value store. This allows me to have application passwords that rotate weekly, which is a nice security feature. Unfortunately, the rest of my infrastructure isn’t quite set up to handle such change. At least, not yet.

  • Environment Woes

    No, this is not a post on global warming. As it turns out, I have been provisioning my Azure DevOps build agents somewhat incorrectly, at least for certain toolsets.

    Sonar kicks it off

    It started with this error in my build pipeline:

    ERROR: 
    
    The version of Java (11.0.21) used to run this analysis is deprecated, and SonarCloud no longer supports it. Please upgrade to Java 17 or later.
    As a temporary measure, you can set the property 'sonar.scanner.force-deprecated-java-version' to 'true' to continue using Java 11.0.21
    This workaround will only be effective until January 28, 2024. After this date, all scans using the deprecated Java 11 will fail.

    I provision my build agents using the Azure DevOps/GitHub Actions runner images repository, so I know it has multiple versions of Java. I logged in to the agent, and the necessary environment variables (including JAVA_HOME_17_X64) are present. However, adding the jdkVersion input made no difference.

    - task: SonarCloudAnalyze@1
      inputs:
        jdkversion: 'JAVA_HOME_17_X64'

    I also tried using the JavaToolInstaller step to install Java 17 prior, and I got this error:

    ##[error]Java 17 is not preinstalled on this agent

    Now, as I said, I KNOW it’s installed. So, what’s going on?

    All about environment

    The build agent has the proper environment variables set. As it turns out, however, the build agent needs some special setup. Some research on my end led me to Microsoft’s page on the Azure DevOps Linux agents, specifically, the section on environment variables.

    I checked my .env file in my agent directory, and it had a scrawny 5-6 entries. As a test, I added JAVA_HOME_17_X64 with a proper path as an entry in that file and restarted the agent. Presto! No more errors, and Sonar Analysis ran fine.

    Scripting for future agents

    With this in mind, I updated the script that installs my ADO build agent to include steps to copy environment variables from the machine to the .env file for the agent, so that the agent knows what is on the machine. After a couple of tests (I forgot a necessary sudo), I have a working provisioning script.
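    Conceptually, the new steps are small: copy the machine-level variables into the agent’s .env file and restart the agent service. A sketch, assuming the agent lives in /opt/ado-agent:

    # copy tool locations from the machine environment into the agent's .env
    grep -E '^(JAVA_HOME|ANDROID|DOTNET)' /etc/environment | sudo tee -a /opt/ado-agent/.env
    # restart the agent service so it picks up the new variables
    cd /opt/ado-agent
    sudo ./svc.sh stop && sudo ./svc.sh start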

  • When a Build is not a Build

    In my experience, the best way to learn a new software system is to see how it is built. That path led me to some warning housekeeping, which led me to a weird issue with Azure DevOps.

    Housekeeping!

    In starting my new role, I have been trying to get the lay of the land. That includes cloning repositories, configuring authentication, and getting local builds running. As I was doing that, I noticed some build warnings.

    Now, I will be the first to admit, in the physical world, I am not a “neat freak.” But in my little digital world, I appreciate builds free of warnings, unit tests where appropriate, and a well-organized distribution system.

    So with that, I spent a little time fixing the build warnings or entering new technical debt tickets for them to be addressed at a later date. I also spent some time digging into the build process to start making these more apparent to the teams.

    Where are the Warnings?

    In digging through our builds, I noticed that none of them listed the warnings that I was seeing locally. I found that odd, so I started looking at my local build to see what was wrong.

    And I did, in fact, find some issues with my local environment. After some troubleshooting, I literally deleted everything (Nuget caches and all my bin/obj folders), and it did get rid of some of the warnings.

    However, there were still general warnings that the compiler was showing that I was not seeing in the build pipeline. I noticed that we were using the publish command of the DotNetCoreCLI@2 task to effectively build and publish the projects, creating build artifacts. Looking at the logs for those publish tasks, I saw the warnings I expected. But Azure DevOps was not picking them up in the pipeline.

    Some Google searches led me to this issue. As it turns out, it is a known problem that warnings output using the publish command do not get picked up by Azure DevOps. So what do we do?

    Build it!

    While publish does not output the warnings correctly, the build command for the same task does. So, instead of one step to build and publish, we have two separate steps:

    - task: DotNetCoreCLI@2
      displayName: 'Build Project'
      inputs:
        command: 'build'
        publishWebProjects: false
        projects: '${{ parameters.project }}'
        ${{ if eq(parameters.forceRestore, true) }}:
          arguments: '-c ${{ parameters.configuration }}'
        ${{ else }}:
          arguments: '-c ${{ parameters.configuration }} --no-restore' 
    
    - task: DotNetCoreCLI@2
      displayName: 'Publish Project'
      inputs:
        command: 'publish'
        publishWebProjects: false
        nobuild: true
        projects: '${{ parameters.project }}'
        arguments: '-c ${{ parameters.configuration }} --output ${{ parameters.outputDir }} --no-build --no-restore' 
        zipAfterPublish: '${{ parameters.zipAfterPublish }}'
        modifyOutputPath: '${{ parameters.modifyOutputPath }}'

    Notice in the build task, I have a parameter to optionally pass --no-restore to the build, and in the publish I am passing in --no-build and --no-restore. Since these steps are running on the same build agent, we can do a build and then a publish with a --no-build parameter to prevent a double build scenario. Our pipeline also performs package restores separately, so most of the time, a restore is not necessary on the build. For the times that it is required, we can pass in the forceRestore parameter to configure that project.

    With this change, all the warnings that were not showing in the pipeline are now showing. Now it is time for me to get at fixing those warnings!

  • Pack, pack, pack, they call him the Packer….

    Through sheer happenstance, I came across a posting for The Jaggerz playing near me and was taken back to my first time hearing “The Rapper.” I happened to go to school with one of the members’ kids, which made it all the more fun to reminisce.

    But I digress. I spent time a while back getting Packer running at home to take care of some of my machine provisioning. At work, I have been looking for an automated mechanism to keep some of our build agents up to date, so I revisited this and came up with a plan involving Packer and Terraform.

    The Problem

    My current problem centers around the need to update our machine images weekly while still using Terraform to manage our infrastructure. In the case of Azure DevOps, we can provision VM Scale Sets and assign those Scale Sets to an Azure DevOps agent pool. But when I want to update that image, I can do it in two different ways:

    1. Using Azure CLI, I can update the Scale Set directly.
    2. I can modify the Terraform repository to update the image and then re-run Terraform.

    Now, #1 sounds easy, right? Run a command and I’m done. But it defeats the purpose of Terraform, which is to maintain infrastructure as code. So, I started down path #2.

    Packer Revisit

    I previously used Packer to provision Hyper-V VMs, but the azure-arm builder is pretty similar. I was able to configure a simple Windows-based VM and get the only application I needed installed with a Powershell script.

    One app? On a build agent? Yes, this is a very particular agent, and I didn’t want to install it everywhere, so I created a single agent image with the necessary software.

    Mind you, I have been using the runner-images Packer projects to build my Ubuntu agent at home, and we use them to build both Windows and Ubuntu images at work, so, by comparison, my project is wee tiny. But it gives me a good platform to test. So I put a small repository together with a basic template and a Powershell script to install my application, and it was time to build.

    Creating the Build Pipeline

    My build process should be, for all intents and purposes, one step that runs the packer build command, which will create the image in Azure. I found the PackerBuild@1 task and thought my job was done. It would seem that the Azure DevOps task hasn’t kept up with the times; either that, or Packer’s CLI needs help.

    I wanted to use the PackerBuild@1 task to take advantage of the service connection. I figured, if I could run the task with a service connection, I wouldn’t have to store service principal credentials in a variable library. As it turns out… well, I would have to do that anyway.

    When I tried to run the task, I got an error that “packer fix only supports json.” My template is in HCL format, and everything I have seen suggests that Packer is moving toward HCL. Not to be beaten, I looked at the code for the task to see if I could skip the fix step.

    Not only could I not skip that step, but when I dug into the task, I noticed that I wouldn’t be able to use the service connection parameter with a custom template. So with that, my dreams of using a fancy task went out the door.

    Plan B? Use Packer’s ability to grab environment variables as default values and set the environment variables in a Powershell script before I run the Packer build. It is not super pretty, but it works.

    - pwsh: | 
        $env:ARM_CLIENT_ID = "$(azure-client-id)"
        $env:ARM_CLIENT_SECRET = "$(azure-client-secret)"
        $env:ARM_SUBSCRIPTION_ID = "$(azure-subscription-id)"
        $env:ARM_TENANT_ID = "$(azure-tenant-id)"
        Invoke-Expression "& packer build --var-file values.pkrvars.hcl -var vm_name=vm-image-$(Build.BuildNumber) windows2022.pkr.hcl"
      displayName: Build Packer

    On To Terraform!

    The next step was terraforming the VM Scale Set. If you are familiar with Terraform, the VM Scale Set resource in the AzureRM provider is pretty easy to use. I used the Windows VM Scale Set, as my agents will be Windows based. The only “trick” is finding the image you created, but, thankfully, that can be done by name using a data block.

    data "azurerm_image" "image" {
      name                = var.image_name
      resource_group_name = data.azurerm_resource_group.vmss_group.name
    }

    From there, just set source_image_id to data.azurerm_image.image.id, and you’re good. Why look this up by name? It makes automation very easy.
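    Wired together, the relevant slice of the scale set resource looks like this (heavily trimmed; the required sizing, disk, and networking arguments are omitted):

    resource "azurerm_windows_virtual_machine_scale_set" "agents" {
      name                = "vmss-build-agents"
      resource_group_name = data.azurerm_resource_group.vmss_group.name
      location            = data.azurerm_resource_group.vmss_group.location

      # point the scale set at the image we looked up by name
      source_image_id = data.azurerm_image.image.id

      # sku, instances, os_disk, network_interface, etc. omitted
    }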

    Gluing the two together

    So I have a pipeline that builds an image, and I have another pipeline that executes the Terraform plan/apply steps. The latter is triggered on a commit to main in the Terraform repository, so how can I trigger a new build?

    All I really need to do is “reach in” to the Terraform repository, update the variable file with the new image name, and commit it. This can be automated, and I spent a lot of time doing just that as part of implementing our GitOps workflow. In fact, as I type this, I realize that I probably owe a post or two on how exactly we have done that. But, using some scripted git commands, it is pretty easy.

    So, my Packer build pipeline will checkout the Terraform repository, change the image name in the variable file, and commit. This is where the image name is important: Packer doesn’t spit out the Azure image ID (at least, not that I saw), so having a known name makes it easy for me to just tell Terraform the new image name, and it uses that to look up the ID.
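    The scripted git commands are roughly this, run from a pipeline script step where Azure DevOps expands $(Build.BuildNumber) before the script executes (the repository URL and file name are placeholders, and authentication is omitted):

    git clone https://dev.azure.com/myorg/myproject/_git/terraform-infra
    cd terraform-infra
    # point Terraform at the image Packer just built
    sed -i "s/^image_name.*/image_name = \"vm-image-$(Build.BuildNumber)\"/" vmss.auto.tfvars
    git add vmss.auto.tfvars
    git commit -m "Update agent image to vm-image-$(Build.BuildNumber)"
    git push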

    What’s next?

    This was admittedly pretty easy, but only because I have been using Packer and Terraform for some time now. The learning curve is steep, but as I look across our portfolio, I can see areas where these types of practices can help us by allowing us to build fresh machine images on a regular cadence, and stop treating our servers as pets. I hope to document some of this for our internal teams and start driving them down a path of better deployment.

  • Packages, pipelines, and environment variables…. oh my!

    I was clearly tempting the fates of package management when I called out NPM package management. Nuget was not to be outdone, and threw us a curveball in one of our Azure DevOps builds that is too good not to share.

    The Problem

    Our Azure Pipelines build was erroring out; however, it was failing in different steps, and in steps that seemingly had no connection to the changes that were made.

    Error 1 – Unable to install GitVersion.Tool

    We use GitVersion to version our builds and packages. We utilize the GitTools Azure DevOps extension to provide us with pipeline tasks to install and execute GitVersion. Most of our builds use step templates to centralize common steps, so these steps are executed every time, usually without error.

    However, in this particular branch, we were getting the following error:

    --------------------------
    Installing GitVersion.Tool version 5.x
    --------------------------
    Command: dotnet tool install GitVersion.Tool --tool-path /agent/_work/_temp --version 5.7.0
    /usr/bin/dotnet tool install GitVersion.Tool --tool-path /agent/_work/_temp --version 5.7.0
    The tool package could not be restored.
    /usr/share/dotnet/sdk/5.0.402/NuGet.targets(131,5): error : 'feature' is not a valid version string. (Parameter 'value') [/tmp/t31raumn.tj5/restore.csproj]

    Now, I would have concentrated more on the “‘feature’ is not a valid version string” error initially; however, I was distracted because, sometimes, we got past this error and into another one.

    Error 2 – Dotnet restore failed

    So, sometimes (not always), the pipeline would get past the installation of GitVersion and make it about three steps forward, into the dotnet restore step. Buried in that restore log was a similar error to the first one:

    error : 'feature' is not a valid version string. (Parameter 'value')

    Same error, different steps?

    So, we were seeing the same error, but in two distinct places: one in a step which installs the tool, and presumably has nothing to do with the code in the repository, and the other which is squarely around the dotnet restore command.

    A quick Google search yielded an issue in the dotnet core Github repository. At the tail end of that thread is this little tidbit from user fabioduque:

    The same thing happened to me, and it was due to me setting up the Environment Variable “VERSION”.
    Solved with with:
    set VERSION=

    Given that, I put a quick step in to print out the environment variables and determined the problem.
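    That quick step is nothing more than a one-liner (Powershell here, but a plain env | sort works just as well on a Linux agent):

    - pwsh: Get-ChildItem Env: | Sort-Object Name
      displayName: 'Dump environment variables'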

    Better the devil you know…

    As the saying goes: Better the devil you know than the devil you don’t. In this case, the devil we didn’t know about was an environment variable named VERSIONPREFIX. Now, we were not expressly setting that variable in a script, but one of our pipeline variables was named versionPrefix. Azure DevOps makes pipeline variables available as environment variables, and it standardizes them into all caps, so we ended up with VERSIONPREFIX. Nuget, in versions after 3.4, provides for applying settings at runtime via environment variables. And since VERSIONPREFIX is applicable in dotnet / nuget, our variable was causing these random errors. I renamed the variable and voila, no more random errors.

    The Short, Short Version

    Be careful when naming variables in Azure Pipelines. They are made available as-named via environment variables. If some of the steps or commands that you use accept those environment variables as settings, you may be inadvertently affecting your builds.

  • Packer.io : Making excess too easy

    As I was chatting with a former colleague the other day, I realized that I have been doing some pretty diverse work as part of my home lab. A quick scan of my posts in this category reveal a host of topics ranging from home automation to Python monitoring to Kubernetes administration. One of my colleague’s questions was something to the effect of “How do you have time to do all of this?”

    As I thought about it for a minute, I realized that all of my Kubernetes research would not have been possible if I had not first taken the opportunity to automate the process of provisioning Hyper-V VMs. In my Kubernetes experimentation, I have easily provisioned 35-40 Ubuntu VMs, and then promptly broken two-thirds of them through experimentation. Thinking about taking the time to install Ubuntu and provision it before I can start work, well, that would have been a non-starter.

    It started with a build…

    In my corporate career, we have been moving more towards Azure DevOps and away from TeamCity. To date, I am impressed with Azure DevOps. Pipelines-as-code appeals to my inner geek, and not having to maintain a server and build agents has its perks. I had visions of migrating from TeamCity to Azure DevOps, hoping I could take advantage of Microsoft’s generosity with small teams. Alas, Azure DevOps is free for small teams ONLY if you self-host your build agents, which meant a small amount of machine maintenance. I wanted to be able to self-host agents with the same software that Microsoft uses for their Github Actions/Azure DevOps agents. After reading through the Github Virtual Environments repository, I determined it was time to learn Packer.

    The build agents for Github/Azure DevOps are provisioned using Packer. My initial hope was that I would just be able to clone that repository, run Packer, and voila! It’s never that easy. The Packer projects in that repository are designed to provision VM images that run in Azure, not on Hyper-V. Provisioning Hyper-V machines is possible through Packer, but requires different template files and some tweaking of the provisioning scripts. Without getting too much into the weeds, Packer uses different builders for Azure and Hyper-V. So I had to grab all the provisioning scripts I wanted from the template files in the Virtual Environments repository, but configure a builder for Hyper-V. Thankfully, Nick Charlton provided a great starting point for automating Ubuntu 20.04 installs with Packer. From there, I was off to the races.

    Enabling my excess

    Through probably 40 hours of trial and error, I got to the point where I was building my own build agents and hooking them up to my Azure DevOps account. It should be noted that fully provisioning a build agent takes six to eight hours, so most of that 40 hours was “fire and forget.” With that success, I started to think: “Could I provision simpler Ubuntu servers and use those to experiment with Kubernetes?”

    The answer, in short, is “Of course!” I went about creating some Powershell scripts and Packer templates so that I could provision various levels of Ubuntu servers. I have shared those scripts, along with my build agent provisioning scripts, in my provisioning-projects repository on Github. With those scripts, I was off to the races, provisioning new machines at will. It is remarkable the risks you will take in a lab environment, knowing that you are only 20-30 minutes away from a clean machine should you mess something up.

    A note on IP management

    If you dig into the repository above, you may notice some scripts and code around provisioning a MAC address from a “Unifi IP Manager.” I created a small API wrapper that utilizes the Unifi Controller APIs to create clients with fixed IP addresses. The API generates a random, but valid, MAC Address for Hyper-V, then uses the Unifi Controller API to assign a fixed IP.

    That project isn’t quite ready for public consumption, but if you are interested, drop a comment on this post.