Tag: Azure

  • Migrating to Github Packages

    I have been running a free version of Proget locally for years now. It served as a home for Nuget packages, Docker images, and Helm charts for my home lab projects. But, in an effort to slim down the apps that are running in my home lab, I took a look at some alternatives.

    Where can I put my stuff?

    When I logged in to my Proget instance and looked around, it occurred to me that I only had 3 types of feeds: Nuget packages, Docker images, and Helm charts. So to move off of Proget, I needed to find replacements for all of these.

    Helm Charts

    Back in the heady days of using Octopus Deploy for my home lab, I used published Helm charts to deploy my applications. However, since I switched to a Gitops workflow with ArgoCD, I haven’t published a Helm chart in a few years. I deleted that feed in Proget. One down, two to go.

    Nuget Packages

    I have made a few different attempts to create Nuget packages for public consumption. A number of years ago, I tried publishing a data layer that was designed to be used across platforms (think APIs and mobile applications), but even I stopped using that in favor of Entity Framework Core and good old-fashioned data models. More recently, I created some “platform” libraries to encapsulate some of the common code that I use in my APIs and other projects. They serve as utility libraries as well as a reference architecture for my professional work.

    There are a number of options for hosting Nuget feeds, with varying costs depending on structure. I considered the following options:

    • Azure DevOps Artifacts
    • Github Packages
    • Nuget.org

    I use Azure DevOps for my builds, and briefly considered using the artifacts feeds. However, none of my libraries are private. Everything I am writing is a public repository in Github. With that in mind, it seemed that the free offerings from Github and Nuget were more appropriate.

    I published the data layer packages to Nuget previously, so I have some experience with that. However, with these platform libraries, while they are public, I do not expect them to be heavily used. For that reason, I decided that publishing the packages to Github Packages made a little more sense. If these platform libraries get to the point where they are heavily used, I can always publish stable packages to Nuget.org.

    Container Images

    Container images take up the bulk of my Proget storage. Now, I only have 5 container images, but I never clean anything up, so those 5 images are taking up about 7 GB of data. When I was investigating alternatives, I wanted to make sure I had some way to clean up old pre-release tags and manifests to keep my usage down.

    I considered two alternatives:

    • Azure Container Registry
    • Github Container Registry

    An Azure Container Registry instance would cost me about $5 a month and provide me with 10 GB of storage. Github Container Registry provides 500 MB of storage and 1 GB of data transfer per month, but those limits only apply to private packages.

    As with my Nuget packages, nothing that I have is private, and Github Packages is free for public packages. Additionally, I found a Github task that will clean up the Github images. As cleanup was one of my “new” requirements, I decided to take a run at Github Packages.

    Making the switch

    With my current setup, the switch was fairly simple. Nuget publishing is controlled by my Azure DevOps service connections, so I created a new service connection for my Github feed. The biggest change was some housekeeping to add appropriate information to the Nuget package itself, including adding the RepositoryUrl property to the .csproj files. This tells Github which repository to associate the package with.
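
    As an illustration, the package metadata ended up looking something like this in the .csproj (the URL below is a placeholder, not one of my actual repositories):

    <PropertyGroup>
      <!-- Tells Github Packages which repository to file the package under -->
      <RepositoryUrl>https://github.com/your-account/your-platform-library</RepositoryUrl>
      <RepositoryType>git</RepositoryType>
    </PropertyGroup>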

    The container registry switch wasn’t much different; again, most of the work was housekeeping to add the appropriate labels to the images. From there, a few template changes and the images were in the Github container registry.
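
    Specifically, Github connects an image to a repository through the OCI source label; in a Dockerfile that looks roughly like this (the URL is a placeholder):

    # Github uses this label to associate the image with a repository
    LABEL org.opencontainers.image.source=https://github.com/your-account/your-repo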

    Overall, the changes were pretty minimal. I have a few projects left to convert, and once that is done, I can decommission my Proget instance.

    Next on the chopping block…

    I am in the beginning stages of evaluating Azure Key Vault as a replacement for my Hashicorp Vault instance. Although it comes at a cost, for my usage it is most likely under $3 a month, and getting away from self-hosted secrets management would make me a whole lot happier.

  • Isolating your Azure Functions

    I spent a good bit of time over the last two weeks converting our Azure functions from the in-process to the isolated worker process model. Overall the transition was fairly simple, but there were a few bumps in the proverbial road worth noting.

    Migration Process

    Microsoft Learn has a very detailed How To Guide for this migration. The guide includes steps for updating the project file and references, as well as additional packages that are required based on various trigger types.

    Since I had a number of functions to convert, I followed the guide for the first one, and that worked swimmingly. But then I got lazy and started the “copy-paste” conversion, and in that laziness I missed a particular section of the project file:

    <ItemGroup>
      <Using Include="System.Threading.ExecutionContext" Alias="ExecutionContext"/>
    </ItemGroup>

    Unfortunately, forgetting this will not break your local development environment. However, when you publish to a Function App in Azure, the function will not execute correctly.

    Fixing Dependency Injection

    When using the in-process model, there are some “freebies” that get added to the dependency injection system, as if by magic. ILogger, in particular, could be injected automatically into the function as a method parameter. In the isolated model, however, you must get ILogger either from the FunctionContext or through dependency injection into the class.

    As part of our conversion, we removed the function parameters for ILogger and replaced them with service instances retrieved through dependency injection at the class level.
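
    As a rough sketch (the class, function, and trigger below are made up for illustration, not our actual code), the class-level injection looks like this:

    using System.Net;
    using Microsoft.Azure.Functions.Worker;
    using Microsoft.Azure.Functions.Worker.Http;
    using Microsoft.Extensions.Logging;

    public class StatusFunctions
    {
        private readonly ILogger<StatusFunctions> _logger;

        // ILogger<T> is resolved from the container instead of a function method parameter
        public StatusFunctions(ILogger<StatusFunctions> logger)
        {
            _logger = logger;
        }

        [Function("GetStatus")]
        public HttpResponseData Run(
            [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req)
        {
            _logger.LogInformation("Status requested.");
            return req.CreateResponse(HttpStatusCode.OK);
        }
    }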

    What we did not realize until we got our functions into the test environments was that IHttpContextAccessor was not available in the isolated model. Apparently, that particular interface is registered automatically as part of the in-process model, but not as part of the isolated model, so we had to add IHttpContextAccessor to our service collection in Program.cs.
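
    In Program.cs that amounted to something along these lines (a sketch assuming the ASP.NET Core integration package, not our exact startup code):

    using Microsoft.Azure.Functions.Worker;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    var host = new HostBuilder()
        .ConfigureFunctionsWebApplication()
        .ConfigureServices(services =>
        {
            // Not registered automatically in the isolated model, so add it explicitly
            services.AddHttpContextAccessor();
        })
        .Build();

    host.Run();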

    It is never easy

    Upgrades or migrations are never just “change this and go.” As much as we try to make it easy, there always seems to be a little change here or there that ends up being a fly in the ointment. In our case, we simply assumed that IHttpContextAccessor was there because in-process put it there, and the code that needed it was a few layers deep in the dependency tree. The only way to find it was to make the change and see what breaks. And that is what keeps quality engineers up at night.

  • Terraform Azure AD

    Over the last week or so, I realized that while I bang the drum of infrastructure as code very loudly, I have not been practicing it at home. I took some steps to reconcile that over the weekend.

    The Goal

    I have a fairly meager home presence in Azure. Primarily, I use a free version of Azure Active Directory (now Entra ID) to allow for some single sign-on capabilities in external applications like Grafana, MinIO, and ArgoCD. The setup for this differs greatly among the applications, but common to all of these is the need to create applications in Azure AD.

    My goal is simple: automate provisioning of this Azure AD account so that I can manage these applications in code. My stretch goal was to get any secrets created as part of this process into my Hashicorp Vault instance.

    Getting Started

    The plan, in one word, is Terraform. Terraform has a number of providers, including both the azuread and vault providers. Additionally, since I have some experience in Terraform, I figured it would be a quick trip.

    I started by installing all the necessary tools (specifically, the Vault CLI, the Azure CLI, and the Terraform CLI) in my WSL instance of Ubuntu. Why there instead of Powershell? Most of the tutorials lean towards bash syntax, so it was a bit easier to follow along without having to convert everything into Powershell.

    I used my ops-automation repository as the source for this, and started by creating a new folder structure to hold my projects. As I anticipated more Terraform projects to come up, I created a base terraform directory, and then an azuread directory under that.

    Picking a Backend

    Terraform relies on state storage, and it uses the term backend to describe that storage. By default, Terraform uses a local file backend. This is great for development, but knowing that I wanted to get things running in Azure DevOps immediately, I decided that I should configure a backend I could use from my machine as well as from my pipelines.

    As I have been using MinIO pretty heavily for storage, it made the most sense to configure MinIO as the backend, using the S3 backend to do this. It was “fairly” straightforward, as soon as I turned off all the nonsense:

    terraform {
      backend "s3" {
        skip_requesting_account_id  = true
        skip_credentials_validation = true
        skip_metadata_api_check     = true
        skip_region_validation      = true
        use_path_style              = true
        bucket                      = "terraform"
        key                         = "azuread/terraform.tfstate"
        region                      = "us-east-1"
      }
    }

    There are some obvious things missing: for values I would like to treat as secret, or at least not public, I am setting environment variables instead.

    • MinIO Endpoint -> AWS_ENDPOINT_URL_S3 environment variable instead of endpoints.s3
    • Access Key -> AWS_ACCESS_KEY_ID environment variable instead of access_key
    • Secret Key -> AWS_SECRET_ACCESS_KEY environment variable instead of secret_key

    These settings allow me to use the same storage for both my local machine and the Azure Pipeline.
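
    Locally, that means exporting something along these lines before running terraform init (the values are placeholders; the real ones stay out of the repository):

    # Placeholder values; the real ones live outside the repository
    export AWS_ENDPOINT_URL_S3="https://minio.example.com"
    export AWS_ACCESS_KEY_ID="terraform-access-key"
    export AWS_SECRET_ACCESS_KEY="terraform-secret-key"
    terraform init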

    Configuring Azure AD

    Likewise, I needed to configure the azuread provider. I followed the steps in the documentation, choosing the environment variable route again. I configured a service principal in Azure and gave it the necessary access to manage my directory.

    Using environment variables allows me to set these from variables in Azure DevOps, meaning my secrets are stored in ADO (or Vault, or both… more on that in another post).
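
    For the service principal route, the azuread provider reads the documented ARM_* environment variables, so the setup looks something like this (placeholder values again):

    # Service principal credentials for the azuread provider; placeholder values
    export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
    export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
    export ARM_CLIENT_SECRET="not-a-real-secret"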

    Importing Existing Resources

    I have a few resources that already exist in my Azure AD instance, enough that I didn’t want to re-create them and then re-configure everything which uses them. Luckily, most Terraform providers allow for importing existing resources, and the resources I already have support this feature.

    Importing is fairly simple: you create the simplest definition of a resource that you can, and then run a terraform import variant to import that resource into your project’s state. Importing an Azure AD Application, for example, looks like this:

    terraform import azuread_application.myapp /applications/<object-id>

    It is worth noting that the provider is looking for the object-id, not the client ID. The provider documentation has information as to which ID each resource uses for import.
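
    For what it’s worth, the “simplest definition” really is minimal; for an application it can be little more than this (the display name is a placeholder), with the remaining properties filled in after the import:

    resource "azuread_application" "myapp" {
      display_name = "myapp" # placeholder; the rest gets added after importing
    }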

    More importantly, Applications and Service Principals are different resources in Azure AD, even though they are pretty much one-to-one. To import a Service Principal, you run a similar command:

    terraform import azuread_service_principal.myprincipal <sp-id>

    But where is the service principal’s ID? I had to go to the Azure CLI to get that info:

    az ad sp list --display-name myappname

    From this JSON, I grabbed the id value and used that to import.
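
    If you would rather skip eyeballing the JSON, a --query filter pulls the value directly (the display name is a placeholder):

    az ad sp list --display-name myappname --query "[].id" -o tsv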

    From here, I ran a terraform plan to see what was going to be changed. I took a look at the differences, and even added some properties to the terraform files to maintain consistency between the app and the existing state. I ended up with a solid project full of Terraform files that reflected my current state.

    Automating with Azure DevOps

    There are a few extensions available to add Terraform tasks to Azure DevOps. Sadly, most rely on “standard” configurations for authentication against the backends. Since I’m using an S3-compatible backend, but not S3 itself, I had difficulty getting those extensions to function correctly.

    As the Terraform CLI is installed on my build agent, though, I only needed to run my commands from a script. I created an ADO template pipeline (planning for expansion) and extended it to create the pipeline.

    All of the environment variables in the template are reflected in the variable groups defined in the extending pipeline. If a variable is not defined, it is simply blank. That’s why you will see the AZDO_ environment variables in the template, but not in the variable groups for the Azure AD provisioning.
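
    A stripped-down sketch of the extending pipeline looks something like this (the group, template, and directory names are placeholders, not my actual files):

    trigger:
      branches:
        include:
          - main

    variables:
      - group: terraform-backend   # AWS_* variables for the MinIO/S3 backend
      - group: terraform-azuread   # ARM_* variables for the azuread provider

    extends:
      template: templates/terraform.yml
      parameters:
        workingDirectory: terraform/azuread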

    Stretch: Adding Hashicorp Vault

    Adding HC Vault support was fairly trivial, but it was another exercise in authentication. I wanted to use AppRole authentication for this, so I followed the vault provider’s instructions and added additional configuration to my provider. Note that this setup requires additional variables that now need to be set whenever I do a plan or import.
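
    The provider block ends up looking roughly like this (VAULT_ADDR comes from the environment, the variable names are mine, and the AppRole credentials are the additional variables mentioned above):

    provider "vault" {
      auth_login {
        path = "auth/approle/login"

        parameters = {
          role_id   = var.vault_role_id
          secret_id = var.vault_secret_id
        }
      }
    }

    variable "vault_role_id" {
      sensitive = true
    }

    variable "vault_secret_id" {
      sensitive = true
    }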

    Once that was done, I had access to read and write values in Vault. I started by storing my application passwords in a new key/value store in Vault. This allows me to have application passwords that rotate weekly, which is a nice security feature. Unfortunately, the rest of my infrastructure isn’t quite set up to handle such change. At least, not yet.

  • Tech Tip – Not all certificates are the same

    I have been trying to build a model in Azure to start modernizing one of our applications. Part of that is configuring an application gateway correctly and getting end-to-end SSL configured. As it turns out, not all certificates are good certificates, at least to Azure.

    Uploading the Cert

    I have a wildcard certificate for a test domain, so I exported it into a full chain PFX that I could upload into Azure where I needed. The model I’m building is “hand built” for now, so I am not terribly concerned about uploading the certificate in a few places just to get things moving.

    I was able to upload the certificate into Key Vault, as well as to the Azure Application Gateway I created. But, when I went to use the certificate for a custom domain in an Azure App Service, well, it was fighting me.

    Legacy Only??

    As it turns out, App Service has some very specific requirements for its certificates. My method of exporting was “too new” to work. Thankfully, I came across a Stack Overflow question that solved the issue.

    For everyone’s reference, I had to import the certificate into Windows, and then export another PFX with the proper encryption.

    From the Stack Overflow post above, in Powershell, import the existing PFX:

    Import-PfxCertificate -FilePath "pfx file path" -CertStoreLocation Cert:\LocalMachine\My -Password (ConvertTo-SecureString -String 'MyPassword' -AsPlainText -Force) -Exportable

    Grab the thumbprint (you’ll need it), and then export the certificate in Powershell:

    Export-PfxCertificate -Cert Microsoft.PowerShell.Security\Certificate::LocalMachine\My\B56CE9B122FB04E29A974A4D0DB3F6EAC2D150C0 -FilePath 'newPfxName.pfx' -Password (ConvertTo-SecureString -String 'MyPassword' -AsPlainText -Force)

    The newly generated PFX can be used in Azure App Services!