Category: Software

  • Publishing Code Coverage in both Azure DevOps and SonarQube

    I spent more time than I care to admit trying to get the proper configuration for reporting code coverage to both the Azure DevOps pipeline and SonarQube. The solution was, well, fairly simple, but it is worth writing down.

    Testing, Testing…

    After fumbling around with some of the linting and publishing to SonarQube’s Community Edition, I succeeded in creating build pipelines which, when building from the main branch, run SonarQube analysis and publish the results to the project.

    I modified my build template as follows:

      - ${{ if eq(parameters.execute_sonar, true) }}:
        # Prepare Analysis Configuration task
        - task: SonarQubePrepare@5
          inputs:
            SonarQube: ${{ parameters.sonar_endpoint_name }}
            scannerMode: 'MSBuild'
            projectKey: ${{ parameters.sonar_project_key }}
    
      - task: DotNetCoreCLI@2
        displayName: 'DotNet restore packages (dotnet restore)'
        inputs:
          command: 'restore'
          feedsToUse: config
          nugetConfigPath: "$(Build.SourcesDirectory)/nuget.config"
          projects: "**/*.csproj"
          externalFeedCredentials: 'feedName'
    
      - task: DotNetCoreCLI@2
        displayName: Build (dotnet build)
        inputs:
          command: build
          projects: ${{ parameters.publishProject }}
          arguments: '--no-restore --configuration ${{ parameters.BUILD_CONFIGURATION }} /p:InformationalVersion=$(fullSemVer) /p:AssemblyVersion=$(AssemblySemVer) /p:AssemblyFileVersion=$(AssemblySemFileVer)'
          
    
    ## Test steps are here, details below
    
      - ${{ if eq(parameters.execute_sonar, true) }}:
        - powershell: |
            $params = "$env:SONARQUBE_SCANNER_PARAMS" -replace '"sonar.branch.name":"[\w,/,-]*"\,?'
            Write-Host "##vso[task.setvariable variable=SONARQUBE_SCANNER_PARAMS]$params"
    
        # Run Code Analysis task
        - task: SonarQubeAnalyze@5
    
        # Publish Quality Gate Result task
        - task: SonarQubePublish@5
          inputs:
            pollingTimeoutSec: '300'

    In the code above, the execute_sonar parameter lets me run the SonarQube steps only when building the main branch, which keeps the Community Edition happy while retaining the rest of my pipeline definition on feature branches.
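
    For reference, the calling pipeline feeds that parameter in from the branch name. A minimal sketch of what that invocation can look like (the template path, job name, and service connection values here are illustrative placeholders, not my exact pipeline):

    jobs:
      - job: build_and_analyze
        steps:
          - template: templates/build-steps.yml
            parameters:
              execute_sonar: ${{ eq(variables['Build.SourceBranch'], 'refs/heads/main') }}
              sonar_endpoint_name: 'my-sonarqube-service-connection'
              sonar_project_key: 'my-project-key'
              # ...other template parameters (build configuration, project paths, test flags) omitted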

    This configuration worked, and my project’s analysis showed up in SonarQube.

    Testing, Testing, 1, 2, 3…

    I added some trivial unit tests to verify that I could get code coverage published. I have had experience with Coverlet in the past, and it can generate coverage reports in various formats, so I added it to the project using the simple collector.
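
    If your test project does not already reference it (newer dotnet test templates add it by default), the simple collector is just a NuGet package reference in the test csproj; the floating version below is for illustration:

    <ItemGroup>
      <!-- Coverlet's "XPlat Code Coverage" data collector; pin a specific version in practice. -->
      <PackageReference Include="coverlet.collector" Version="3.*" />
    </ItemGroup>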

    Within my build pipeline, I added the following (if you are referencing the snippet above, this is in place of the comment):

    - ${{ if eq(parameters.execute_tests, true) }}:
        - task: DotNetCoreCLI@2
          displayName: Test (dotnet test)
          inputs:
            command: test
            projects: '**/*tests/*.csproj'
            arguments: '--no-restore --configuration ${{ parameters.BUILD_CONFIGURATION }} --collect:"XPlat Code Coverage"'
        

    Out of the box, this command worked well: tests were executed, and both Tests and Coverage information were published to Azure DevOps. Apparently, the DotNetCoreCLI@2 task defaults publishTestResults to true.

    While the tests ran, the coverage was not published to SonarQube. I had hoped that the SonarQube extension for Azure DevOps would pick this up, but, alas, this was not the case.

    Coverlet, DevOps, and SonarQube… oh my.

    As it turns out, you have to tell SonarQube explicitly where to find the coverage report. And, while the SonarQube documentation is pretty good at describing how to report coverage to SonarQube from the CLI, the Azure DevOps integration documentation does not spell out how to accomplish this, at least not outright. Also, while Azure DevOps recognizes Cobertura coverage reports, SonarQube prefers OpenCover reports.

    I had a few tasks ahead of me:

    1. Generate my coverage reports in two formats.
    2. Get a specific location for the coverage reports so that I could pass that information to SonarQube.
    3. Tell the SonarQube analyzer where to find the OpenCover reports.

    Generating Multiple Coverage Reports

    As mentioned, I am using Coverlet to collect code coverage. Coverlet supports RunSettings files, which help standardize various settings within the test project and, in this case, let me generate coverage reports in two different formats. I created a coverlet.runsettings file in my test project’s directory and added this content:

    <?xml version="1.0" encoding="utf-8" ?>
    <RunSettings>
      <DataCollectionRunSettings>
        <DataCollectors>
          <DataCollector friendlyName="XPlat code coverage">
            <Configuration>
              <Format>opencover,cobertura</Format>          
            </Configuration>
          </DataCollector>
        </DataCollectors>
      </DataCollectionRunSettings>
    </RunSettings>

    To make it easy on my build and test pipeline, I added the following property (inside a PropertyGroup) to my test csproj file:

    <RunSettingsFilePath>$(MSBuildProjectDirectory)\coverlet.runsettings</RunSettingsFilePath>

    There are a number of Coverlet settings that can be set in the runsettings file; the full list can be found in the VSTest Integration documentation.

    Getting Test Result Files

    As mentioned above, the DotNetCoreCLI@2 task defaults publishTestResults to true. This setting adds arguments to the test command to enable trx logging and set a results directory. However, it also means I am not able to specify a results directory on my own.

    Even specifying the directory myself does not fully solve the problem: running the tests with coverage and trx logging generates a trx file named from the username, computer name, and timestamp, plus two sets of coverage reports. One set is stored under a folder with that same username/computer name/timestamp name, and the other under a random GUID.

    To ensure I only pulled in one set of coverage results, and that Azure DevOps did not complain, I updated my test execution to look like this:

      - ${{ if eq(parameters.execute_tests, true) }}:
        - task: DotNetCoreCLI@2
          displayName: Test (dotnet test)
          inputs:
            command: test
            projects: '**/*tests/*.csproj'
            publishTestResults: false
            arguments: '--no-restore --configuration ${{ parameters.BUILD_CONFIGURATION }} --collect:"XPlat Code Coverage" --logger trx --results-directory "$(Agent.TempDirectory)"'
        
        - pwsh: |
            Push-Location $(Agent.TempDirectory);
            mkdir "ResultFiles";
            $resultFiles = Get-ChildItem -Directory -Filter ResultFiles;

            # The coverage reports live under a folder named after the trx file
            $trxFile = Get-ChildItem *.trx;
            $trxFileName = [System.IO.Path]::GetFileNameWithoutExtension($trxFile);

            Push-Location $trxFileName;
            $coverageFiles = Get-ChildItem -Recurse -Filter coverage.*.xml;
            foreach ($coverageFile in $coverageFiles)
            {
              Copy-Item $coverageFile $resultFiles.FullName;
            }
            Pop-Location;
          displayName: Copy Test Files
    
        - task: PublishTestResults@2
          inputs:
            testResultsFormat: 'VSTest' # 'JUnit' | 'NUnit' | 'VSTest' | 'XUnit' | 'CTest'. Alias: testRunner. Required. Test result format. Default: JUnit.
            testResultsFiles: '$(Agent.TempDirectory)/*.trx'
    
        - task: PublishCodeCoverageResults@1
          condition: true # always try publish coverage results, even if unit tests fail
          inputs:
            codeCoverageTool: 'cobertura' # Options: cobertura, jaCoCo
            summaryFileLocation: '$(Agent.TempDirectory)/ResultFiles/**/coverage.cobertura.xml'

    I ran the dotnet test command with custom arguments to log to trx and set my own results directory. The PowerShell script uses the name of the trx file to find the coverage files and copies them to a ResultFiles folder. Then I added tasks to publish the test results and code coverage results to the Azure DevOps pipeline.

    Pushing coverage results to SonarQube

    I admittedly spent a lot of time chasing down what, in reality, was a very simple change:

      - ${{ if eq(parameters.execute_sonar, true) }}:
        # Prepare Analysis Configuration task
        - task: SonarQubePrepare@5
          inputs:
            SonarQube: ${{ parameters.sonar_endpoint_name }}
            scannerMode: 'MSBuild'
            projectKey: ${{ parameters.sonar_project_key }}
            extraProperties: |
              sonar.cs.opencover.reportsPaths="$(Agent.TempDirectory)/ResultFiles/coverage.opencover.xml"
    
    ### Build and test here
    
      - ${{ if eq(parameters.execute_sonar, true) }}:
        - powershell: |
            $params = "$env:SONARQUBE_SCANNER_PARAMS" -replace '"sonar.branch.name":"[\w,/,-]*"\,?'
            Write-Host "##vso[task.setvariable variable=SONARQUBE_SCANNER_PARAMS]$params"
        # Run Code Analysis task
        - task: SonarQubeAnalyze@5
        # Publish Quality Gate Result task
        - task: SonarQubePublish@5
          inputs:
            pollingTimeoutSec: '300'
    

    That’s literally it. I experimented with a lot of different settings, but, in the end, all it took was setting sonar.cs.opencover.reportsPaths in the extraProperties input of the SonarQubePrepare task.

    SonarQube Success!

    Sample SonarQube Report

    In the small project that I tested, I was able to get analysis and code coverage published to my SonarQube instance. Unfortunately, this means that I now have technical debt to fix and unit tests to write in order to improve my code coverage, but, overall, this was a very successful venture.

  • Tech Tips – Adding Linting to C# Projects

    Among the JavaScript/TypeScript community, ESLint and Prettier are very popular ways to enforce standards and formatting within your code. In trying to find similar functionality for C#, I did not find anything as ubiquitous as ESLint/Prettier, but there are some front-runners.

    Roslyn Analyzers and Dotnet Format

    John Reilly has a great post on enabling Roslyn Analyzers in your .NET applications. He also posted instructions on using the dotnet format tool as a “Prettier for C#”.

    I will not bore you by re-hashing his posts, but following them allowed me to apply some basic formatting and linting rules to my projects. Additionally, the Roslyn Analyzers can be made to generate build warnings and errors, so any build worth its salt (one that fails on warnings) will be free of undesirable code.
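
    For a flavor of what that setup amounts to, it boils down to a handful of MSBuild properties in the csproj (or a shared Directory.Build.props); the values below are illustrative rather than a recommendation:

    <PropertyGroup>
      <!-- Turn on the built-in .NET analyzers and code-style enforcement, and fail the build on warnings. -->
      <EnableNETAnalyzers>true</EnableNETAnalyzers>
      <AnalysisMode>AllEnabledByDefault</AnalysisMode>
      <EnforceCodeStyleInBuild>true</EnforceCodeStyleInBuild>
      <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
    </PropertyGroup>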

    SonarLint

    I was not really content to stop there, and a quick Google search led me to an interesting article on linting options for C#. One of those options was SonarLint. While SonarLint bills itself as an IDE plugin, it has a Roslyn Analyzer package (SonarAnalyzer.CSharp) that can be added and configured in a similar fashion to the built-in Roslyn Analyzers.

    Following the instructions in the article, I installed SonarAnalyzer and configured it alongside the base Roslyn Analyzers. It produced a few more warnings, particularly around some best practices from Sonar that go beyond what the Microsoft analyzers cover.
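
    Wiring SonarLint’s analyzers into the build is, likewise, just another analyzer package reference in the project file (the floating version here is for illustration; pin a specific version in practice):

    <ItemGroup>
      <!-- SonarAnalyzer.CSharp runs as a Roslyn analyzer during build. -->
      <PackageReference Include="SonarAnalyzer.CSharp" Version="8.*" PrivateAssets="all" />
    </ItemGroup>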

    SonarQube, my old friend

    Getting into SonarLint brought me back to SonarQube. What seems like forever ago, but was really only a few years ago, SonarQube was something of a go-to tool in my position. We had hoped to gather a portfolio-wide view of our bugs, vulnerabilities, and code smells. For one reason or another, we abandoned that particular tool set.

    After putting SonarLint in place, I was interested in jumping back in, at least in my home lab, to see what kind of information I could get out of Sonar. I found the Kubernetes instructions and got to work setting up a quick instance on my production cluster, alongside my ProGet instance.

    Once installed, I have to say, the application has done well to improve the user experience. Tying into my Azure DevOps instance was quick and easy, with very good in-application tutorials for that configuration. I set up a project based on the pipeline for my test application, made my pipeline changes, and waited for results…

    Failed! I kept getting errors about not being allowed to set the branch name in the Community Edition. That is fair, and for my projects I only really need analysis on the main branch, so I set up analysis to only happen on builds of main. Failed again!

    There seems to be a known issue around this, but thanks to the SonarSource community, I found a workaround for my pipeline. With the workaround in place, I had my code analysis, but, well, what do I do with it? Well, I can add quality gates to fail builds based on missing code coverage, tweak my rule sets, and have a “portfolio wide” view of my private projects.

    Setting the Standard

    For any open source C# projects, simply building the linting/formatting into the build/commit process might be enough. If project maintainers are so inclined, they can add their projects to SonarCloud and get the benefits of SonarQube (including adding quality gates).

    For enterprise customers, the move to a paid tier depends on how much visibility you want in your code base. Sonar can be an expensive endeavor, but provides a lot of quality and tech debt tracking that you may find useful. My suggestion? Start with a trial or the community version, and see if you like it before you start requesting budget.

    Either way, setting standards for formatting and analysis on your C# projects makes contributions across teams much easier and safer. I suggest you try it!

  • Deprecating Microsoft Teams Notifications

    My first “owned” open source project was a TeamCity plugin to send notifications to Microsoft Teams based on build events in TeamCity. It was based on a similar TeamCity plugin for Slack.

    Why? Well, out of necessity. Professionally, we were migrating to MS Teams, and we wanted functionality to post messages when builds failed or succeeded. So I copied the Slack notifier, made the requisite changes, and it worked well enough to publish. I even went the extra mile of adding some GitHub Actions to build and deploy, so that I could fix Dependabot security issues quickly.

    The plugin is currently published in JetBrains’ plugin repository.

    The Sun Always Sets

    Fast-forward 5 years: both professionally and personally, I have moved towards Azure DevOps and GitHub Actions for building. Why? Well, the core of them is essentially the same, as Microsoft has melded them together. For open source projects on GitHub, it is a de facto standard, and for my lab instance of Azure DevOps, well, it makes transitioning lab work to professional recommendations much easier. But none of this uses TeamCity.

    Additionally, I have spent the majority of my professional career in C/C++/C#. Java is not incredibly different at its core, but add in Maven, Spring, and the other tag-alongs that come with TeamCity plugin development, and I was well out of my league. And while I have expanded into the various JavaScript languages and frameworks, I have never had a reason to dive deeply into Java.

    So, with that, I am officially deprecating this plugin. Truthfully, I have not done much in the repository recently, so this should not be a surprise. However, I wanted to formally do this so that anyone who may want to take it over (or start over, if they so desire) can do so. I will gladly turn over ownership of the code to someone willing to spend their time to improve it.

    To those who use the plugin: I appreciate all of the support from the community, and I apologize for not doing this sooner: perhaps someone will take the reins and bring the plugin up to the state it deserves.

    Thanks!

  • Pulling metrics from Home Assistant into Prometheus

    I have set up an instance of Home Assistant as the easiest front end for interacting with my home automation setup. While I am using the Universal Devices ISY994 as the primary communication hub for my Insteon devices, Home Assistant provides a much nicer interface for my family, including a great mobile app for them to use the system.

    With my foray into monitoring, I started looking around to see if I could get some device metrics from Home Assistant into my Grafana Mimir instance. It turns out there is a Prometheus integration built right into Home Assistant.

    Read the Manual

    Most of my blog posts are “how to” style: I find a problem that maybe I could not find an exact solution for online, and walk you through the steps. In this case, though, it was as simple as reading the configuration instructions for the Prometheus integration.
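
    In short, the integration is switched on with a few lines in configuration.yaml. A minimal example (the namespace and filter are optional, and the domains listed are just examples):

    # configuration.yaml
    prometheus:
      namespace: hass
      filter:
        include_domains:
          - sensor
          - light
          - switch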

    ServiceMonitor?

    Well, almost that easy. I have been using ServiceMonitor resources within my clusters, rather than setting up explicit scrape configs. Generally, this is easier to manage, since I just install the Prometheus operator, and then create ServiceMonitor instances when I want Prometheus to scrape an endpoint.

    The Home Assistant Prometheus endpoint requires a token, however, and I did not have the desire to dig into configuring a ServiceMonitor with an appropriate secret. For now, it is a to-do on my ever-growing list.
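
    For reference, skipping the ServiceMonitor and using a plain scrape config would look roughly like this (host and token are placeholders; the token is a Home Assistant long-lived access token):

    scrape_configs:
      - job_name: homeassistant
        metrics_path: /api/prometheus
        authorization:
          credentials: "<long-lived-access-token>"  # placeholder
        static_configs:
          - targets: ["homeassistant.local:8123"]   # placeholder host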

    What can I do now?

    This integration has opened up a LOT of new alerts on my end. Home Assistant talks to many of the devices in my home, including lights and garage doors. This means I can write alerts for when lights go on or off, when the garage door goes up or down, and, probably the best, when devices are reporting low battery.

    The first alert I wrote tells me when my Ring Doorbell battery drops below 30%. Couple that with my Prometheus Alerts module for Magic Mirror, and I now get a display when the battery needs to be changed.
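
    The rule itself is a standard Prometheus alerting rule; the metric and label names below are placeholders, since the exact names depend on what the Home Assistant integration exports for your sensors:

    groups:
      - name: home-assistant
        rules:
          - alert: RingDoorbellBatteryLow
            # Metric and label names are illustrative placeholders
            expr: homeassistant_sensor_battery_percent{entity="sensor.front_door_battery"} < 30
            for: 1h
            labels:
              severity: warning
            annotations:
              summary: "Ring Doorbell battery is below 30%"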

    What’s Next?

    I am giving back to the community. The Prometheus integration for Home Assistant does not currently report cover statuses. Covers are things like shades or, in my case, garage doors. Since I would like to be able to alert when the garage door is open, I am working on a pull request to add cover support to the Prometheus integration.

    It also means I would LOVE to get my hands on some automated shades/blinds… but that sounds really expensive.

  • Bruce Lee to the Rescue! Health Checks for .NET Worker Services

    As we start to develop more containers that are being run in Kubernetes, we encounter non-http workloads. I came across a workload that represents a non-http processor for queued events. In .NET, I used the IHostedService offerings to run a simple service in a container to do this work.

    However, when it came time to deploy to Kubernetes, I quickly realized that my standard liveness/health checks would not work for this container. I searched around, and the HealthChecks libraries are limited to ASP.NET Core. Not wanting to bloat my image, I looked for some alternatives. My Google searches led me to Bruce Lee.

    No, not Bruce Lee the actor, but Bruce Lee Harrison. Bruce published a library called TinyHealthCheck, which provides the ability to add lightweight health check endpoints without dragging in the entire set of ASP.NET Core libraries.

    While it seems a pretty simple concept, it solved an immediate need of mine with minimal effort. Additionally, there was a sample and documentation!

    Why call this out? Many developers use open source software to solve these types of problems, and I feel the authors deserve a little publicity for their efforts. So, thanks to the contributors of TinyHealthCheck; I will certainly watch this repository and contribute as I can.

  • MMM-PrometheusAlerts: Display Alerts in Magic Mirror

    I have had MagicMirror running for about a year now, and I love having it in my office. A quick glance gives my family and me a look at information that is relevant for the days ahead. As I continue my dive into Prometheus for monitoring, it occurred to me that I might be able to create a new module for displaying Prometheus alerts.

    Current State

    Presently, my Magic Mirror configuration uses the following modules:

    Creating the Prometheus Alerts module

    In recent weeks, my experimentation with Mimir has led me to write some alerts to keep tabs on things in my Kubernetes cluster and, well, the overall health of my systems. Currently, I have a personal Slack team with an alerts channel, and that has been working nicely. However, as I stared at my office panel, it occurred to me that there should be a way to gather these alerts and show them in Magic Mirror.

    Since Grafana Mimir is Prometheus-compatible, I should be able to use the Prometheus APIs to get alert data. A quick Google search yielded the HTTP API for Prometheus.
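
    The relevant piece is the alerts endpoint, GET /api/v1/alerts, which returns the currently pending and firing alerts in a shape roughly like this (abbreviated):

    {
      "status": "success",
      "data": {
        "alerts": [
          {
            "labels": { "alertname": "HighMemoryUsage", "severity": "warning" },
            "annotations": { "summary": "Memory usage is high" },
            "state": "firing",
            "activeAt": "2022-05-01T12:34:56Z",
            "value": "9.5e+01"
          }
        ]
      }
    }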

    With that in hand, I copied the StatusPage IO module’s code and got to work. In many ways, the Prometheus Alerts are simpler than Status Page, since it is a single collection of alerts with labels and annotations. So I stripped out some of the extra handling for Status Page Components, renamed a few things, and after some debugging, I have a pretty good MVP.

    What’s next?

    It’s pretty good, but not perfect. I started adding some issues to the GitHub repository for things like message templating and authentication, and when I get around to adding authentication to Grafana Mimir and Loki, well, I’ll probably need to update the module.

    Watch the GitHub repository for changes!

  • Tech Tips – Upgrading your Argo cluster tools

    Moving my home lab to GitOps and ArgoCD has been, well, nearly invisible now. With the build pipelines I have in place, I’m able to work on my home projects without much thought to deploying changes to my clusters.

    My OCD, however, prevents me from running old versions. I really want to stay up to date when it comes to the tools I am using. This meant periodically updating all of my Helm repositories and searching for new chart versions by hand.

    After getting tired of this “search and replace,” I wrote a small PowerShell script which automatically updates my local Helm repositories and then searches through the current directory (recursively) for Chart.yaml files. If there are dependencies in a Chart.yaml, it upgrades them to the latest version and saves the file.
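
    The script itself is nothing fancy. A rough sketch of the idea (not the exact script; it assumes the powershell-yaml module for ConvertFrom-Yaml/ConvertTo-Yaml, and that chart names are unique enough for helm search to match):

    # Sketch: bump Chart.yaml dependency versions to the latest published releases
    helm repo update | Out-Null

    Get-ChildItem -Recurse -Filter Chart.yaml | ForEach-Object {
        $chartFile = $_.FullName
        $chart = Get-Content $chartFile -Raw | ConvertFrom-Yaml
        if (-not $chart.dependencies) { return }  # acts as 'continue' inside ForEach-Object

        $changed = $false
        foreach ($dep in $chart.dependencies) {
            # Latest published version of this dependency across the configured repos
            $latest = helm search repo $($dep.name) -o json | ConvertFrom-Json |
                      Where-Object { $_.name -like "*/$($dep.name)" } |
                      Select-Object -First 1
            if ($latest -and $latest.version -ne $dep.version) {
                Write-Host "$chartFile : $($dep.name) $($dep.version) -> $($latest.version)"
                $dep.version = $latest.version
                $changed = $true
            }
        }

        # Re-serialize the chart; this loses comments/ordering, which is fine for my use
        if ($changed) { $chart | ConvertTo-Yaml | Set-Content $chartFile }
    }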

    It does not, at the moment, automatically commit these changes, so I do have a manual step to confirm the upgrades. However, it is a much faster way to get changes staged for review and committing.

  • Creating a simple Nginx-based web server image

    One of the hardest parts of blogging is identifying topics. I sometimes struggle with identifying things that I have done that would be interesting or helpful to others. In trying to establish a “rule of thumb” for such decisions, I think things that I have done at least twice qualify as potential topics. As it so happens, I have had to construct simple web server containers twice in the last few weeks.

    The Problem

    Very simply, I wanted to be able to build a quick and painless container to host some static web sites. They are mostly demo sites for some of the UI libraries that we have been building. One is raw HTML, the other is built using Storybook.js, but both end up being a set of HTML/CSS/JS files to be hosted.

    Requirements

    The requirements for this one are pretty easy:

    • Host a static website
    • Do not run as root

    There was no requirement to be able to change the content outside of the image: changes would be handled by building a new image.

    My Solution

    I have become generally familiar with Nginx for a variety of uses. It serves as a reverse proxy for my home lab and is my go-to ingress controller for Kubernetes. Since I am familiar with its configuration, I figured it would be a good place to start.

    Quick But Partial Success

    The “do not run as root” requirement led me to the Nginx unprivileged image. With that as a base, I tried something pretty quick and easy:

    # Dockerfile
    FROM nginxinc/nginx-unprivileged:1.20 as runtime
    
    
    COPY output/ /usr/share/nginx/html

    Where output contains the generated HTML files that I wanted to host.

    This worked great for the first page that loaded. However, links to other pages within the site kept coming back from Nginx with :8080 as the port. Our networking configuration offloads SSL outside of the cluster and uses ingress within the cluster, so I did not want any port forwarding at all.

    Custom Configuration Completes the Set

    At that point, I realized that I needed to configure Nginx to disable the port redirects, and then include the new configuration in my container. So I traipsed through the documentation for the Nginx containers. As it turns out, the easiest way to configure these images is to replace the default.conf file in the /etc/nginx/conf.d folder.

    So I went about creating a new Nginx config file with the appropriate settings:

    server {
      listen 8080;
      server_name localhost;
      port_in_redirect off;

      location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
      }

      error_page 500 502 503 504 /50x.html;
      location = /50x.html {
        root /usr/share/nginx/html;
      }
    }

    From there, my Dockerfile changed only slightly:

    # Dockerfile
    FROM nginxinc/nginx-unprivileged:1.20 as runtime
    COPY nginx/default.conf /etc/nginx/conf.d/default.conf
    COPY output/ /usr/share/nginx/html

    Success!

    With those changes, the image built with the appropriate files and the links no longer had the port redirect. Additionally, my containers are not running as root, so I do not run afoul of our cluster’s policy management rules.

    Hope this helps!

  • Nginx Reverse proxy: A slash makes all the difference.

    I have been doing some work to build up some standard processes for Kubernetes. ArgoCD has become a big part of that, as it allows us to declaratively manage the state of our clusters.

    After recovering from a small blow-up in the home lab (post coming), I wanted to migrate my cluster tools to utilize the label selector feature of the ApplicationSet’s Cluster Generator. Why? It allows me to selectively manage tools in the cluster. After all, not every cluster needs the nfs-external-provisioner to provide a StorageClass for pod file storage.

    As part of this, I wanted to deploy tools to the local Argo cluster. In order to do that, the local cluster needs a secret. I tried to follow the instructions, but when I clicked to view the cluster details, I got a 404 error. I dug around the logs, and my request was not even getting to the Argo application server container.

    When I looked at the Ingress controller logs, it showed the request looked something like this:

    my.url.com/api/v1/clusters/http://my.clusterurl.svc

    Obviously, that’s not correct. The call coming from the UI is this:

    my.url.com/api/v1/clusters/http%3A%2F%2Fmy.clusterurl.svc

    Notice the encoding: My Nginx reverse proxy (running on a Raspberry Pi outside my main server) was decoding the request before passing it along to the cluster.

    The question was, why? A quick Google search led right to their documentation:

    • If proxy_pass is specified with URI, when passing a request to the server, part of a normalized request URI matching the location is replaced by a URI specified in the directive
    • If proxy_pass is specified without URI, a request URI is passed to the server in the same form as sent by a client when processing an original request

    What? Essentially, it means that the presence of a URI (including a trailing slash) on proxy_pass dictates whether Nginx rewrites the request URI before passing it along, or hands it over exactly as the client sent it.

    ## With a slash at the end, the client request is normalized
    location /name/ {
        proxy_pass http://127.0.0.1/remote/;
    }
    
    ## Without a slash, the request is as-is
    location /name/ {
        proxy_pass http://127.0.0.1;
    }

    After removing the slash, the Argo UI was able to load the cluster details correctly.

  • Tech Tip – Azure DevOps Pipelines Newline handling

    Just a quick note: It would seem that somewhere between Friday, April 29, 2022 and Monday, May 2, 2022, Azure DevOps pipelines changed their handling of newlines in YAML literal blocks. The change caused our pipelines to stop executing with the following error:

    While scanning a literal block scalar, found extra spaces in first line

    What caused it? Multi-line literal blocks whose first line is blank (or contains only whitespace):

    - powershell: |
        
        Write-Host "Azure Fails on this now"
      displayName: Bad Script
      
    - powershell: |
        Write-Host "Azure works with this"
      displayName: Good Script