Tag: open source

  • Migrating to Github Packages

    I have been running a free version of Proget locally for years now. It served as a home for Nuget packages, Docker images, and Helm charts for my home lab projects. But, in an effort to slim down the apps that are running in my home lab, I took a look at some alternatives.

    Where can I put my stuff?

    When I logged in to my Proget instance and looked around, it occurred to me that I only had 3 types of feeds: Nuget packages, Docker images, and Helm charts. So to move off of Proget, I needed to find replacements for all three.

    Helm Charts

    Back in the heady days of using Octopus Deploy for my home lab, I used published Helm charts to deploy my applications. However, since I switched to a Gitops workflow with ArgoCD, I haven’t published a Helm chart in a few years. I deleted that feed in Proget. One down, two to go.

    Nuget Packages

    I have made a few different attempts to create Nuget packages for public consumption. A number of years ago, I tried publishing a data layer designed to be used across platforms (think APIs and mobile applications), but even I stopped using that in favor of Entity Framework Core and good old-fashioned data models. More recently, I created some “platform” libraries to encapsulate some of the common code that I use in my APIs and other projects. They serve as utility libraries as well as a reference architecture for my professional work.

    There are a number of options for hosting Nuget feeds, with varying costs depending on structure. I considered the following options:

    • Azure DevOps Artifacts
    • Github Packages
    • Nuget.org

    I use Azure DevOps for my builds, and briefly considered using the artifacts feeds. However, none of my libraries are private; everything I write lives in a public Github repository. With that in mind, it seemed that the free offerings from Github and Nuget were more appropriate.

    I published the data layer packages to Nuget previously, so I have some experience with that. However, with these platform libraries, while they are public, I do not expect them to be heavily used. For that reason, I decided that publishing the packages to Github Packages made a little more sense. If these platform libraries get to the point where they are heavily used, I can always publish stable packages to Nuget.org.

    Container Images

    Container images take up the bulk of my Proget storage. I only have 5 container images, but I never clean anything up, so those 5 images account for about 7 GB of data. When I was investigating alternatives, I wanted to make sure I had some way to clean up old pre-release tags and manifests to keep my usage down.

    I considered two alternatives:

    • Azure Container Registry
    • Github Container Registry

    An Azure Container Registry instance would cost me about $5 a month and provide me with 10 GB of storage. Github Container Registry provides 500 MB of storage and 1 GB of data transfer per month, but those limits only apply to private packages.

    As with my Nuget packages, nothing that I have is private, and Github Packages is free for public packages. Additionally, I found a Github task that will clean up old images. As cleanup was one of my “new” requirements, I decided to take a run at Github Packages.
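
    One such option is GitHub’s own delete-package-versions action. As an illustrative sketch (the package name and retention numbers below are placeholders, and this may not be the exact task you end up using), a scheduled cleanup workflow looks roughly like this:

    name: cleanup-old-images
    on:
      schedule:
        - cron: '0 3 * * 0' # run weekly, Sunday at 3 AM

    permissions:
      packages: write

    jobs:
      cleanup:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/delete-package-versions@v5
            with:
              package-name: 'my-image'  # placeholder image name
              package-type: 'container'
              min-versions-to-keep: 5   # keep the newest five versions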

    Making the switch

    With my current setup, the switch was fairly simple. Nuget publishing is controlled by my Azure DevOps service connections, so I created a new service connection for my Github feed. The biggest change was some housekeeping to add appropriate information to the Nuget package itself. This included adding the RepositoryUrl property to the .csproj files, which tells Github which repository to associate the package with.
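
    That property is a one-liner in each project file (the URL below is a placeholder):

    <PropertyGroup>
      <RepositoryUrl>https://github.com/spyder007/my-library</RepositoryUrl>
    </PropertyGroup>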

    The container registry move wasn’t much different: again, some housekeeping to add the appropriate labels to the images (for Github’s container registry, the org.opencontainers.image.source label ties an image to its repository). From there, a few template changes and the images were in the Github container registry.

    Overall, the changes were pretty minimal. I have a few projects left to convert, and once that is done, I can decommission my Proget instance.

    Next on the chopping block…

    I am in the beginning stages of evaluating Azure Key Vault as a replacement for my Hashicorp Vault instance. Although it comes at a cost, for my usage it is most likely under $3 a month, and getting away from self-hosted secrets management would make me a whole lot happier.

  • Platform Engineering

    As I continue to build out some reference architecture applications, I realized that there was a great deal of boilerplate code that I add to my APIs to get things running. Time for a library!

    Enter the “Platform”

    I am generally terrible at naming things, but Spydersoft.Platform seemed like a good base namespace for this one. The intent is to put the majority of my boilerplate code into a set of libraries that can be referenced to make adding stuff easier.

    But, what kind of “stuff”? Well, for starters (a rough sketch of the wiring these libraries wrap follows the list):

    • Support for OpenTelemetry trace, metrics, and logging
    • Serilog for console logging
    • Simple JWT identity authentication (for my APIs)
    • Default Health Check endpoints
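
    To give a feel for the boilerplate being wrapped, here is a minimal sketch using the standard ASP.NET Core, OpenTelemetry, and Serilog APIs. This is the kind of code the libraries hide, not the Spydersoft.Platform API itself; the configuration keys are placeholders, and the OpenTelemetry logging side is wired similarly:

    using Microsoft.AspNetCore.Authentication.JwtBearer;
    using OpenTelemetry;
    using OpenTelemetry.Metrics;
    using OpenTelemetry.Trace;
    using Serilog;

    var builder = WebApplication.CreateBuilder(args);

    // Serilog for console logging.
    builder.Host.UseSerilog((context, loggerConfig) => loggerConfig.WriteTo.Console());

    // OpenTelemetry traces and metrics.
    builder.Services.AddOpenTelemetry()
        .WithTracing(tracing => tracing.AddAspNetCoreInstrumentation())
        .WithMetrics(metrics => metrics.AddAspNetCoreInstrumentation());

    // Simple JWT bearer authentication.
    builder.Services
        .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options =>
        {
            options.Authority = builder.Configuration["Identity:Authority"]; // placeholder key
            options.Audience = builder.Configuration["Identity:Audience"];   // placeholder key
        });

    // Default health check endpoint.
    builder.Services.AddHealthChecks();

    var app = builder.Build();
    app.UseAuthentication();
    app.MapHealthChecks("/health");
    app.Run();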

    Going deep with Health Checks

    The first three were pretty easy: just some POCOs for options and then startup extensions to add the necessary items with the proper configuration. With health checks, however, I went a little overboard.

    My goal was to be able to implement IHealthCheck anywhere and decorate it in such a way that it would be added to the health check framework and could be tagged. Furthermore, I wanted to use tags to drive standard endpoints.

    In the end, I used a custom attribute and some reflection to add the checks that are found in the loaded AppDomain. I won’t bore you: the documentation should do that just fine.
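
    That said, the gist looks something like this. The attribute and extension method below are hypothetical stand-ins, not the library’s actual API; only the Microsoft.Extensions.Diagnostics.HealthChecks types are real:

    using System;
    using System.Linq;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Diagnostics.HealthChecks;

    // Hypothetical attribute: marks an IHealthCheck for automatic registration.
    [AttributeUsage(AttributeTargets.Class)]
    public sealed class AutoHealthCheckAttribute : Attribute
    {
        public AutoHealthCheckAttribute(string name, params string[] tags)
        {
            Name = name;
            Tags = tags;
        }

        public string Name { get; }
        public string[] Tags { get; }
    }

    public static class HealthCheckDiscoveryExtensions
    {
        public static IHealthChecksBuilder AddDiscoveredHealthChecks(this IServiceCollection services)
        {
            IHealthChecksBuilder builder = services.AddHealthChecks();

            // Scan loaded assemblies for decorated IHealthCheck implementations.
            // (A real implementation should guard against ReflectionTypeLoadException.)
            var checkTypes = AppDomain.CurrentDomain.GetAssemblies()
                .SelectMany(assembly => assembly.GetTypes())
                .Where(type => typeof(IHealthCheck).IsAssignableFrom(type) && !type.IsAbstract);

            foreach (var type in checkTypes)
            {
                if (Attribute.GetCustomAttribute(type, typeof(AutoHealthCheckAttribute))
                    is not AutoHealthCheckAttribute attribute)
                {
                    continue;
                }

                // Register the check under its declared name and tags.
                builder.Add(new HealthCheckRegistration(
                    attribute.Name,
                    provider => (IHealthCheck)ActivatorUtilities.CreateInstance(provider, type),
                    failureStatus: null,
                    tags: attribute.Tags));
            }

            return builder;
        }
    }

    Tag-driven endpoints then fall out naturally: MapHealthChecks accepts a HealthCheckOptions with a Predicate, so an endpoint like /health/ready can filter to registrations whose Tags contain “ready”.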

    But can we test it?

    Testing startup extensions is, well, interesting. Technically, it is an integration test, but I did not want to set up Playwright tests to execute the API tests. Why? Usually, API integration tests are run against a particular configuration, but in this case, I needed to run the reference application with a lot of different configurations in order to fully test the extensions. Enter WebApplicationFactory.

    With WebApplicationFactory, I was able to configure tests to stand up a copy of the reference application with different configurations. I could then verify the configuration using some custom health checks.
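
    As a sketch of what that looks like (assuming an xunit test project that can see the reference app’s Program class; the configuration key is a placeholder):

    using System.Collections.Generic;
    using System.Net;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.Mvc.Testing;
    using Microsoft.Extensions.Configuration;
    using Xunit;

    public class StartupExtensionTests : IClassFixture<WebApplicationFactory<Program>>
    {
        private readonly WebApplicationFactory<Program> _factory;

        public StartupExtensionTests(WebApplicationFactory<Program> factory) => _factory = factory;

        [Fact]
        public async Task HealthEndpoint_Responds_WithAlternateConfiguration()
        {
            // Stand up the reference application with a different configuration.
            var client = _factory.WithWebHostBuilder(builder =>
                builder.ConfigureAppConfiguration((_, config) =>
                    config.AddInMemoryCollection(new Dictionary<string, string?>
                    {
                        ["Telemetry:Enabled"] = "false" // placeholder setting
                    })))
                .CreateClient();

            // A custom health check can then report whether the startup
            // extensions honored the configuration.
            var response = await client.GetAsync("/health");

            Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        }
    }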

    I am on the fence as to whether this is a “unit” test or an “integration” test. I’m not calling out to any other application, which is usually what defines an integration test. But I did have to configure a reference application in order to get things tested.

    Whatever you call it, I have coverage on my startup extensions, and even caught a few bugs while I was writing the tests.

    Make it truly public?

    Right now, the build publishes the Nuget package to my private Nuget feed. I am debating moving it to Nuget.org (or maybe Github’s package feeds). The code is open source, and I want the library itself to be openly available as well. But until I make the decision on where to put it, I will keep it in my private feed. If you have any interest in it, watch or star the repo in GitHub: it will help me gauge the level of interest.

  • Badges… We don’t need no stinkin’ badges!

    Well… Maybe we do. This is a quick plug (no reimbursement of any kind) for the folks over at Shields.io, who make creating custom badges for readme files and websites an easy and fun task.

    A Quick Demo

    License for spyder007/MMM-PrometheusAlerts
    Build Status for spyder007/MMM-PrometheusAlerts

    The badges above are generated from Shields.io. The first link looks like this:

    https://img.shields.io/github/license/spyder007/MMM-PrometheusAlerts

    My Github username (spyder007) and the repository name (MMM-PrometheusAlerts) are used in the Image URL, which generates the badge. The second one, build status, looks like this:

    https://img.shields.io/github/actions/workflow/status/spyder007/MMM-PrometheusAlerts/node.js.yml

    In this case, my Github username and the repository name remain the same, but node.js.yml is the name of the workflow file for which I want to display the status.
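
    Embedding a badge in a Readme is just a Markdown image, optionally wrapped in a link (the link target here is my choice, not something Shields.io prescribes):

    [![Build Status](https://img.shields.io/github/actions/workflow/status/spyder007/MMM-PrometheusAlerts/node.js.yml)](https://github.com/spyder007/MMM-PrometheusAlerts/actions)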

    Every badge in Shields.io has a “builder” page that explains how to build the image URL and allows you to override styles, colors, and labels, and even add logos from any icon in the Simple Icons collection.

    Some examples of alterations to my build status above:

    “For the Badge” style, Bugatti Logo with custom color
    Flat style, CircleCI logo, Custom label

    Too many options to list…

    Now, these are live badges, meaning that if my build fails, the green “passing” will go to a red “failing.” Shields.io does this by using the variety of APIs available to gather data about builds, code coverage, licenses, chat, sizes and download counts, funding, issue tracking… It’s a lot. But the beauty of it is that you can create Readme files or websites with easy-to-read visuals. My PI Monitoring repository’s Readme makes use of a number of these shields to give you a quick look at the status of the repo.

  • Using SonarCloud for Open Source

    My last few posts have centered around adding some code linting and analysis to C# projects. Most of this has been to identify some standards and best practices for my current position.

    During this research, I came across SonarCloud, which is Sonarqube’s hosted instance. SonarCloud is free for open source projects, and given the breadth of languages it supports, I have decided to start adding my open source projects to SonarCloud. This will allow some extra visibility into my open source code and provide me with a great sandbox for evaluating Sonarqube for corporate use.

    I added Sonar Analysis to a GitHub actions pipeline for my Hyper-V Info API. You can see the Sonar analysis on SonarCloud.io.

    The great part?? All the code is public, including the GitHub Actions pipeline. So, feel free to poke around and see how I made it work!

  • Publishing Code Coverage in both Azure DevOps and SonarQube

    I spent more time than I care to admit trying to get the proper configuration for reporting code coverage to both the Azure DevOps pipeline and SonarQube. The solution was, well, fairly simple, but it is worth writing down.

    Testing, Testing…

    After fumbling around with some of the linting and publishing to SonarQube’s Community Edition, I succeeded in creating build pipelines which, when building from the main branch, will run SonarQube analysis and publish to the project.

    I modified my build template as follows:

      - ${{ if eq(parameters.execute_sonar, true) }}:
        # Prepare Analysis Configuration task
        - task: SonarQubePrepare@5
          inputs:
            SonarQube: ${{ parameters.sonar_endpoint_name }}
            scannerMode: 'MSBuild'
            projectKey: ${{ parameters.sonar_project_key }}
    
      - task: DotNetCoreCLI@2
        displayName: 'DotNet restore packages (dotnet restore)'
        inputs:
          command: 'restore'
          feedsToUse: config
          nugetConfigPath: "$(Build.SourcesDirectory)/nuget.config"
          projects: "**/*.csproj"
          externalFeedCredentials: 'feedName'
    
      - task: DotNetCoreCLI@2
        displayName: Build (dotnet build)
        inputs:
          command: build
          projects: ${{ parameters.publishProject }}
          arguments: '--no-restore --configuration ${{ parameters.BUILD_CONFIGURATION }} /p:InformationalVersion=$(fullSemVer) /p:AssemblyVersion=$(AssemblySemVer) /p:AssemblyFileVersion=$(AssemblySemFileVer)'
          
    
    ## Test steps are here, details below
    
      - ${{ if eq(parameters.execute_sonar, true) }}:
        - powershell: |
            # SonarQube Community Edition does not support branch analysis, so
            # strip the sonar.branch.name parameter the extension injects.
            $params = "$env:SONARQUBE_SCANNER_PARAMS" -replace '"sonar.branch.name":"[\w,/,-]*"\,?'
            Write-Host "##vso[task.setvariable variable=SONARQUBE_SCANNER_PARAMS]$params"
          displayName: Strip branch name from Sonar parameters
    
        # Run Code Analysis task
        - task: SonarQubeAnalyze@5
    
        # Publish Quality Gate Result task
        - task: SonarQubePublish@5
          inputs:
            pollingTimeoutSec: '300'

    In the code above, the execute_sonar parameter lets me execute the Sonarqube steps only on the main branch, which keeps the community edition happy while retaining the rest of my pipeline definition on feature branches.

    This configuration worked and my project’s analysis showed in Sonarqube.

    Testing, Testing, 1, 2, 3…

    I went about adding some trivial unit tests in order to verify that I could get code coverage publishing. I have had experience with Coverlet in the past, and it allows for generation of various coverage report formats, so I went about adding it to the project using the simple collector.

    Within my build pipeline, I added the following (if you are referencing the snippet above, this is in place of the comment):

    - ${{ if eq(parameters.execute_tests, true) }}:
        - task: DotNetCoreCLI@2
          displayName: Test (dotnet test)
          inputs:
            command: test
            projects: '**/*tests/*.csproj'
            arguments: '--no-restore --configuration ${{ parameters.BUILD_CONFIGURATION }} --collect:"XPlat Code Coverage"'
        

    Out of the box, this command worked well: tests were executed, and both Tests and Coverage information were published to Azure DevOps. Apparently, the DotNetCoreCLI@2 task defaults publishTestResults to true.

    While the tests ran, the coverage was not published to Sonarqube. I had hoped that the Sonarqube Extension for Azure DevOps would pick this up, but, alas, this was not the case.

    Coverlet, DevOps, and Sonarqube… oh my.

    As it turns out, you have to tell Sonarqube explicitly where to find the coverage report. And, while the Sonarqube documentation is pretty good at describing how to report coverage to Sonarqube from the CLI, the Azure DevOps integration documentation does not specify how to accomplish this, at least not outright. Also, while Azure DevOps recognizes cobertura coverage reports, Sonarqube prefers opencover reports.

    I had a few tasks ahead of me:

    1. Generate my coverage reports in two formats
    2. Get a specific location for the coverage reports in order to pass that information to Sonarqube.
    3. Tell the Sonarqube analyzer where to find the opencover reports

    Generating Multiple Coverage Reports

    As mentioned, I am using Coverlet to collect code coverage. Coverlet supports RunSettings files, which help standardize various settings within the test project. It also allowed me to generate two coverage reports in different formats. I created a coverlet.runsettings file in my test project’s directory and added this content:

    <?xml version="1.0" encoding="utf-8" ?>
    <RunSettings>
      <DataCollectionRunSettings>
        <DataCollectors>
          <DataCollector friendlyName="XPlat code coverage">
            <Configuration>
              <Format>opencover,cobertura</Format>          
            </Configuration>
          </DataCollector>
        </DataCollectors>
      </DataCollectionRunSettings>
    </RunSettings>

    To make it easy on my build and test pipeline, I added the following property to my test csproj file:

    <PropertyGroup>
      <RunSettingsFilePath>$(MSBuildProjectDirectory)\coverlet.runsettings</RunSettingsFilePath>
    </PropertyGroup>

    There are a number of settings for Coverlet that can be set in the runsettings file; the full list can be found in their VSTest Integration documentation.

    Getting Test Result Files

    As mentioned above, the DotNetCoreCLI@2 task defaults publishTestResults to true. This setting adds arguments to the test command to enable trx logging and set a results directory. However, it means I am not able to specify a results directory on my own.

    Even specifying the directory myself does not fully solve the problem: running the tests with coverage and trx logging generates a trx file named with username/computer name/timestamp and two sets of coverage reports: one set is stored under a folder matching that username/computer name/timestamp name, and the other under a random Guid.

    To ensure I only pulled one set of tests, and that Azure DevOps didn’t complain, I updated my test execution to look like this:

      - ${{ if eq(parameters.execute_tests, true) }}:
        - task: DotNetCoreCLI@2
          displayName: Test (dotnet test)
          inputs:
            command: test
            projects: '**/*tests/*.csproj'
            publishTestResults: false
            arguments: '--no-restore --configuration ${{ parameters.BUILD_CONFIGURATION }} --collect:"XPlat Code Coverage" --logger trx --results-directory "$(Agent.TempDirectory)"'
        
        - pwsh: |
            Push-Location $(Agent.TempDirectory)

            # Create the folder that will hold the one set of coverage reports;
            # New-Item returns the DirectoryInfo, which the copy below uses.
            $resultFiles = New-Item -ItemType Directory -Name "ResultFiles"

            # The trx file's base name matches the folder containing the
            # coverage reports for that run (assumes a single trx file).
            $trxFile = Get-ChildItem *.trx
            $trxFileName = [System.IO.Path]::GetFileNameWithoutExtension($trxFile)

            # Copy that set of coverage reports into ResultFiles.
            Push-Location $trxFileName
            $coverageFiles = Get-ChildItem -Recurse -Filter coverage.*.xml
            foreach ($coverageFile in $coverageFiles)
            {
              Copy-Item $coverageFile $resultFiles.FullName
            }
            Pop-Location
            Pop-Location
          displayName: Copy Test Files
    
        - task: PublishTestResults@2
          inputs:
            testResultsFormat: 'VSTest' # the trx logger produces VSTest-format results
            testResultsFiles: '$(Agent.TempDirectory)/*.trx'
    
        - task: PublishCodeCoverageResults@1
          condition: true # always try publish coverage results, even if unit tests fail
          inputs:
            codeCoverageTool: 'cobertura' # Options: cobertura, jaCoCo
            summaryFileLocation: '$(Agent.TempDirectory)/ResultFiles/**/coverage.cobertura.xml'

    I ran the dotnet test command with custom arguments to log to trx and set my own results directory. The PowerShell script uses the name of the trx file to find the matching coverage files and copies them to the ResultFiles folder. Then I added tasks to publish test results and code coverage results to the Azure DevOps pipeline.

    Pushing coverage results to Sonarqube

    I admittedly spent a lot of time chasing down what, in reality, was a very simple change:

      - ${{ if eq(parameters.execute_sonar, true) }}:
        # Prepare Analysis Configuration task
        - task: SonarQubePrepare@5
          inputs:
            SonarQube: ${{ parameters.sonar_endpoint_name }}
            scannerMode: 'MSBuild'
            projectKey: ${{ parameters.sonar_project_key }}
            extraProperties: |
              sonar.cs.opencover.reportsPaths="$(Agent.TempDirectory)/ResultFiles/coverage.opencover.xml"
    
    ### Build and test here
    
      - ${{ if eq(parameters.execute_sonar, true) }}:
        - powershell: |
            # Same Community Edition workaround as above: strip the
            # sonar.branch.name parameter before analysis runs.
            $params = "$env:SONARQUBE_SCANNER_PARAMS" -replace '"sonar.branch.name":"[\w,/,-]*"\,?'
            Write-Host "##vso[task.setvariable variable=SONARQUBE_SCANNER_PARAMS]$params"
        # Run Code Analysis task
        - task: SonarQubeAnalyze@5
        # Publish Quality Gate Result task
        - task: SonarQubePublish@5
          inputs:
            pollingTimeoutSec: '300'
    

    That’s literally it. I experimented with a lot of different settings, but, in the end, it came down to simply setting sonar.cs.opencover.reportsPaths in the extraProperties input of the SonarQubePrepare task.

    SonarQube Success!

    Sample SonarQube Report

    In the small project that I tested, I was able to get analysis and code coverage published to my SonarQube instance. Unfortunately, this means that I now have technical debt to fix and unit tests to write in order to improve my code coverage, but, overall, this was a very successful venture.

  • MMM-PrometheusAlerts: Display Alerts in Magic Mirror

    I have had MagicMirror running for about a year now, and I love having it in my office. A quick glance gives my family and me a look at information that is relevant for the days ahead. As I continue my dive into Prometheus for monitoring, it occurred to me that I might be able to create a new module for displaying Prometheus Alerts.

    Current State

    Presently, my Magic Mirror configuration uses the following modules:

    Creating the Prometheus Alerts module

    In recent weeks, my experimentation with Mimir has led me to write some alerts to keep tabs on things in my Kubernetes cluster and, well, the overall health of my systems. Currently, I have a personal Slack team with an alerts channel, and that has been working nicely. However, as I stared at my office panel, it occurred to me that there should be a way to gather these alerts and show them in Magic Mirror.

    Since Grafana Mimir is Prometheus-compatible, I should be able to use the Prometheus APIs to get alert data. A quick Google search yielded the HTTP API for Prometheus.

    With that in hand, I copied the StatusPage IO module’s code and got to work. In many ways, the Prometheus Alerts data is simpler than Status Page’s, since it is a single collection of alerts with labels and annotations. So I stripped out some of the extra handling for Status Page Components, renamed a few things, and after some debugging, I have a pretty good MVP.

    What’s next?

    It’s pretty good, but not perfect. I started adding some issues to the GitHub repository for things like message templating and authentication, and when I get around to adding authentication to Grafana Mimir and Loki, well, I’ll probably need to update the module.

    Watch the Github repository for changes!

  • A little open source contribution

    The last month has been chock full of things I cannot really post about publicly, namely, performance reviews and security remediations. And while the work front has not been kind to public posts, I have taken some time to contribute back a bit more to the Magic Mirror project.

    Making ToDo Better

    Thomas Bachmann created MMM-MicrosoftToDo, a plugin for the Magic Mirror that pulls tasks from Microsoft’s ToDo application. Since I use that app for my day-to-day tasks, it would be nice to see my tasks up on the big screen, as it were.

    Unfortunately, the plugin used the old beta version of the APIs, as well as the old request module, which has been deprecated. So I took the opportunity to fork the repo and make some changes. I submitted a pull request to the owner; hopefully it makes its way into the main plugin. But, for now, if you want my changes, check them out here.

    Making StatusPage better

    I also took the time to improve on my StatusPage plugin, adding the ability to ignore certain components and removing components from the list when they are a part of an incident. I also created a small enhancement list for some future use.

    With the holidays and the rest of my “non-public” work taking up my time, I would not expect too much from me for the rest of the year. But I’ve been wrong before…

  • Simple Site Monitoring with Raspberry PI and Python

    My off-hours time this week has been consumed by writing some Python scripts to help monitor uptime for some of my sites.

    Build or Buy?

    At this point in my career, “build or buy” is a question I ask more often than not. As a software engineer, there is no shortage of open source and commercial solutions for almost any imaginable task. Web site monitoring is no different: tools such as StatusCake, Pingdom, and LogicMonitor offer hosted platforms, while tools like Nagios and PRTG offer on-premise installations. There is so much to choose from, it’s hard to decide.

    I had a few simple requirements:

    • Simple at first, but expandable as needed.
    • Runs on my own network so that I can monitor sites that are not available outside of my firewall.
    • Since most of my servers are virtual machines consolidated on the one lab server, it does not make much sense to put monitoring on that same server. I needed something I could run easily with little to no power draw.

    Build it is!

    I own a few Raspberry Pis, but the Model 3B and 4B are currently in use. The lone unused Pi is an old Model B (i.e., model 1B), so installing something like Nagios would be, well, unusable when all was said and done. Given that the Raspberry Pi is at home with Python, I thought I would dust off my “language learning skills” and figure out how to make something useful.

    As I started, though, I remembered my free version of Atlassian’s Status Page. Although the free version limits the number of subscribers and offers no text subscriptions, for my usage, it’s perfect. And, near and dear to my developer heart, it has a very well defined set of APIs for managing statuses and incidents.

    So, with Python and some additional modules, I created a project that lets me do a quick request on a desired website. If the website is down, the Status Page status for the associated component is changed and an incident is created. If/when it comes back up, any open incidents associated with that component are closed.
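
    A minimal sketch of the idea, using the requests module: the page ID, component ID, token, and site URL are placeholders, and incident handling is omitted for brevity, but the component status call follows Status Page’s documented API:

    import requests

    STATUSPAGE_API = "https://api.statuspage.io/v1/pages/{page_id}/components/{component_id}"
    HEADERS = {"Authorization": "OAuth YOUR_STATUSPAGE_API_KEY"}  # placeholder API key


    def set_component_status(page_id: str, component_id: str, status: str) -> None:
        """Set a Status Page component to 'operational', 'major_outage', etc."""
        url = STATUSPAGE_API.format(page_id=page_id, component_id=component_id)
        response = requests.patch(url, headers=HEADERS, json={"component": {"status": status}})
        response.raise_for_status()


    def check_site(url: str, page_id: str, component_id: str) -> None:
        """Request the site and update the associated component's status."""
        try:
            site_up = requests.get(url, timeout=10).status_code < 500
        except requests.RequestException:
            site_up = False
        set_component_status(page_id, component_id, "operational" if site_up else "major_outage")


    if __name__ == "__main__":
        check_site("https://example.com", "PAGE_ID", "COMPONENT_ID")  # placeholders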

    Voilà!

    After a few evening hours tinkering, I have the scripts doing some basic work. For now, a cron job executes the script every 5 minutes, and if a site goes down it is reported to my statuspage site.

    Long term, I plan on adding support for more in-depth checks of my own projects, which utilize .Net’s HealthChecks namespace to report service health automatically. I may also look into setting up the scripts as a service running on the Pi.

    If you are interested, the code is shared on Github.

  • MS Teams Notifications Plugin

    I have spent the better part of the last 20 years working on software in one form or another. During that time, it’s been impossible to avoid open source software components.

    I have not, until today, contributed back to that community in a large way. Perhaps I’ve suggested a change here or there, but never really took the opportunity to get involved. I suppose my best excuse is that I have a difficult time finding a spot to jump in and get to work.

    About two years ago, I ported a Teamcity Plugin for Slack notifications to get it to work with Microsoft Teams. It’s been in use at my current company since then, and it has a few users who happened to have found it on GitHub. I took the step today of publishing this plugin to JetBrains’ plugin library.

    So, here’s to my inaugural open source publication!