As I approach the five-year anniversary of this blog, I got to wondering what my posting frequency looks like and how it might affect my overall readership. In examining that data, I learned an important lesson about content and traffic: if you build it, they will come.
My Posting Habits
Posts per Month, 2018-2022
As you can tell from the above graph, for the first, oh, three years, my posting was sporadic at best. In the blog's first 36 months, I posted a total of 16 times, averaging about 0.4 posts per month. Even worse, 9 of those posts landed within a span of 3 months, meaning a little over half of my posts were generated in what amounts to 8% of the total time period.
With some inspiration from my former CTO, Raghu Chakravarthi, I committed to both steadying and increasing my writing in June of 2021. The numbers show that initiative: in the 19 months from June 2021 to December 2022, I posted 54 times, raising my average to 2.8 posts per month. In addition to writing more, I started creating LinkedIn posts to go along with my blog posts, in the hopes of driving some traffic to the site.
People!!
I have the free version of Google Analytics hooked up to get an idea of traffic month over month. While I do not get a lot of history, I can say that, in recent months, my site traffic continues to grow, even if it is only by 40-50 unique users per month.
Most of this traffic, though, has been generated not through my LinkedIn posts, but through Google searches. My top channels in the last two months are, far and away, organic search:
Top Channels for mattgerega.com
This tells me that most people find my site through a Google search, not my LinkedIn posts.
Increasing Visibility
My experience with my home blog has inspired a change in how I approach the visibility of my architects at work. I manage a handful of architects, and the team has asked several times for conduits to inform people about our current research and design work. In the past, we tossed around ideas such as a blog or a newsletter to get people interested. However, we never focused on the content.
If my home blogging has taught me nothing else, it is that content creation is sometimes a matter of quantity, not quality. Not everything I do is a Pulitzer Prize-winning piece of journalism… OK, NOTHING I write is at that level, but you get the idea: sometimes it is simply about putting the content out there and then identifying what interests people. When I switched my blogging to focus less on perfection and more on information sharing, I was able to increase the amount of content I create. That, in turn, allowed me to reach different people.
So, if it worked at home, why not try it at work? My team is in the middle of creating content based on their work. It does not have to be perfect, but we all share a goal of posting to our Confluence instance at least once every two weeks. Hopefully, through this content generation, and some promotion by yours truly, we can start to increase the visibility of our team's work to those outside the team.
My last few posts have centered around adding some code linting and analysis to C# projects. Most of this has been to identify some standards and best practices for my current position.
During this research, I came across SonarCloud, SonarQube's hosted offering. SonarCloud is free for open source projects, and given the breadth of languages it supports, I have decided to start adding my open source projects to it. This will give my open source code some extra visibility and provide me with a great sandbox for evaluating SonarQube for corporate use.
I added Sonar analysis to a GitHub Actions pipeline for my Hyper-V Info API. You can see the Sonar analysis on SonarCloud.io.
The great part?? All the code is public, including the GitHub Actions pipeline. So, feel free to poke around and see how I made it work!
In a previous post, I had a to-do list that included managing my Hyper-V VMs so that they did not all start at once. I realized today that I never explained what I was able to do or post the code for my solution. So today, you get both.
And, for the impatient among you, my repository for this API is on GitHub.
Managing VMs with Powershell
My plan of attack was something like this:
Organize Virtual Machines into “Startup Groups” which can be used to set Automatic Start Delays
Using the group and an offset within the group, calculate a start delay and set that value on the VM.
PowerShell's Hyper-V module is a powerful and fairly easy way to interact with the Hyper-V services on a particular machine. The module had all of the functionality I needed to implement my plan, including the ability to modify the Notes of a VM. I am storing JSON in the Notes field to denote the start group and the offset within the group, and PowerShell's built-in JSON conversion makes quick work of retrieving that data from the VM's Notes field and converting it into an object.
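To illustrate the idea, here is a minimal sketch of that logic outside the API; the 120/30-second spacing is an arbitrary example, not necessarily the math my service uses:

```powershell
# Minimal sketch: read the JSON stored in each VM's Notes field and use it
# to compute and apply an automatic start delay (in seconds).
Import-Module Hyper-V

foreach ($vm in Get-VM) {
    if ([string]::IsNullOrWhiteSpace($vm.Notes)) { continue }

    $settings = $vm.Notes | ConvertFrom-Json
    $delay = ($settings.startGroup * 120) + ($settings.delayOffset * 30)

    Set-VM -VM $vm -AutomaticStartDelay $delay
}
```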
Creating the API
For the API, this seemed an appropriate time to try out Minimal APIs in ASP.NET Core 6. Minimal APIs are Microsoft's approach to building APIs fast, without all the boilerplate code that sometimes comes with .NET projects. This project only has three endpoints (plus maybe some test/debug ones) and a few services, so it seemed a good candidate.
Without getting into the details, I was pleased with the approach, although scaling it requires implementing standards that, in the end, would have you re-designing the notion of controllers as it exists in a typical API project. So, while Minimal APIs are great for small, agile APIs, if you expect your API to grow, stick with controller-structured APIs.
Hosting the API
The server I am using as a Hyper-V hypervisor runs a version of Microsoft Hyper-V Server, which means it has a limited feature set that does not include Internet Information Services (IIS). Even if it did, I want to keep the hypervisor focused on running VMs. However, in order to manage the VMs, the easiest path is to put the API on the hypervisor.
With that in mind, I went about configuring this API to run within a Windows Service. That allowed me to ensure the API was running through standard service management (instead of as a console application) but still avoid the need for a heavy IIS install.
I installed the service using one of the methods described in How to: Install and uninstall Windows services. However, for proper access, the service needs to run as a user with PowerShell access and rights to modify the VMs.
I created a new domain user and granted it the ability to perform a service log on via local security policy. See Enable Service Logon for details.
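For reference, here is a rough sketch of the install using New-Service; the service name, binary path, and account below are placeholders rather than my actual values:

```powershell
# Sketch only: names, paths, and the service account are placeholders.
$cred = Get-Credential -UserName 'MYDOMAIN\svc-hyperv-api' -Message 'Service account password'

New-Service -Name 'HyperVInfoApi' `
    -BinaryPathName 'C:\Services\HyperVInfoApi\HyperVInfoApi.exe' `
    -DisplayName 'Hyper-V Info API' `
    -StartupType Automatic `
    -Credential $cred

Start-Service -Name 'HyperVInfoApi'
```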
Prepping the VMs
The API does not, at the moment, pre-populate the Notes field with JSON settings, so I went through my VM list and added the following JSON snippet to each one:
{"startGroup": 0,"delayOffset": 0}
I chose a startGroup value based on the VM’s importance (Domain Controllers first, then data servers, then Kubernetes nodes, etc), and then used the delayOffset to further stagger the start times.
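Until the API can pre-populate the Notes itself, a one-off PowerShell snippet like this does the trick (the VM name and values are just examples):

```powershell
# Example values only: a domain controller in the first start group.
$settings = @{ startGroup = 0; delayOffset = 0 } | ConvertTo-Json -Compress
Set-VM -Name 'DC01' -Notes $settings
```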
All this for an API call
Once each VM had its initialization data, I made a call to /vm/refreshdelay and voilà! Each VM's AutomaticStartDelay gets set based on its startGroup and delayOffset.
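If you want to poke at it yourself, the call is a one-liner; the host, port, and HTTP verb here are guesses on my part, so check the repository for the actual route definition:

```powershell
# Host, port, and verb are placeholders; see the repository for the real route.
Invoke-RestMethod -Method Post -Uri 'http://hyperv01:5000/vm/refreshdelay'
```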
There is more to do (see my to-do list in my previous post for other next steps), but since I do not typically provision many machines, this one usually falls to a lower spot on the priority list. So, well, I apologize in advance if you do not see more work on this for another six months.
I spent more time than I care to admit trying to get the proper configuration for reporting code coverage to both the Azure DevOps pipeline and SonarQube. The solution was, well, fairly simple, but it is worth writing down.
Testing, Testing…
After fumbling around with some of the linting and publishing to SonarQube's Community Edition, I succeeded in creating build pipelines that, when building from the main branch, run SonarQube analysis and publish the results to the project.
In my pipeline (the full snippet appears later in this post), the execute_sonar parameter allows me to execute the SonarQube steps only on the main branch, which keeps the Community Edition happy while retaining the rest of my pipeline definition on feature branches.
This configuration worked, and my project's analysis showed up in SonarQube.
Testing, Testing, 1, 2, 3…
I went about adding some trivial unit tests to verify that I could get code coverage publishing working. I have used Coverlet in the past, and it can generate various coverage report formats, so I added it to the project using the simple collector.
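If you have not wired Coverlet in before, it amounts to a package reference and a switch on dotnet test (the project path below is a placeholder):

```powershell
# Add the Coverlet collector to the test project (project path is a placeholder).
dotnet add .\MyApi.Tests\MyApi.Tests.csproj package coverlet.collector

# Run tests with the collector enabled; this produces a coverage report per run.
dotnet test --collect:"XPlat Code Coverage"
```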
Within my build pipeline, I added a DotNetCoreCLI@2 test task at the point marked "### Build and test here" in the full pipeline snippet later in this post. Out of the box, this worked well: tests were executed, and both test and coverage information were published to Azure DevOps. Apparently, the DotNetCoreCLI@2 task defaults publishTestResults to true.
While the tests ran, the coverage was not published to SonarQube. I had hoped that the SonarQube extension for Azure DevOps would pick it up, but, alas, that was not the case.
Coverlet, DevOps, and SonarQube… oh my.
As it turns out, you have to tell SonarQube explicitly where to find the coverage report. And, while the SonarQube documentation is pretty good at describing how to report coverage from the scanner CLI, the Azure DevOps integration documentation does not spell out how to accomplish this. Also, while Azure DevOps recognizes Cobertura coverage reports, SonarQube prefers OpenCover reports.
I had a few tasks ahead of me:
Generate my coverage reports in two formats
Get a specific location for the coverage reports in order to pass that information to Sonarqube.
Tell the Sonarqube analyzer where to find the opencover reports
Generating Multiple Coverage Reports
As mentioned, I am using Coverlet to collect code coverage. Coverlet supports RunSettings files, which help standardize various settings within the test project and, in this case, let me generate coverage reports in two formats at once. I created a coverlet.runsettings file in my test project's directory that tells the collector to emit both Cobertura and OpenCover reports.
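Here is a sketch of what such a file can look like, per Coverlet's VSTest integration docs (written as a PowerShell here-string so it is easy to drop in place; the test project path is a placeholder):

```powershell
# Write a coverlet.runsettings that asks the collector for both report formats.
# The test project path is a placeholder.
$runsettings = @'
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="XPlat code coverage">
        <Configuration>
          <Format>opencover,cobertura</Format>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>
'@
Set-Content -Path .\MyApi.Tests\coverlet.runsettings -Value $runsettings
```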
There are a number of Coverlet settings that can be configured in the runsettings file; the full list can be found in the VSTest Integration documentation.
Getting Test Result Files
As mentioned above, the DotNetCoreCLI@2 task defaults publishTestResults to true. This setting adds arguments to the test command to output TRX logging and set a results directory, which means I am not able to specify a results directory on my own.
Even specifying the directory myself does not fully solve the problem: running the tests with coverage and TRX logging generates a TRX file named after the username, computer name, and timestamp, plus two sets of coverage reports: one stored under a folder with that same username/computer name/timestamp name, and the other under a random GUID.
To ensure I only pulled one set of results, and that Azure DevOps did not complain, I restructured my test execution. I now run the dotnet test command with custom arguments to log TRX output and set my own results directory. A PowerShell script then uses the name of the TRX file to find the coverage files and copies them into a ResultFiles folder. Finally, I added tasks to publish the test results and code coverage results to the Azure DevOps pipeline.
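A rough sketch of that sequence as script steps; the paths and folder-matching logic are an approximation of what I do, not a copy of my pipeline:

```powershell
# Run tests with the Coverlet runsettings, TRX logging, and a known results directory.
$resultsDir = Join-Path $env:AGENT_TEMPDIRECTORY 'TestResults'
dotnet test --settings .\coverlet.runsettings --logger trx --results-directory $resultsDir

# Use the newest TRX file's name to locate its coverage folder, then copy the
# coverage reports into a single, predictable ResultFiles folder.
$trx = Get-ChildItem -Path $resultsDir -Filter *.trx |
    Sort-Object LastWriteTime |
    Select-Object -Last 1

$target = Join-Path $env:AGENT_TEMPDIRECTORY 'ResultFiles'
New-Item -ItemType Directory -Path $target -Force | Out-Null

Get-ChildItem -Path (Join-Path $resultsDir $trx.BaseName) -Recurse -Filter 'coverage.*.xml' |
    Copy-Item -Destination $target
```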
Pushing coverage results to Sonarqube
I admittedly spent a lot of time chasing down what, in reality, was a very simple change:
```yaml
- ${{ if eq(parameters.execute_sonar, true) }}:
  # Prepare Analysis Configuration task
  - task: SonarQubePrepare@5
    inputs:
      SonarQube: ${{ parameters.sonar_endpoint_name }}
      scannerMode: 'MSBuild'
      projectKey: ${{ parameters.sonar_project_key }}
      extraProperties: |
        sonar.cs.opencover.reportsPaths="$(Agent.TempDirectory)/ResultFiles/coverage.opencover.xml"

### Build and test here

- ${{ if eq(parameters.execute_sonar, true) }}:
  - powershell: |
      $params = "$env:SONARQUBE_SCANNER_PARAMS" -replace '"sonar.branch.name":"[\w,/,-]*"\,?'
      Write-Host "##vso[task.setvariable variable=SONARQUBE_SCANNER_PARAMS]$params"
  # Run Code Analysis task
  - task: SonarQubeAnalyze@5
  # Publish Quality Gate Result task
  - task: SonarQubePublish@5
    inputs:
      pollingTimeoutSec: '300'
```
That's literally it. I experimented with a lot of different settings, but, in the end, it came down to simply setting sonar.cs.opencover.reportsPaths in the extraProperties input of the SonarQubePrepare task.
SonarQube Success!
Sample SonarQube Report
In the small project that I tested, I was able to get analysis and code coverage published to my SonarQube instance. Unfortunately, this means that I now have technical debt to fix and unit tests to write in order to improve my code coverage, but, overall, this was a very successful venture.
Among the JavaScript/TypeScript community, ESLint and Prettier are very popular ways to enforce standards and formatting within your code. In trying to find similar functionality for C#, I did not find anything as ubiquitous as ESLint/Prettier, but there are some front-runners.
Roslyn Analyzers and Dotnet Format
John Reilly has a great post on enabling Roslyn Analyzers in your .NET applications. He also posted some instructions on using the dotnet format tool as a “Prettier for C#” tool.
I will not bore you by re-hashing his posts, but following them allowed me to apply some basic formatting and linting rules to my projects. Additionally, the Roslyn Analyzers can be made to generate build warnings and errors, so any build worth its salt (one that fails on warnings) will be free of undesirable code.
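The format side is short enough to show here; the solution path is a placeholder, and --verify-no-changes is the .NET 6-era name for the older --check switch:

```powershell
# Reformat the solution in place.
dotnet format .\MySolution.sln

# In CI, fail instead of fixing, so unformatted code breaks the build.
dotnet format .\MySolution.sln --verify-no-changes
```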
SonarLint
I was not really content to stop there, and a quick Google search led me to an interesting article around linting options for C#. One of those was SonarLint. While SonarLint bills itself as an IDE plugin, it has a Roslyn Analyzer package (SonarAnalyzer.CSharp) that can be added and configured in a similar fashion to the built-in Roslyn Analyzers.
Following the instructions in the article, I installed SonarAnalyzer.CSharp and configured it alongside the base Roslyn Analyzers. It produced a few more warnings, particularly around some Sonar best practices that go beyond what the Microsoft standards cover.
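Adding the analyzer is a single package reference per project (the project path below is a placeholder), and the rules can then be tuned like any other Roslyn analyzer:

```powershell
# Pull the Sonar rules into the build as a Roslyn analyzer package.
dotnet add .\MyApi\MyApi.csproj package SonarAnalyzer.CSharp
```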
SonarQube, my old friend
Getting into SonarLint brought me back to SonarQube. What seems like forever ago, but was really only a few years ago, SonarQube was something of a go-to tool in my position. We had hoped to gather a portfolio-wide view of our bugs, vulnerabilities, and code smells. For one reason or another, we abandoned that particular tool set.
After putting SonarLint in place, I was interested in jumping back in, at least in my home lab, to see what kind of information I could get out of Sonar. I found the Kubernetes instructions and got to work setting up a quick instance on my production cluster, alongside my ProGet instance.
Once it was installed, I have to say, the application has done well to improve the user experience. Tying into my Azure DevOps instance was quick and easy, with very good in-application tutorials for that configuration. I set up a project based on the pipeline for my test application, made my pipeline changes, and waited for results…
Failed! I kept getting errors about not being allowed to set the branch name in the Community Edition. That is fair, and for my projects I only really need analysis on the main branch, so I set up analysis to happen only on builds of main. Failed again!
There seems to be a known issue around this, but thanks to the SonarSource community, I found a workaround for my pipeline. With that in place, I had my code analysis, but, well, what do I do with it? Well, I can add quality gates to fail builds based on missing code coverage, tweak my rule sets, and have a “portfolio-wide” view of my private projects.
Setting the Standard
For open source C# projects, simply building the linting and formatting into the build/commit process might be enough. If project maintainers are so inclined, they can add their projects to SonarCloud and get the benefits of SonarQube (including quality gates).
For enterprise customers, the move to a paid tier depends on how much visibility you want in your code base. Sonar can be an expensive endeavor, but provides a lot of quality and tech debt tracking that you may find useful. My suggestion? Start with a trial or the community version, and see if you like it before you start requesting budget.
Either way, setting standards for formatting and analysis on your C# projects makes contributions across teams much easier and safer. I suggest you try it!
My first “owned” open source project was a TeamCity plugin that sent notifications to Microsoft Teams based on build events in TeamCity. It was based on a similar TeamCity plugin for Slack.
Why? Well, out of necessity. Professionally, we were migrating to MS Teams, and we wanted to post messages when builds failed or succeeded. So I copied the Slack notifier, made the requisite changes, and it worked well enough to publish. I even went the extra mile of adding some GitHub Actions to build and deploy, so that I could fix Dependabot security issues quickly.
Fast-forward five years: both professionally and personally, I have moved toward Azure DevOps and GitHub Actions for builds. Why? Well, the core of the two is essentially the same, as Microsoft has melded them together. For open source projects on GitHub, Actions is the de facto standard, and for my lab instance of Azure DevOps, it makes transitioning lab work into professional recommendations much easier. But none of this uses TeamCity.
Additionally, I have spent the majority of my professional career in C/C++/C#. Java is not incredibly different at its core, but add in Maven, Spring, and the other tag-alongs that come with TeamCity plugin development, and I was well out of my league. And while I have expanded into the various JavaScript languages and frameworks, I have never had a reason to dive deep into Java.
So, with that, I am officially deprecating this plugin. Truthfully, I have not done much in the repository recently, so this should not be a surprise. However, I wanted to formally do this so that anyone who may want to take it over (or start over, if they so desire) can do so. I will gladly turn over ownership of the code to someone willing to spend their time to improve it.
To those who use the plugin: I appreciate all of the support from the community, and I apologize for not doing this sooner: perhaps someone will take the reins and bring the plugin up to the state it deserves.
I have had this post in draft for almost a month now. I had planned to include statistics around the amount of data that humans are generating (it is a lot) and how we are causing some of our own problems by having too much data at our fingertips.
What I realized is, a lengthy post about information overload is, well, somewhat oxymoronic. If you would like to learn about the theory, check it out. We are absolutely generating more data than could possibly be used. This came to the forefront as I investigated my metrics storage in my Grafana Mimir instance.
I got a lot of… data
Right now, I am collecting over 300,000 series worth of data. That means there are about 300,000 unique streams of data, each with a data point roughly every 30 seconds. On average, that takes up about 35 GB of disk space per month.
How many of those do I care about? Well, as of this moment, about 7. I have alerts to monitor when applications are degraded, when I am dropping logs, when some system temperatures get too high, and when my Ring Doorbell battery is low.
Now, I continue to find alerts to write that are helpful, so I anticipate expanding beyond 7. However, there is almost no way that I am going to have alerts across 300,000 series: I simply do not care about some of this data. And yet, I am storing it, to the tune of about 35 GB worth of data every month.
What to do?
For my home lab, the answer is relatively easy: I do not care about data older than 3 months, so I can set up retention rules and clean some of this up. But in business, retention rules become a question of legal and contractual obligations.
In other words, in business, not only are we generating a ton of data, but we can be penalized for not having the data that we generated, or even, not generating the appropriate data, such as audit histories. It is very much a downward spiral: the more we generate, the more we must store, which leads to larger and larger data stores.
Where do we go from here?
We are overwhelming ourselves with data, and it is arguably causing problems across business, government, and general interpersonal culture. The problem is not getting any better, and there really is not a clear solution. All we can do is attempt to be smart data consumers. So before you take that random Facebook ad as fact, maybe do a little more digging to corroborate. In the age where anyone can be a journalist, everyone has to be a journalist.
I have set up an instance of Home Assistant as the easiest front end for interacting with my home automation setup. While I am using the Universal Devices ISY994 as the primary communication hub for my Insteon devices, Home Assistant provides a much nicer interface for my family, including a great mobile app for them to use the system.
With my foray into monitoring, I started looking around to see if I could get some device metrics from Home Assistant into my Grafana Mimir instance. It turns out there is a Prometheus integration built right into Home Assistant.
Read the Manual
Most of my blog posts are “how to” style: I find a problem that maybe I could not find an exact solution for online, and walk you through the steps. In this case, though, it was as simple as reading the configuration instructions for the Prometheus integration.
ServiceMonitor?
Well, almost that easy. I have been using ServiceMonitor resources within my clusters, rather than setting up explicit scrape configs. Generally, this is easier to manage, since I just install the Prometheus operator, and then create ServiceMonitor instances when I want Prometheus to scrape an endpoint.
The Home Assistant Prometheus endpoint requires a token, however, and I did not have the desire to dig into configuring a ServiceMonitor with an appropriate secret. For now, it is a to-do on my ever-growing list.
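To illustrate what the scrape needs, here is a hand-rolled pull of the metrics endpoint using a long-lived access token (the host and token are placeholders); a ServiceMonitor would need the same bearer token supplied from a Kubernetes secret:

```powershell
# Host and token are placeholders; Home Assistant serves Prometheus metrics
# at /api/prometheus and expects a long-lived access token.
$token = 'YOUR_LONG_LIVED_ACCESS_TOKEN'
Invoke-RestMethod -Uri 'http://homeassistant.local:8123/api/prometheus' `
    -Headers @{ Authorization = "Bearer $token" }
```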
What can I do now?
This integration has opened up a LOT of new alerts on my end. Home Assistant talks to many of the devices in my home, including lights and garage doors. This means I can write alerts for when lights go on or off, when the garage door goes up or down, and, probably the best, when devices are reporting low battery.
The first alert I wrote notifies me when my Ring Doorbell battery drops below 30%. Couple that with my Prometheus Alerts module for MagicMirror, and I now get a display when the battery needs changing.
What’s Next?
I am giving back to the community. The Prometheus integration for Home Assistant does not currently report cover statuses. Covers are things like shades or, in my case, garage doors. Since I would like to be able to alert when the garage door is open, I am working on a pull request to add cover support to the Prometheus integration.
It also means I would LOVE to get my hands on some automated shades/blinds… but that sounds really expensive.
As we develop more containers to run in Kubernetes, we encounter non-HTTP workloads. I came across one such workload: a non-HTTP processor for queued events. In .NET, I used the IHostedService offerings to run a simple service in a container to do this work.
However, when it came time to deploy to Kubernetes, I quickly realized that my standard liveness/health checks would not work for this container. I searched around, and the HealthChecks libraries are limited to ASP.NET Core. Not wanting to bloat my image, I looked for some alternatives. My Google searches led me to Bruce Lee.
No, not Bruce Lee the actor, but Bruce Lee Harrison. Bruce published a library called TinyHealthCheck, which provides the ability to add lightweight health check endpoints without dragging in the entire ASP.NET Core stack.
While it seems a pretty simple concept, it solved an immediate need of mine with minimal effort. Additionally, there was a sample and documentation!
Why call this out? Many developers use open source software to solve these types of problems, and I feel the maintainers deserve a little publicity for their efforts. So, thanks to the contributors to TinyHealthCheck; I will certainly watch the repository and contribute as I can.
I have had MagicMirror running for about a year now, and I love having it in my office. A quick glance gives my family and me a look at information that is relevant for the days ahead. As I continue my dive into Prometheus for monitoring, it occurred to me that I might be able to create a new module for displaying Prometheus alerts.
Current State
Presently, my Magic Mirror configuration uses the following modules:
In recent weeks, my experimentation with Mimir has led me to write some alerts to keep tabs on things in my Kubernetes cluster and, well, the overall health of my systems. Currently, I have a personal Slack team with an alerts channel, and that has been working nicely. However, as I stared at my office panel, it occurred to me that there should be a way to gather these alerts and show them on the MagicMirror.
Since Grafana Mimir is Prometheus-compatible, I should be able to use the Prometheus APIs to get alert data. A quick Google search yielded the HTTP API for Prometheus.
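The endpoint the module polls is simple enough to poke at by hand; the Mimir host and path prefix below are placeholders for wherever your Prometheus-compatible API lives:

```powershell
# Placeholder host; any Prometheus-compatible server exposes current alerts here.
$response = Invoke-RestMethod -Uri 'http://mimir.example.com/prometheus/api/v1/alerts'

# Each alert carries labels, annotations, and a state (pending/firing).
$response.data.alerts | Select-Object state, labels, annotations
```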
With that in hand, I copied the StatusPage IO module's code and got to work. In many ways, the Prometheus alerts are simpler than Status Page, since they are a single collection of alerts with labels and annotations. So I stripped out some of the extra handling for Status Page components, renamed a few things, and, after some debugging, I have a pretty good MVP.
What’s next?
It’s pretty good, but not perfect. I started adding some issues to the GitHub repository for things like message templating and authentication, and when I get around to adding authentication to Grafana Mimir and Loki, well, I’ll probably need to update the module.