I have looked this up at least twice this year. Maybe if I write about it, it will stick with me. If it doesn’t, well, at least I can look here.
Options Pattern
The Options pattern is a set of interfaces that allows you to read configuration into classes in your ASP.NET application. It lets you build strongly typed options classes with default values and validation attributes, and it removes most of the “magic strings” that come along with reading configuration settings directly. I will do you all a favor and not regurgitate the documentation, but rather leave a link so you can read all about the pattern.
A Small Sample
Let’s assume I have a small class called HostSettings to store my options.
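A minimal sketch of such a class might look like the following; the Host and Port properties mirror the settings below, and the SectionName constant is something I come back to in a moment (treat the exact shape as an approximation of my real class):

```csharp
public class HostSettings
{
    // Name of the section in appsettings.json that holds these values
    public const string SectionName = "HostSettings";

    public string Host { get; set; } = "http://0.0.0.0";
    public int Port { get; set; } = 5000;
}
```

The matching section in appsettings.json then looks like this: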
{"HostSettings": {"Host": "http://0.0.0.0","Port": 5000 },/// More settings here}
Using Dependency Injection
For whatever reason, I always seem to remember how to configure options using the dependency injector. Assuming the above, adding options to the store looks something like this:
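Here is a minimal sketch using the .NET 6+ WebApplicationBuilder style; if you are still on the older Startup.ConfigureServices pattern, the Configure&lt;T&gt; call is the same, just made against services and Configuration directly:

```csharp
// Program.cs: bind the "HostSettings" section to the HostSettings class
builder.Services.Configure<HostSettings>(
    builder.Configuration.GetSection(HostSettings.SectionName));
```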
From here, to get HostSettings into your class, add an IOptions&lt;HostSettings&gt; parameter to the constructor and access the options through its Value property.
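For example, a hypothetical service that needs the host and port might look like this:

```csharp
using Microsoft.Extensions.Options;

public class PingService
{
    private readonly HostSettings _settings;

    // The DI container injects the options wrapper; Value holds the bound instance
    public PingService(IOptions<HostSettings> options)
    {
        _settings = options.Value;
    }

    public string BuildUrl() => $"{_settings.Host}:{_settings.Port}";
}
```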
Yup. That’s it. Seems silly that I forget that, but I do. Pretty much every time I need to use it.
A Note on SectionName
You may notice the SectionName constant that I add to the class that holds the settings. This allows me to keep the name/location of the settings in the appsettings.json file within the class itself.
Since I only have a few classes which house these options, I load them manually. It would not be a stretch, however, to create a simple interface and use reflection to load options classes dynamically. It could even be encapsulated into a small package for distribution across applications… Perhaps an idea for an open source package.
A lot has been made in recent weeks about open source and its effects on all that we do in software. And while we all debate the ethics of Hashicorp’s decision to turn to a “more closed” licensing model and question the subsequent fork of their open source code, we should remember that there are companies who offer their cloud solutions free for open source projects.
But first, Github
Github has long been the mecca for open source developers, and even under Microsoft’s umbrella, that does not look to be slowing down. Things like CI/CD through Github Actions and package storage are free for public repositories. So, without paying a dime, you can store your open source code, get automatic security and version updates, build your code, and store build artifacts all in Github. All of this is built on the back of a great ecosystem for pull request reviews and checks. For my open source projects, it provides great visibility into my code and puts MOST of what I want in one place.
And then SonarQube/Cloud
SonarSource’s SonarQube offering is a great way to get static code analysis on your code. While the self-hosted community edition lacks features that require a commercial license, their cloud offering provides free analysis of open source projects.
With that in mind, I have started to add my open source projects to SonarCloud.io. Why? Well, first, it does give me some insight into where my code could be better, which keeps me honest. Second, on the off chance that anyone wants to contribute to my projects, the Sonar analysis will help me quickly determine the quality of the incoming code before I accept the PR.
Configuring the SonarCloud integration with Github even provides a sonarcloud bot that reports on the quality gate for pull requests. What does that mean? It means I get a great picture of the quality of the incoming code.
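If you drive the analysis from a GitHub Actions workflow (SonarCloud can also run automatic analysis for some languages without CI), the wiring is roughly the sketch below. The action version, checkout depth, and triggers are just what I would reach for, and the project key and organization are expected to live in a sonar-project.properties file:

```yaml
# .github/workflows/sonarcloud.yml (sketch)
name: SonarCloud
on:
  push:
    branches: [main]
  pull_request:

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0 # full history gives better "new code" analysis
      - uses: SonarSource/sonarcloud-github-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
```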
What Next?
I have been spending a great deal of time on the Static Code Analysis side of the house, and I have been reasonably impressed with SonarQube. I have a few more public projects which will receive a SonarCloud instance, but at work, it is more about identifying the value that can come from this type of scanning.
So, what is that value, you may ask? Enhancing and automating your quality gates is always beneficial, as it streamlines your developer workflow. It also sets expectations: engineers know that bad or smelly code will be caught well before a pull request is merged.
If NOTHING else, SonarQube allows you to track your test coverage and ensure it does not trend backwards. If we did nothing else, we should at least make sure we keep covering the code we write now, even if those before us did not.
Over the past few years I have become more and more comfortable with Typescript, so I wanted to see if I could convert my MagicMirror modules to Typescript.
Finding an example
As is the case with most development, the first step was to see if someone else had done it. As it turns out, a few folks have done it.
I stumbled across Michael Scharl’s post on dev.to which covered his Typescript MagicMirror module. In the same search, I ran across a forum post by Jalibu that focused a little more on the nitty-gritty, including his contribution of the magicmirror-module types to DefinitelyTyped.
Migrating to Typescript
Ultimately, the goal was to generate the necessary module files for MagicMirror through transpilation using Rollup (see below), but first I needed to move my code and convert it to Typescript. I created a src folder, moved my module file and node_helper into there, and changed the extension to .ts.
From there, I split things up into a more logical layout, using Typescript and ESNext-style module imports. Since everything would be transpiled to Javascript anyway, I could lean on Typescript’s module options to clean up my code.
My modules already had a good amount of development packages around linting and formatting, so I updated all of those and added packages necessary for Typescript linting.
A Note on Typing
Originally, following Michael Scharl’s sample code, I had essentially copied the module-types.ts file from the MagicMirror repo and renamed it ModuleTypes.d.ts in my own code. I did not particularly like that method, as it required me to have extra code in my module, and I would have to update it as the MagicMirror types changed.
Jalibu‘s addition of the @types/magicmirror-module package simplified things greatly. I installed the package and imported what I needed.
A few tweaks to the tsconfig.json file, and the tsc command was running!
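For reference, the tweaks amounted to a compiler configuration roughly along these lines; the exact targets and flags here are a sketch rather than a copy of my file:

```json
{
  "compilerOptions": {
    "target": "ES2017",
    "module": "ESNext",
    "moduleResolution": "node",
    "esModuleInterop": true,
    "strict": true
  },
  "include": ["src/**/*.ts"]
}
```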
Using Rollup
The way that MagicMirror is set up, the modules generally need the following:
Core Module File, named after the module (<modulename>.js)
Node Helper (node_helper.js) that represents a Node.js backend task. It is optional, but I always seem to have one.
CSS file, if needed. Would contain any custom styling for the HTML generated in the Core Module file.
Michael Scharl’s post detailed his use of Rollup to create these files; however, as the post is a few years old, it required a few updates. Most of the work was installing the scoped Rollup packages (@rollup), but I also removed the banner plugin.
I configured my Rollup in a ‘one to one’ fashion, mapping my core module file (src/MMM-PrometheusAlerts.ts) to its output file (MMM-PrometheusAlerts.js) and my node helper (src/node_helper.ts) to its output file (node_helper.js). Rollup would use the Typescript transpiler to generate the necessary Javascript files, bringing in any of the necessary imports.
Taking a cue from Jalibu, I used the umd output format for node_helper, since it will be running on the backend, but iife for the core module, since it will be included in the browser.
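Put together, the Rollup configuration ends up roughly like the sketch below; the plugin list is illustrative (typescript is the important one for transpilation) rather than a verbatim copy of my config:

```js
// rollup.config.mjs (sketch)
import typescript from "@rollup/plugin-typescript";
import { nodeResolve } from "@rollup/plugin-node-resolve";
import commonjs from "@rollup/plugin-commonjs";

export default [
  {
    // Core module file: loaded by the browser, so bundle it as an IIFE
    input: "src/MMM-PrometheusAlerts.ts",
    output: { file: "MMM-PrometheusAlerts.js", format: "iife" },
    plugins: [nodeResolve(), commonjs(), typescript()],
  },
  {
    // Node helper: runs in the Node.js backend, so UMD is fine there
    input: "src/node_helper.ts",
    output: { file: "node_helper.js", format: "umd", name: "node_helper" },
    plugins: [nodeResolve(), commonjs(), typescript()],
  },
];
```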
Miscellaneous Updates
As I was looking at code that had not been touched in almost two years, I took the opportunity to update libraries. I also switched over to Jest for testing, as I am certainly more familiar with it, and I need the ability to mock to complete some of my tests. I also figured out how to implement a SASS compiler as part of rollup, so that I could generate my module CSS as well.
To make things easier on anyone who might use this module, I added a postinstall script that performs the build task. This generates the necessary Javascript files for MagicMirror using Rollup.
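In package.json that is just a pair of script entries; the build script name is mine and easy to change:

```json
{
  "scripts": {
    "build": "rollup -c",
    "postinstall": "npm run build"
  }
}
```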
One down, one to go
I converted MMM-PrometheusAlerts, but I still need to convert MMM-StatusPageIo. Sadly, the latter may require some additional changes, since StatusPage added paging to their APIs and I am not yet in full compliance… I’ve never had enough incidents that I needed to page. But it has been on my task list for a bit now, and moving to Typescript might give me the excuse I need to drop back in.
I have been running into some odd issues with ArgoCD not updating some of my charts, despite the Git repository having an updated chart version. As it turns out, my configuration, and my lack of Chart.lock files, seem to have been contributing to this inconsistency.
My GitOps Setup
I have a few repositories that I use as source repositories for Argo. They contain a mix of my own resource definition files, which are raw manifest files, and external Helm charts. The external Helm charts use an umbrella chart so that I can add supporting resources (like secrets). My Grafana chart is a great example of this.
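As a sketch of what I mean by an umbrella chart, the Grafana one boils down to a Chart.yaml that pulls the upstream chart in as a dependency (the version and repository shown are illustrative, not my exact pins), with my supporting resources living in the umbrella chart’s own templates folder:

```yaml
# Chart.yaml of the umbrella chart (sketch)
apiVersion: v2
name: grafana
version: 1.0.0
dependencies:
  - name: grafana
    version: "6.58.8"
    repository: https://grafana.github.io/helm-charts
```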
Prior to this, I was not including the Chart.lock file in the repository. That made it easier to update the version in the Chart.yaml file without having to run helm dependency update to refresh the lock file. I have been running this setup for at least a year, and I never really noticed much of a problem until recently. There were a few times where things would not update, but nothing systemic.
And then it got worse
More recently, however, I noticed that the updates weren’t taking. I saw the issue with both the Loki and Grafana charts: The version was updated, but Argo was looking at the old version.
I tried hard refreshes on the Applications in Argo, but nothing seemed to clear that cache. I poked around in the logs and noticed that Argo runs helm dependency build, not helm dependency update. That got me thinking “What’s the difference?”
As it turns out, build uses the Chart.lock file if it exists; otherwise it acts like update. update resolves the version constraints in Chart.yaml, pulls down the latest matching charts, and rewrites Chart.lock.
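In command form (the chart path is just an example):

```shell
# Re-resolves versions from Chart.yaml and rewrites Chart.lock
helm dependency update ./charts/grafana

# Reuses the committed Chart.lock if one exists; only when there is no lock
# file does it fall back to resolving straight from Chart.yaml
helm dependency build ./charts/grafana
```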
Since I was not committing my Chart.lock file, it stands to reason that somewhere in Argo there is a cached copy of a Chart.lock file that was generated by helm dependency build. Even though my Chart.yaml was updated, Argo was using the old lock file.
Testing my hypothesis
I committed a lock file 😂! Seriously, I ran helm dependency update locally to generate a new lock file for my Loki installation and committed it to the repository. And, even though that’s the only file that changed, like magic, Loki determined it needed an update.
So I need to lock it up. But why? Well, the lock file exists to ensure that subsequent builds use the exact versions you resolved, much like a lock file in npm or yarn. And just like npm and yarn, Helm requires an explicit command to move those pinned dependencies forward.
By not committing my lock file, the possibility exists that I could get a different version than I intended or, even worse, get a spoofed version of my package. The lock file maintains a level of supply chain security.
Now what?
Step 1 is to commit the missing lock files.
At both work and home I have Powershell scripts and pipelines that look for potential updates to external packages and create pull requests to get those updates applied. So step 2 is to alter those scripts to run helm dependency update when the Chart.yaml is updated, which will update the Chart.lock and alleviate the issue.
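Conceptually the change to those scripts is tiny; something like this hypothetical Powershell fragment:

```powershell
# After the script bumps the dependency version in Chart.yaml, refresh the
# lock file so it rides along in the same pull request ($chartPath is whatever
# chart the script is currently processing)
helm dependency update $chartPath
git add (Join-Path $chartPath "Chart.lock")
```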
I am also going to dig into ArgoCD a little bit to see where these generated Chart.lock values could be cached. In testing, the only way around it was to delete the entire ApplicationSet, so I’m thinking that the ApplicationSet controller may be hiding some data.
As I was upgrading WordPress from 6.2.2 to 6.3.0, I ran into a spot of trouble. Thankfully, ArgoCD rollback was there to save me.
It’s a minor upgrade…
I use the Bitnami WordPress chart as the template source for Argo to deploy my blog to one of my Kubernetes clusters. Usually, an upgrade is literally 1, 2, 3:
Get the latest chart version for the WordPress Bitnami chart. I have a Powershell script for that.
Commit the change to my ops repo.
Go into ArgoCD and hit Sync.
That last one caused some problems. Everything seemed to synchronize, but the WordPress pod stalled at the “connect to database” step. I tried restarting the pod, but nothing.
Now, the old pod was still running. So, rather than mess with it, I used Argo’s rollback functionality to roll the WordPress application back to its previous commit.
What happened?
I’m not sure. You are able to upgrade WordPress from the admin panel, but that comes at a potential cost: if you upgrade the database as part of the WordPress upgrade and then “lose” the pod, you lose the application upgrade but not the database upgrade, and you are left in a weird state.
So, first, I took a backup. Then I started poking around, trying to run the upgrade again. That’s when I ran into this error:
Unknown command "FLUSHDB"
I use the WordPress Redis Object Cache to get that little “spring” in my step. It seemed to be failing on the FLUSHDB command. At that point, I was stuck in a state where the application code was upgraded but the database was not. So I restarted the deployment and got back to 6.2.2 for both application code and database.
Disabling the Redis Cache
I tried to disable the Redis plugin, and got the same FLUSHDB error. As it turns out, the default Bitnami Redis chart disables these commands, but it would seem that the WordPress plugin still wants them.
So, I enabled the commands on my Redis instance (a quick change in the values files) and then disabled the Redis Cache plugin. After that, I was able to upgrade to WordPress 6.3 through the UI.
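The value in question is the Bitnami Redis chart’s disableCommands list, which ships with FLUSHDB and FLUSHALL disabled. A sketch of the override (the exact nesting depends on how the chart is pulled in, so treat this as illustrative):

```yaml
# values.yaml (sketch): re-enable the commands the Object Cache plugin wants
master:
  disableCommands: []
replica:
  disableCommands: []
```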
From THERE, I clicked Sync in ArgoCD, which brought my application pods up to 6.3 to match my database. Then I re-enabled the Redis Plugin.
Some research ahead
I am going to check with the maintainers of the Redis Object Cache plugin. If the plugin relies on commands that the Bitnami chart disables by default, that mismatch most likely caused the issues in my WordPress upgrade.
For now, however, I can sleep under the warm blanket of Argo rollbacks!
For what seems like a long time now, I have been using RittmanMead’s md_to_conf project to automatically publish documentation from Markdown files into Confluence Cloud. I am on something of a documentation kick, and what started as a quick fix ultimately turned into a new project.
It all started with the word “legacy”…
In publishing our Markdown files, I noticed that the pages all had the legacy editor text in Confluence Cloud. I wanted to move our pages to the updated editor, and my gut reaction was “well, maybe it is because the md_to_conf project is using the v1 APIs.”
The RittmanMead project is great, but it has not changed in about a year. Now, granted, once things work, I wouldn’t expect much change.
So I forked the project and started changing some API calls. The issue is, well, I just did not know when to stop. My object-oriented tendencies took over, and I ended up way past the point of no return:
Split converter and client code into separate Python classes to simplify the main module.
Converted the entire project to a Python module and built a wheel for simplified installation and execution.
Added Flake8 and Black for linting and formatting.
Added a GitHub workflow for building and publishing.
Added steps to analyze the code in SonarCloud.io.
It is worth noting that, at the end of the day, the editor value was already supported in RittmanMead/md_to_conf: You just have to set the version argument to 2 when running. I found that out about an hour or so into my journey, but, by that time, I was committed.
Making a break for it
At this point, a few things had happened:
My code had diverged greatly from the RittmanMead project.
Most likely, the functionality and purpose had drifted from what the original project was written for.
I broke support for Confluence Server.
I have some plans for additional features for the module, including the ability to pull labels from the Markdown.
With that in mind, I had GitHub “break” the fork connection between my repository and the RittmanMead repository.
Let me be VERY clear: the README will ALWAYS reflect the source of this repository. This project would not be possible without the contributors to the RittmanMead script, and whatever happens in my repository builds on their fine work. But I have designs on a more formal package and process, as well as my own functional roadmap, so a split makes the most sense.
Introducing md-to-conf
With that in mind, I give you md-to-conf (PyPI / GitHub)! I will be adding some issues for feature enhancements and work on them as I can, although my first order of business will most likely be some basic testing to make sure I don’t break anything as I work.
If you have a desire to contribute, please see the contribution guideline and have at it!
One of the most important responsibilities of a software engineer/architect is documenting what they have done. It is great that they solved the problem, but that solution will become a problem for someone else if it has not been well documented. Truthfully, I have caused my own problems when revisiting old, poorly documented code.
Documentation Generation
Most languages today have tools that will extract code comments and turn them into formatted content. I used mkdocs to generate documentation for my simple Python monitoring tool.
Why generate from the code? The simple reason is, if the documentation lives within the code, then it is more likely to be updated. Requiring a developer to jump into another repository or project takes time and can be a forgotten step in development. The more automatic your documentation is, the more likely it is to get updated.
Infrastructure as Code (IaC) is no different. Lately I have been spending some time in Terraform, and sought out a solution for documenting Terraform. terraform-docs to the rescue!
Documenting Terraform with terraform-docs
The Getting Started guides for terraform-docs are pretty straightforward and let you generate content for a variety of different targets. All I really wanted was a simple README.md file for my projects and any sub-modules, so I left the defaults pretty much as-is.
I typically structure my repositories with a main terraform folder, and sub-modules under that if needed. Without any additional configuration, this command worked like a charm:
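The command itself was more or less the recursive Markdown generation straight out of the terraform-docs README; something along these lines, with the folder name matching my layout:

```shell
terraform-docs markdown table --output-file README.md --recursive ./terraform
```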
It generated a README.md file for my main module and all sub-modules. I played a little with configuration, mostly to set up default values in the configuration YAML file so that others could run a simpler command:
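A sketch of that configuration file; these are the settings I cared about rather than an exhaustive copy:

```yaml
# .terraform-docs.yml (sketch)
formatter: markdown table
output:
  file: README.md
  mode: inject
recursive:
  enabled: true
```

With that checked in, the command shrinks to roughly terraform-docs ./terraform (or terraform-docs -c .terraform-docs.yml ./terraform if the config lives elsewhere).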
About a month ago, I signed up for Google’s generative AI offerings: SGE (generative AI in Search) and Code Tips. I was accepted to the labs a few weeks ago, and, overall, it’s been a pretty positive experience.
SGE – Yet Another Acronym
I don’t really have any idea what SGE means, but it is essentially the generative AI component of search. The implementation is fairly intrusive: your search is fed into the generative AI, which processes it and returns answers that show at the top of the page. Longer searches require you to click a button to submit the phrase to generative AI, but short searches are always submitted.
Why is that intrusive? Well, when the results come back, they can take up nearly all of the screen real estate, pushing the actual search results to the bottom. In most cases, I just want the search results, not the generative AI’s opinion on the matter. I would prefer to submit a search to generative AI explicitly: type your query and hit a separate “Submit to SGE” button, at which point it would both search and send the phrase to SGE, while a standard search stayed exactly as it is today.
As to accuracy: It’s about what I’ve seen from other tools, but nothing that really gets me overly excited. Truthfully, Google’s search algorithm tends to return results that are more meaningful than the SGE results.
Code Tips
Where SGE failed, Code Tips has succeeded with flying colors. These are the types of searches that I perform on a daily basis, and Code Tips does a pretty good job with them.
For example, this search:
powershell create object array
returns the following suggestion from Code Tips:
```powershell
$array = @()
$array += [pscustomobject]@{
    name = "John Doe"
    age  = 30
}
$array += [pscustomobject]@{
    name = "Jane Doe"
    age  = 25
}
```
Every Code Tip comes with a “use code with caution” warning and a link to a legal page around responsible use and citations. In other words, Google is playing “CYA”.
Code Tips works great for direct questions, but some indirect questions, like c# syntax for asynchronous lambda function, return nothing. However, with that same search string, SGE returns a pretty good synopsis of how to write an asynchronous lambda function.
Overall Impressions
Code Tips: a solid B+. I would expect it to get better, but, unlike GitHub Copilot, it’s not built into an IDE, which means I’m still defaulting to “ask the oracle” for answers.
SGE, I can’t go above a C+. It’s a little intrusive for my taste, and right now, the answers it provides aren’t nearly as helpful as the search results that I now need to scroll to see.
Through sheer happenstance I came across a posting for The Jaggerz playing near me and was taken back to my first time hearing “The Rapper.” I happened to go to school with one of the members’ kids, which made it all the more fun to reminisce.
But I digress. I spent time a while back getting Packer running at home to take care of some of my machine provisioning. At work, I have been looking for an automated mechanism to keep some of our build agents up to date, so I revisited this and came up with a plan involving Packer and Terraform.
The Problem
My current problem centers on the need to update our machine images weekly while still using Terraform to manage our infrastructure. In the case of Azure DevOps, we can provision VM Scale Sets and assign those Scale Sets to an Azure DevOps agent pool. But when I want to update that image, I can do it in two different ways:
Using Azure CLI, I can update the Scale Set directly.
I can modify the Terraform repository to update the image and then re-run Terraform.
Now, #1 sounds easy, right? Run a command and I’m done. But it defeats the purpose of Terraform, which is to maintain infrastructure as code. So I started down path #2.
Packer Revisit
I previously used Packer to provision Hyper-V VMs, and the azure-arm builder is pretty similar. I was able to configure a simple Windows-based VM and get the only application I needed installed with a Powershell script.
One app? On a build agent? Yes, this is a very particular agent, and I didn’t want to install it everywhere, so I created a single agent image with the necessary software.
Mind you, I have been using the runner-images Packer projects to build my Ubuntu agent at home, and we use them to build both Windows and Ubuntu images at work, so, by comparison, my project is wee tiny. But it gives me a good platform to test. So I put a small repository together with a basic template and a Powershell script to install my application, and it was time to build.
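Stripped down, the template is in the same spirit as this sketch; the names, sizes, and source image are placeholders, and the real template needs the service principal details and a few more Azure settings than I am showing:

```hcl
# build-agent.pkr.hcl (sketch)
variable "image_name" {
  type = string
}

source "azure-arm" "agent" {
  # client_id / client_secret / subscription_id / tenant_id omitted here;
  # more on how those get supplied a little further down
  location = "eastus"
  vm_size  = "Standard_D4s_v5"

  # Start from a stock Windows Server image...
  os_type         = "Windows"
  image_publisher = "MicrosoftWindowsServer"
  image_offer     = "WindowsServer"
  image_sku       = "2022-datacenter-azure-edition"

  # ...and capture the result as a managed image Terraform can find by name
  managed_image_name                = var.image_name
  managed_image_resource_group_name = "rg-build-images"

  communicator   = "winrm"
  winrm_use_ssl  = true
  winrm_insecure = true
  winrm_username = "packer"
}

build {
  sources = ["source.azure-arm.agent"]

  # Install the one application this agent exists to provide
  provisioner "powershell" {
    script = "./scripts/install-app.ps1"
  }
}
```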
Creating the Build Pipeline
My build process should be, for all intents and purposes, one step that runs the packer build command, which creates the image in Azure. I found the PackerBuild@1 task and thought my job was done. It would seem that the Azure DevOps task hasn’t kept up with the times; either that, or Packer’s CLI needs help.
I wanted to use the PackerBuild@1 task to take advantage of the service connection. I figured that if I could run the task with a service connection, I wouldn’t have to store service principal credentials in a variable library. As it turns out… well, I would have to do that anyway.
When I tried to run the task, I got an error that “packer fix only supports json.” My template is in HCL format, and everything I have seen suggests that Packer would rather move to HCL. Not to be beaten, I looked at the code for the task to see if I could skip the fix step.
Not only could I not skip that step, but when I dug into the task, I noticed that I wouldn’t be able to use the service connection parameter with a custom template. So with that, my dreams of using a fancy task went out the door.
Plan B? Use Packer’s ability to pull environment variables in as default values, and set those environment variables in a Powershell script before I run the Packer build. It is not super pretty, but it works.
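In Packer HCL that looks roughly like this; the variable and environment variable names are illustrative:

```hcl
# Credentials default to environment variables, which a Powershell step in the
# pipeline sets (from the variable library) right before running "packer build"
variable "client_id" {
  type    = string
  default = env("ARM_CLIENT_ID")
}

variable "client_secret" {
  type      = string
  sensitive = true
  default   = env("ARM_CLIENT_SECRET")
}
```

The source block then references var.client_id and var.client_secret instead of hard-coded values.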
The next step was terraforming the VM Scale Set. If you are familiar with Terraform, the VM Scale Set resource in the AzureRM provider is pretty easy to use. I used the Windows VM Scale Set, as my agents will be Windows based. The only “trick” is finding the image you created, but, thankfully, that can be done by name using a data block.
From there, just set source_image_id to data.azurerm_image.image.id, and you’re good. Why look this up by name? It makes automation very easy.
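A sketch of that lookup and wiring; the resource and variable names are hypothetical, and the scale set needs more arguments (network interface, OS disk, and so on) than shown:

```hcl
# Find the image Packer just built by its well-known name
data "azurerm_image" "image" {
  name                = var.agent_image_name
  resource_group_name = var.image_resource_group_name
}

resource "azurerm_windows_virtual_machine_scale_set" "build_agents" {
  name                = "vmss-build-agents"
  resource_group_name = var.resource_group_name
  location            = var.location
  sku                 = "Standard_D4s_v5"
  instances           = 2
  admin_username      = var.admin_username
  admin_password      = var.admin_password

  # Point the scale set at the custom image instead of a marketplace image
  source_image_id = data.azurerm_image.image.id

  # network_interface, os_disk, etc. omitted for brevity
}
```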
Gluing the two together
So I have a pipeline that builds an image, and I have another pipeline that executes the Terraform plan/apply steps. The latter is triggered on a commit to main in the Terraform repository, so how can I trigger a new build?
All I really need to do is “reach in” to the Terraform repository, update the variable file with the new image name, and commit it. This can be automated, and I spent a lot of time doing just that as part of implementing our GitOps workflow. In fact, as I type this, I realize that I probably owe a post or two on how exactly we have done that. But, using some scripted git commands, it is pretty easy.
So, my Packer build pipeline will check out the Terraform repository, change the image name in the variable file, and commit. This is where the image name is important: Packer does not spit out the Azure image ID (at least, not that I saw), so having a known name makes it easy to simply tell Terraform the new image name and let it look up the ID from there.
What’s next?
This was admittedly pretty easy, but only because I have been using Packer and Terraform for some time now. The learning curve is steep, but as I look across our portfolio, I can see areas where these types of practices can help us by allowing us to build fresh machine images on a regular cadence, and stop treating our servers as pets. I hope to document some of this for our internal teams and start driving them down a path of better deployment.
I dove back into React over the past few weeks, and was trying to figure out whether to use NPM or Yarn for package management. NPM has always seemed slow, and in the few times I tried Yarn, it seemed much faster. So I thought I would put them through their paces.
The Projects
I was able to test on a few different projects, some at home and some at work. All were React 18 with some standard functionality (testing, linting, etc), although I did vary between applications using Vite and component libraries that used webpack. While most of our work projects use NPM, I did want to try with Yarn in that environment, and I ended up moving my home environment to Yarn for the test.
The TL;DR version of this is: Yarn is great and fast, but I had so much trouble authorizing scoped feeds against a Proget NPM feed that I ditched Yarn at work in favor of our NPM standard. At home, where I use public packages, it’s not an issue, so I’ll continue using Yarn there.
Migrating to Yarn
NPM to Yarn 1.x is easy: the commands are pretty much fully compatible, node_modules is still used, and the authentication is pretty much the same. Migrating from Yarn 1 to “modern Yarn” is a little more involved.
The migration overall was easy, at least at home, where I was not dealing with custom registries. At work, I had to use a .yarnrc.yml file to set up some configuration for our NPM registries.
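For the curious, that ends up looking along these lines; the scope, URL, and token variable are placeholders rather than our real values:

```yaml
# .yarnrc.yml (sketch)
npmScopes:
  mycompany:
    npmRegistryServer: "https://proget.example.com/npm/private-npm/"
    npmAlwaysAuth: true
    npmAuthToken: "${NPM_AUTH_TOKEN}"
```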
Notable Differences
Modern Yarn has some different syntax but, overall, is pretty close to its predecessor. It’s notably faster, and if you convert to Plug’n’Play (PnP) package management, your node_modules folder goes away.
The package managers are still “somewhat” interchangeable, save for any npm commands you may have in custom scripts in your package.json file. That said, I would NEVER advise you to use different package managers on the same project.
Yarn is much faster than NPM at pretty much every task. Also, the interactive upgrade plugin makes updating packages a breeze. But, I ran into an authentication problem I could not get past.
The Auth Problem
We use Proget for our various feeds. It provides a single repository for packages and container images. For our NPM packages, we have scoped them to our company name.
In configuring Yarn for these scoped repositories, I was never able to get the authentication working so that I could add a package from our private feeds. The error message was something to the effect of Invalid authentication (as an anonymous user). All my searching yielded no good solutions, in spite of hard-coding a valid auth token in the .yarnrc.yml file.
Now, I have been having some “weirder” issues with NPM authentication as well, so I am wondering if it is machine specific. I have NOT yet tested at home, which I will get to. However, my work projects have other deadlines, and I wasn’t about to burn cycles on getting auth to work. So, at work, I backed out of Yarn for the time being.
What to do??
As I mentioned above, some more research is required. I’d like to set up a private feed at home, just to prove that there is either something wrong with my work machine OR something wrong with Yarn connecting to Proget. I’m thinking it’s the former but, until I can get some time to test, I’ll go with what I know.
That said, if it IS just a local issue, I will make an effort to move to Yarn. I believe the speed improvements are worth it alone, but there are some additional benefits that make it a good choice for package management.