Category: Software

  • The trip to Windows 11

    The past five weeks have been a blur. Spring soccer is in full swing, and my time at the keyboard has been limited to mostly work. A PC upgrade at work started a trickle-down upgrade, and with things starting to settle it’s probably worth a few notes.

    Swapping out the work laptop

    My old work laptop, a Dell Precision 7510, was about a year past our typical four-year hardware cycle. I am usually not one to swap right at four years: if the hardware is working for me, I’ll run it till it dies.

    Well, I was clearly killing the 7510: every so often I would get random shutdowns triggered by the thermal protection framework. In other words, to quote Ricky Bobby, “I’m on fire!!!” As the shutdowns became more frequent, I put in a request for a new laptop. Additionally, since my 7510 was so old it couldn’t be redistributed, I requested to buy it back. My personal laptop, an HP Envy 17, is approaching the 11-year mark, so I believe I’ve used it enough.

    I was provisioned a Dell Precision 7550. As it came to me clean with Windows 10, I figured I’d upgrade it to Windows 11 before I reinstalled my apps and tools. That way, if I broke it, I would know before I wasted my time. The upgrade was pretty easy, and outside of some random issues with MS Teams and audio setup, I was back at work with minimal disruption.

    On to the “new old” laptop

    My company approved the buyback of my old Precision 7510, but I did have to ship it back to have it wiped and prepped. Once I got it back, I turned it on and… boot media failure.

    Well, crap. As it turns out, the M.2 SSD died on the 7510. So, off to Amazon to pick up a 1TB replacement. A day later, new M.2 in hand, I was off to the races.

    I put in my USB drive with Windows 11 media, booted, and got the wonderful message that my hardware did not meet requirements. As it turns out, my TPM module was an old firmware version (1.2 instead of the required 2.0), and I was running an old processor that is not in the officially supported list for Windows 11.

    So I installed Windows 10 and tried to boot, but that just failed. As it turns out, the BIOS was set up for legacy boot with Secure Boot disabled, while Windows 11 requires UEFI and Secure Boot. And after changing those BIOS settings, my installed copy of Windows 10 wouldn’t boot. I suppose I could have taken the time to repair the boot loader to get it working… but I just re-installed Windows 10 again.

    So after the second install of Windows 10, I was able to update the TPM Firmware to 2.0, bypass the CPU check, and install Windows 11. I started installing my standard library of tools, and things seemed great.
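    For anyone following the same path: the CPU check can be bypassed with a registry override that Microsoft itself documents for upgrades on unsupported hardware (a TPM, even 1.2, is still required, and you accept that updates are not guaranteed). As a .reg fragment, it looks like this:

    ```
    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\Setup\MoSetup]
    "AllowUpgradesWithUnsupportedTPMOrCPU"=dword:00000001
    ```

    With that value in place, running the Windows 11 setup as an upgrade proceeds past the compatibility gate with a warning instead of a hard stop.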

    Still on fire

    Windows 11, however, seems to have exacerbated the overheating issue. It came to a head when I tried to install JetBrains Rider: every install attempt caused the machine to overheat.

    I found a few different solutions in separate forums, but the one that worked was disabling the “Turbo Boost” setting in the BIOS. Turbo Boost lets the CPU clock itself above its base frequency when thermal headroom allows, a sort of sanctioned overclocking, and disabling it has stabilized my system.

    Impressions

    So far, so good with Windows 11. Of all the changes, the ability to limit each taskbar to the windows open on that monitor in a multi-monitor setup is great, but I am still getting used to the functionality. The organization is logical, but muscle memory is hard to break.

  • Tech Tip – Markdown Linting in VS Code

    With the push toward better documentation, it is worth remembering that Visual Studio Code has a variety of extensions that can help with linting and formatting all types of files, including your README.md files.

    Markdown All in One and markdownlint are my current extensions of choice, and they have helped me clean up my README.md files in both personal and professional projects.
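    As an example of the kind of tuning markdownlint allows, a `.markdownlint.json` in the repository root can enable the full rule set and then relax individual rules. The choices below are illustrative, not a recommendation:

    ```json
    {
      "default": true,
      "MD013": false,
      "MD033": { "allowed_elements": ["br"] }
    }
    ```

    Here MD013 (line length) is disabled entirely, and MD033 (no inline HTML) is relaxed to permit `<br>` tags. The extension picks the file up automatically and flags violations inline.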

  • Not everything you read on the internet is true….

    I spend a lot of time searching the web for tutorials, walkthroughs, and examples. So much so, in fact, that “Google Search” could be listed as a top skill on my resume. With that in mind, though, it’s important to remember that not everything you read on the Internet is true, and to take care in how you approach things.

    A simple plan

    I was trying to create a new ASP.NET / .NET 6 application that I could use to test connectivity to various resources in some newly configured Kubernetes clusters. When I used the Visual Studio 2022 templates, I noticed the new “minimal” styling in the template. This was my first opportunity to try the minimal styling, so I looked to the web for help and came across a tutorial on Medium.com.

    I followed the tutorial and, on my local machine, it worked like a charm. So I built a quick container image and started deploying into my Kubernetes cluster.

    Trouble brewing

    When I deployed into my cluster, I kept receiving SQL connection errors; specifically, that the server was not found. I added some logging, and the connection string seemed correct, but nothing was working.

    I thought it might be a DNS error within the cluster, so I spent at least two hours trying to determine whether there was a DNS issue in my cluster. I even tried another cluster to see if it had to do with our custom subnet settings in the original cluster.

    After a while, I figured out the problem, and was about ready to quit my job. The article has a step to override the OnConfiguring method in the DbContext, like this:

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        var configuration = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json")
            .Build();

        var connectionString = configuration.GetConnectionString("AppDb");
        optionsBuilder.UseSqlServer(connectionString);
    }

    I took this as necessary and put it in there. But what I glossed over is that OnConfiguring runs every time a context instance is configured (including during migrations), and the configuration it builds here pulls ONLY from appsettings.json, not from environment variables or any other configuration source.

    That “Oh……” moment

    At that point I realized that I had been troubleshooting the fact that this particular override was ignoring the connection string I was passing in via environment variables. To make matters worse, the logging I added to print the connection string used the full configuration (environment variables included), so it looked like the correct value was in place. In reality, the context was using the value from appsettings.json the whole time.

    The worst part: that override is completely unnecessary. I removed the entire function, and my code operates as normal.
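    For anyone who lands in the same spot, the conventional alternative is to let the host’s configuration, which already layers appsettings.json, environment variables, and other sources in order, supply the connection string through dependency injection. A minimal sketch, assuming a hypothetical context class named AppDbContext and a connection string named AppDb:

    ```csharp
    // Program.cs (.NET 6 minimal hosting). The host configuration merges
    // appsettings.json and environment variables, so a container variable
    // named ConnectionStrings__AppDb overrides the JSON value as expected.
    var builder = WebApplication.CreateBuilder(args);

    builder.Services.AddDbContext<AppDbContext>(options =>
        options.UseSqlServer(builder.Configuration.GetConnectionString("AppDb")));

    var app = builder.Build();
    app.Run();
    ```

    With the context registered this way, no OnConfiguring override is needed at all, which is exactly why removing it fixed my deployment.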

    No Hard Feelings

    Let me be clear, though: as written, the article does a fine job of walking you through setting up an ASP.NET / .NET 6 application in the minimal API styling. My issue was not recognizing how that OnConfiguring override would behave in the container, and then bouncing around everywhere to figure it out.

    I will certainly be more careful about examining the tutorial code before traipsing around in Kubernetes command lines and DNS tools.

  • Can Yellowstone teach us about IT infrastructure management?

    It seems almost too fitting that, at a time when the popularity of gritty television like Yellowstone and 1883 is climbing, I write to encourage you to stop taking on new pets and start running a cattle ranch.

    Pet versus Cattle – The IT Version

    The “pet versus cattle” analogy is often used to describe different methodologies for IT management. You treat a pet by giving it a name like Zeus or Apollo. You give it a nice home. You nurse it back to health when it’s sick. Cattle, on the other hand, get an ear tag with a number and are sent to roam the field until they are needed.

    Carrying this analogy into IT, I have seen my share of pet servers. We build them up, nurse them back to health after a virus, upgrade them when needed, and do all the things a good pet owner would do. And when they go down, people notice. “Cattle” servers, on the other hand, are quickly provisioned and just as quickly replaced, often without any maintenance or downtime.

    Infrastructure as Code

    In its most basic definition, infrastructure as code is the idea that an environment’s infrastructure can be described in files (preferably source-controlled). Using various tools, we can take that definition and create the necessary infrastructure components automatically.

    Why do we care if we can provision infrastructure from code? Treating our servers as cattle requires a much better management structure than “have Joe stand this up for us.” Sure, Joe is more than capable of creating and configuring all of the necessary components, but if we want to do it again, it requires more of Joe’s time.

    With the ability to define an infrastructure one time and deploy it many times, we gain the capacity to worry more about what is running on the infrastructure than the infrastructure itself.

    Putting ideas to action

    Changing my mindset from “pet” to “cattle” with virtual machines on my home servers has been tremendously valuable for me. As I mentioned in my post about Packer.io, you become much more risk tolerant when you know a new VM is 20 unattended minutes away.

    I have started to encourage our infrastructure team to accept nothing less than infrastructure as code for new projects. In order to lead by example, I have been working through creating Azure resources for one of our new projects using Terraform.

    Terraform: Star Trek or Nerd Tech?

    You may be asking: “Aren’t they both nerd tech?” And, well, you are correct. But when you have a group of people who grew up on science fiction and are responsible for keeping computers running, well, things get mixed up.

    Terraform, the HashiCorp product, is one of a number of tools that allow infrastructure engineers to automatically provision the environments that host their products, using various providers. I have been using its Azure Resource Manager provider, although there are more than a few available.

    While I cannot share the project code, here is what I was able to accomplish with Terraform:

    • Create and configure an Azure Kubernetes Service (AKS) instance
    • Create and configure an Azure SQL Server with multiple databases
    • Attach these instances to an existing virtual network subnet
    • Create and configure an Azure Key Vault
    • Create and configure a public IP address from an existing prefix
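    While the project code stays private, the first bullet above looks roughly like this in HCL. All names, counts, and sizes here are placeholders, not the real project values, and the resource group and subnet are assumed to be defined elsewhere in the configuration:

    ```hcl
    resource "azurerm_kubernetes_cluster" "aks" {
      name                = "aks-demo"
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
      dns_prefix          = "aksdemo"

      default_node_pool {
        name           = "default"
        node_count     = 2
        vm_size        = "Standard_DS2_v2"
        vnet_subnet_id = azurerm_subnet.existing.id # attach to an existing subnet
      }

      identity {
        type = "SystemAssigned"
      }
    }
    ```

    The appeal is that this same block, parameterized per environment, is what makes “destroy and recreate” a routine operation instead of a rebuild project.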

    Through more than a few rounds of destroy and recreate, the project is in a state where it is ready to be deployed in multiple environments.

    Run a ranch

    So, perhaps, Yellowstone can teach us about IT infrastructure management. A cold and calculated approach to infrastructure will lead to a homogeneous environment that is easy to manage, scale, and replace.

    For what it’s worth, 1883 is simply a live-action version of Oregon Trail…

  • A little open source contribution

    The last month has been chock full of things I cannot really post about publicly, namely, performance reviews and security remediations. And while the work front has not been kind to public posts, I have taken some time to contribute back a bit more to the Magic Mirror project.

    Making ToDo Better

    Thomas Bachmann created MMM-MicrosoftToDo, a plugin for the Magic Mirror that pulls tasks from Microsoft’s To Do application. Since I use that app for my day-to-day tasks, it is nice to see my tasks up on the big screen, as it were.

    Unfortunately, the plugin used the old beta version of the APIs, as well as the old request module, which has been deprecated. So I took the opportunity to fork the repo and make some changes. I submitted a pull request to the owner; hopefully it makes its way into the main plugin. But, for now, if you want my changes, check them out here.

    Making StatusPage better

    I also took the time to improve my StatusPage plugin, adding the ability to ignore certain components and removing components from the list when they are part of an incident. I also created a small enhancement list for future use.

    With the holidays and the rest of my “non-public” work taking up my time, I would not expect too much from me for the rest of the year. But I’ve been wrong before…

  • Git, you are messing with my flow!

    The variety of “flows” for developing with Git makes choosing the right one for your team difficult. When you throw true continuous integration and delivery into the mix, and add a requirement for immutable build objects, well… you get a heaping mess.

    Infrastructure As Code

    Some of my recent work to help one of our teams has been creating a Terraform project and Azure DevOps pipeline based on a colleague’s work standardizing Kubernetes cluster deployments in AKS. This work will eventually get its own post, but suffice it to say that I have a repository which is a mix of Terraform files, Kubernetes manifests, Helm value files, and Azure DevOps pipeline definitions to execute them all.

    As I started looking into this, it occurred to me that there really is no clear way to manage this project. For example, do I use a single pipeline definition with stages for each environment (dev/stage/production)? This would mean the Git repo would have one and only one branch (main), and each stage would need checks (manual or automatic) to ensure rollout is controlled.
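    The single-pipeline option sketches out like this in Azure Pipelines YAML. Stage and environment names are illustrative, and the approvals live on the environments themselves rather than in the YAML:

    ```yaml
    # One definition on main; each stage deploys through an environment,
    # and the environment's approvals/checks gate the rollout.
    trigger:
      branches:
        include:
          - main

    stages:
      - stage: Dev
        jobs:
          - deployment: deploy_dev
            environment: dev
            strategy:
              runOnce:
                deploy:
                  steps:
                    - script: terraform apply -auto-approve
      - stage: Production
        dependsOn: Dev
        jobs:
          - deployment: deploy_prod
            environment: production
            strategy:
              runOnce:
                deploy:
                  steps:
                    - script: terraform apply -auto-approve
    ```

    The trade-off is that every change to main marches toward production unless a human (or an automated check) stops it at an environment gate.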

    Alternatively, I can have an Azure Pipeline for each environment. This would mean that each pipeline could trigger on its own branch. However, it also means that no standard Git Flow seems to work well with this.

    Code Flows

    Quite separately, as the team dove into creating new repositories, the question again came up around branching strategy and subsequent environment strategies for CI/CD. I am a vocal proponent of immutable build objects, but how each team chooses to get there is up to them.

    In some MS Teams channel discussions, we found pros and cons to nearly all of our current methods, and the team seems stuck on the best way to build and develop.

    The Internet is not helping

    Although it is a few years old, Scott Shipp’s War of the Git Flows article highlights the ongoing “flow wars.” One of the problems with Git is the ease with which all of these variations can be implemented. I am not blaming Git, per se, but because it is easy for nearly anyone to suggest a branching strategy, things get muddled quickly.

    What to do?

    Unfortunately, as with many things in software, there is no one right answer. The branch strategy you use depends on the requirements of not just the team, but the individual repository’s purpose. Additional requirements may come from the desired testing, integration, and deployment process.

    With that in mind, I am going to make two slightly radical suggestions:

    1. Select a flow that is appropriate for the REPOSITORY, not your team.
    2. Start backwards: work on identifying the requirements for your deployment process first. Answer questions like these:
    • How will artifacts, once created, be tested?
    • Will artifacts progress through lower environments?
    • Do you have to support old releases?
    • Where (what branch) can release candidate artifacts come from? Or, what code will ultimately be applied to production?

    That last one is vital: defining on paper which branches will either be applied directly to production (in the case of Infrastructure as Code) or generate artifacts that can end up in production (in the case of application builds) will help outline the requirements for the project’s branching strategy.

    Once you have identified that, the next step is defining the team’s requirements WITHIN the above. In other words, try not to have the team hammer a square peg into a round hole. The team has a branch or two that will generate release code; how they get their code into that branch should be as quick and direct as possible while supporting the necessary collaboration.

    What’s next

    What’s that mean for me? Well, for the infrastructure project, I am leaning towards a single pipeline, but I need to talk to the Infrastructure team to make sure they agree. As to the software team, I am going to encourage them to apply the process above for each repository in the project.

  • Making use of my office television with Magic Mirror

    I have a wall-mounted television in my office that, 99% of the time, sits idle. Sadly, the fully loaded RetroPie attached to it doesn’t get much Super Mario Bros. action during the workday. But that idle Raspberry Pi had me thinking of ways to utilize the extra screen in my office. Since, well, four monitors are not enough.

    Magic Mirror

    At first, I contemplated writing my own application in .NET 5. But I really do not have the kind of time it would take to get something like that moving, and it seems counter-productive. I wanted something quick and easy, with no input interface required (it is a television, after all), and capable of displaying feed data quickly. That is when I stumbled on Magic Mirror 2.

    Magic Mirror is a Node-based app which uses Electron to display HTML on various platforms, including on the Raspberry Pi. It is popular, well-documented, and modular. It has default modules for weather and clock, as well as a third-party module for package tracking.

    Server Status… Yes please.

    There are several modules for displaying status, but I saw nothing that let me display the status from my statuspage.io page. And since statuspage.io has a public API, unique to each page, that doesn’t require an API key, it felt like a good first module to develop for myself.

    I spent a lot of time understanding the module aspect of MagicMirror, but, in the end, I have a pretty simple module that displays my statuspage.io status on the mirror.
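    For anyone curious what “the module aspect” boils down to, the skeleton of a module is small. The module name and config keys below are illustrative, not my actual plugin; MagicMirror loads the file and calls the registered hooks:

    ```javascript
    Module.register("MMM-ExampleStatus", {
      defaults: {
        updateInterval: 60 * 1000, // refresh once a minute
      },

      start() {
        this.statusText = "Loading…";
        setInterval(() => this.updateDom(), this.config.updateInterval);
      },

      getDom() {
        // MagicMirror inserts whatever DOM node this returns.
        const wrapper = document.createElement("div");
        wrapper.className = "small";
        wrapper.innerHTML = this.statusText;
        return wrapper;
      },
    });
    ```

    Fetching the actual status data typically lives in a companion node_helper so the browser side stays a simple render loop.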

    What is next?

    Well, there are two things I would really like to try:

    1. Work on integrating a floor plan of my house, complete with status from my home automation.
    2. Re-work MagicMirror as a React app.

    For the first one, well, there are some existing third-party components that I will test. That second one, well, that seems a slightly taller task. That might have to be a winter project.

  • Packages, pipelines, and environment variables…. oh my!

    I was clearly tempting the fates of package management when I called out NPM package management. NuGet was not to be outdone, and threw us a curveball in one of our Azure DevOps builds that is too good not to share.

    The Problem

    Our Azure Pipelines build was erroring out; however, it was failing in different steps, and in steps that seemingly had no connection to the changes that were made.

    Error 1 – Unable to install GitVersion.Tool

    We use GitVersion to version our builds and packages. We utilize the GitTools Azure DevOps extension to provide us with pipeline tasks to install and execute GitVersion. Most of our builds use step templates to centralize common steps, so these steps are executed every time, usually without error.

    However, in this particular branch, we were getting the following error:

    --------------------------
    Installing GitVersion.Tool version 5.x
    --------------------------
    Command: dotnet tool install GitVersion.Tool --tool-path /agent/_work/_temp --version 5.7.0
    /usr/bin/dotnet tool install GitVersion.Tool --tool-path /agent/_work/_temp --version 5.7.0
    The tool package could not be restored.
    /usr/share/dotnet/sdk/5.0.402/NuGet.targets(131,5): error : 'feature' is not a valid version string. (Parameter 'value') [/tmp/t31raumn.tj5/restore.csproj]

    Now, I would have concentrated more on the “‘feature’ is not a valid version string” error initially; however, I was distracted because, sometimes, we got past this error only to hit another one.

    Error 2 – Dotnet restore failed

    So, sometimes (not always), the pipeline would get past the installation of GitVersion and make it about three steps forward, into the dotnet restore step. Buried in that restore log was a similar error to the first one:

    error : 'feature' is not a valid version string. (Parameter 'value')

    Same error, different steps?

    So, we were seeing the same error, but in two distinct places: one in a step which installs the tool, and presumably has nothing to do with the code in the repository, and the other which is squarely around the dotnet restore command.

    A quick Google search yielded an issue in the dotnet core GitHub repository. At the tail end of that thread is this little tidbit from user fabioduque:

    The same thing happened to me, and it was due to me setting up the Environment Variable “VERSION”.
    Solved with with:
    set VERSION=

    Given that, I put a quick step in to print out the environment variables and determined the problem.
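    That “quick step” is worth showing, since it is such a cheap diagnostic. The step syntax is standard Azure Pipelines YAML; where you place it in the job is up to you:

    ```yaml
    # Temporary debugging step: dump every environment variable the agent
    # exposes to tasks, sorted, so stray names like VERSIONPREFIX stand out.
    - script: env | sort
      displayName: Dump environment variables
    ```

    On a Windows agent, `set` in a cmd-based script step serves the same purpose.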

    Better the devil you know…

    As the saying goes: better the devil you know than the devil you don’t. In this case, the devil we didn’t know about was an environment variable named VERSIONPREFIX. We were not expressly setting that variable in a script, but one of our pipeline variables was named versionPrefix. Azure DevOps makes pipeline variables available as environment variables, standardizing the names into all caps, so we ended up with VERSIONPREFIX. NuGet, in versions after 3.4, provides for applying settings at runtime via environment variables, and since VERSIONPREFIX is applicable in dotnet/NuGet, our variable was causing these seemingly random errors. I renamed the variable and, voilà, no more random errors.
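    The name transformation is easy to reproduce outside the agent. The snippet below only mirrors the documented upper-casing behavior (the agent does this internally, not with tr), but it makes the collision obvious:

    ```shell
    # A pipeline variable's name is upper-cased (and dots become underscores)
    # when exported to the task environment, so "versionPrefix" arrives as
    # VERSIONPREFIX -- exactly a name dotnet/NuGet will read as a setting.
    name="versionPrefix"
    env_name=$(printf '%s' "$name" | tr '[:lower:]' '[:upper:]' | tr '.' '_')
    echo "$env_name"   # VERSIONPREFIX
    ```
    
    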

    The Short, Short Version

    Be careful when naming variables in Azure Pipelines. They are made available, as named (and upper-cased), via environment variables. If any of the steps or commands you use accept those environment variables as settings, you may be inadvertently affecting your builds.

  • Node_Modules is the new DLL hell… Change my mind.

    All I wanted to do over the weekend was take a React 16 component library, copy it, strip out the components (leaving the webpack configuration intact), and upgrade its packages.

    To call it a nightmare is being nice. My advice to anyone: upgrade packages one at a time, and test rigorously between each upgrade. Not just “oh, it installed,” but make sure everything is compatible with everything else.

    I ran into at least three issues that were “this package relies on an older version of this other package, but you installed the newer package” or, even better, “this package relies on a version of a plugin package that is inconsistent with the core version of the plugin’s base package.” Note, that wasn’t the error, just my finding based on a tremendous amount of spelunking.

    This morning, I literally gave up: I reverted to a known set of working packages and versions, deleted node_modules, cleared my NPM cache, and started over.

  • Lab time is always time well spent.

    The push to get a website out for my wife brought to light my neglect of my authentication site, both in function and style. As one of the ongoing projects at work has centered around Identity Server, a refresher course would help my cause. So over the last week, I dove into my Identity Server to get a better feel for its function and to push some much-needed updates.

    Function First

    I had been running Identity Server 4 in my local environments for a while. I wanted a single point of authentication for my home lab, and since Identity Server has been in use at my company for a few years now, it made sense to continue that usage in my lab environment.

    I spent some time over the last few years expanding the Quickstart UI to allow for visual management of the Identity Server. This includes utilizing the Entity Framework integration for the configuration and operational stores and the ASP.NET Identity integration for user management. The system works exactly as expected: I can lock down APIs or UIs under a single authentication service, and I can standardize my authentication and authorization code. It has resulted in way more time for fiddling with new stuff.

    The upgrade from Identity Server 4 to Identity Server 5 (now Duende IdentityServer) was very easy, thanks to a detailed upgrade guide from the folks at Duende. And, truthfully, I left it at that for a few months. Then I got the bug to use Google as an external identity provider. After all, what would be nicer than not having to worry about remembering account passwords for my internal server?

    When I pulled the latest quick start for Duende, though, a lot had changed. I spent at least a few hours moving changes in the authorization flow to my codebase. The journey was worth it, however: I now have a working authentication provider which is able to use Google as an identity provider.
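    For reference, wiring Google in is only a few lines on top of the quickstart. The configuration key names for the client ID and secret are placeholders of my choosing; the external cookie scheme is Duende’s constant:

    ```csharp
    // Register Google as an external identity provider. Sign-in results land
    // in Identity Server's external cookie and are then processed by the
    // external-login callback that ships with the quickstart UI.
    builder.Services.AddAuthentication()
        .AddGoogle("Google", options =>
        {
            options.SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme;
            options.ClientId = builder.Configuration["Authentication:Google:ClientId"];
            options.ClientSecret = builder.Configuration["Authentication:Google:ClientSecret"];
        });
    ```

    The time sink was not this registration but reconciling the reworked authorization flow in the newer quickstart with my customized codebase.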

    Style next

    As I was messing around with the functional side, I got pretty tired of the old CoreUI theme I had applied some years ago. I went looking for a Bootstrap 5 compatible theme and found one in PlainAdmin. It is simple enough for my identity server, but a little more crisp than the old CoreUI.

    As is the case with many a software project, the functional changes took a few hours; the visual changes, quite a bit longer. I had to shuffle the layout to better conform to PlainAdmin’s templates, and I had a hell of a time stripping out the old SCSS to make way for the new. I am quite sure my app is still bloated with unnecessary styles, but cleaning that up will have to wait for another day.

    And finally a logo…

    What’s a refresh without a new logo?

    A little background: spyder007, in some form or another, has been my online nickname/handle since high school. It is far from unique, but it is a digital identity I still use. So Spydersoft seemed like a natural name for private projects.

    I started with a spider logo I found with a good Creative Commons license. But I wanted something a little more, well, plain. So I did some searching and found a “web” design that I liked, also CC licensed. I edited it to be square, which had the unintended effect of making the web look like a cube… and I liked it.

    The new Spydersoft logo

    So with about a week’s worth of hobby-time tinkering, I was able to add Google as an identity provider and restyle my authentication site. As I’m the only person who sees that site, it may seem a bit much, but the lessons learned, particularly around the Identity Server functional pieces, have already translated into valuable insights for the current project at work.