• Moving to Ubuntu 24.04

    I have a small home lab running a few Kubernetes clusters, and a good bit of automation to deal with provisioning servers for the K8s clusters. All of my Linux VMs are based on Ubuntu 22.04. I prefer to stick with LTS releases for stability and compatibility.

    As April turns into July (missed some time there), I figured Ubuntu’s latest LTS (24.04) has matured to the point that I could start the process of updating my VMs to the new version.

    Easier than Expected

    In my previous move from 20.04 to 22.04, there were changes to the automated installers for 22.04 that forced me down the path of re-testing my Packer provisioning with the 22.04 ISOs. I expected similar changes with 24.04, so I was pleasantly surprised when I realized that my existing scripts should work well with the 24.04 ISOs.

    I did spend a little time updating the Azure DevOps pipeline that builds a base image so that it supports building both 22.04 and 24.04 images. I want to make sure I have the option to fall back to the 22.04 images, should I find a problem with 24.04.
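    As a rough sketch (parameter and file names here are illustrative, not my actual pipeline), supporting both versions boils down to a runtime parameter passed through to Packer:

    ```yaml
    # azure-pipelines.yml — illustrative sketch, not the actual pipeline
    parameters:
      - name: ubuntuVersion
        displayName: Ubuntu LTS version
        type: string
        default: '24.04'
        values:
          - '22.04'
          - '24.04'

    steps:
      # Hypothetical Packer template/variable names for illustration
      - script: >
          packer build
          -var "ubuntu_version=${{ parameters.ubuntuVersion }}"
          ubuntu-base.pkr.hcl
        displayName: Build Ubuntu ${{ parameters.ubuntuVersion }} base image
    ```

    Queuing the pipeline then presents a simple dropdown, so either image can be rebuilt on demand.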

    Migrating Cluster Nodes

    With a base image provisioned, I followed my normal process for upgrading cluster nodes on my non-production cluster. There were a few hiccups, mostly around some of my automated scripts needing updated settings to set hostnames correctly.

    Again, other than some script debugging, the process worked with minimal changes to my automation scripts and my provisioning projects.

    Azure DevOps Build Agent?

    Perhaps in a few months. I use the GitHub runner images as a base for my self-hosted agents, but there are some changes that need manual review. I destroy my Azure DevOps build agent weekly and generate a new one, and that’s a process that I need to make sure continues to work through any changes.

    The issue is typically time: the build agents take a few hours to provision because of all the tools that are installed. Testing that takes time, so I have to plan ahead. Plus, well, it is summertime, and I’d much rather be in the pool than behind the desk.

  • Drop that zero…

    I ran into a very weird issue with NuGet packages and the old packages.config reference style.

    NuGet vs. Semantic Versioning

    NuGet grew up in Windows, where assembly version numbers support four numbers: major.minor.build.revision. Therefore, NuGetVersion supports all four version segments. Semantic versioning, on the other hand, supports three numbers plus additional labels.

    As part of NuGet’s version normalization, in an effort to better support semantic versioning, the fourth version segment is dropped if it is zero. So 1.2.3.0 becomes 1.2.3. In general, this does not present any problems, since the version numbers are retrieved from the feed by the package manager tools and the references are updated accordingly.

    Always use the tools provided

    When you ignore the tooling, well, stuff can get weird. This is particularly true in the old packages.config reference style.

    In that style, packages are listed in a packages.config file, and the .NET project file adds a reference to the DLL with a HintPath. That HintPath includes the folder where the package is installed, something like this:

    <ItemGroup>
        <Reference Include="MyCustomLibrary, Version=1.2.3.4, Culture=neutral, processorArchitecture=MSIL">
          <HintPath>..\packages\MyCustomLibrary.1.2.3.4\lib\net472\MyCustomLibrary.dll</HintPath>
        </Reference>
    </ItemGroup>

    But, for argument’s sake, let us assume we publish a new version of MyCustomLibrary, version 1.2.4. Even though the AssemblyVersion might be 1.2.4.0, the NuGet version will be normalized to 1.2.4. And suppose that, instead of upgrading the package using one of the package manager tools, you just update the reference in the project file manually, like this:

    <ItemGroup>
        <Reference Include="MyCustomLibrary, Version=1.2.4.0, Culture=neutral, processorArchitecture=MSIL">
          <HintPath>..\packages\MyCustomLibrary.1.2.4.0\lib\net472\MyCustomLibrary.dll</HintPath>
        </Reference>
    </ItemGroup>

    This can cause weird issues. Because of normalization, the package is actually installed to a MyCustomLibrary.1.2.4 folder, so the HintPath above points at a folder that does not exist. Depending on how the package is used or referenced, the project will most likely build with only a warning about not being able to find the DLL, rather than an error (I did not get one). But the build did not include the required library.

    Moving on…

    The “fix” is easy: use the NuGet tools (either the CLI or the Visual Studio Package Manager) to update packages. They will generate the appropriate HintPath for the package that is installed. An even better solution is to migrate to the PackageReference style, where the project file includes the NuGet references directly and packages.config is not used. That style produces immediate errors if an incorrect version is referenced.
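    For comparison, in the PackageReference style the same dependency is a single entry in the project file, with no HintPath to fall out of sync (using the hypothetical MyCustomLibrary package from above):

    ```xml
    <ItemGroup>
      <!-- NuGet resolves the assembly path itself; no HintPath to maintain -->
      <PackageReference Include="MyCustomLibrary" Version="1.2.4" />
    </ItemGroup>
    ```

    If the version listed here does not exist on the feed, restore fails loudly instead of silently dropping the DLL.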

  • Isolating your Azure Functions

    I spent a good bit of time over the last two weeks converting our Azure Functions from the in-process model to the isolated worker process model. Overall, the transition was fairly simple, but there were a few bumps in the proverbial road worth noting.

    Migration Process

    Microsoft Learn has a very detailed How To Guide for this migration. The guide includes steps for updating the project file and references, as well as additional packages that are required based on various trigger types.

    Since I had a number of functions to process, I followed the guide for the first one, and that worked swimmingly. However, then I got lazy and started the “copy-paste” conversion. In that laziness, I missed a particular section of the project file:

    <ItemGroup>
      <Using Include="System.Threading.ExecutionContext" Alias="ExecutionContext"/>
    </ItemGroup>

    Unfortunately, forgetting this will not break your local development environment. However, when you publish the function to Azure, it will not execute correctly.

    Fixing Dependency Injection

    When using the in-process model, there are some “freebies” that get added to the dependency injection system, as if by magic. ILogger, in particular, could be automatically injected into the function (as a function parameter). In the isolated model, however, you must get ILogger from either the FunctionContext or through dependency injection into the class.

    As part of our conversion, we removed the function parameters for ILogger and replaced them with service instances retrieved through dependency injection at the class level.

    What we did not realize until we got our functions into the test environments was that IHttpContextAccessor was not available in the isolated model. Apparently, that particular interface is available as part of the in-process model automatically, but is not added as part of the isolated model. So we had to add an instance of IHttpContextAccessor to our services collection in the Program.cs file.
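    As a sketch, the Program.cs wiring looks something like this (assuming the standard HostBuilder setup for the isolated model; this is illustrative, not our exact file):

    ```csharp
    // Program.cs — illustrative sketch for the isolated worker model.
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    var host = new HostBuilder()
        .ConfigureFunctionsWorkerDefaults()
        .ConfigureServices(services =>
        {
            // IHttpContextAccessor is not registered automatically in the
            // isolated model, so add it explicitly.
            services.AddHttpContextAccessor();
            // ...other service registrations...
        })
        .Build();

    host.Run();
    ```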

    It is never easy

    Upgrades or migrations are never just “change this and go.” As much as we try to make it easy, there always seems to be a little change here or there that ends up being a fly in the ointment. In our case, we simply assumed that IHttpContextAccessor was there because in-process put it there, and the code that needed it was a few layers deep in the dependency tree. The only way to find the problem was to make the change and see what breaks. And that is what keeps quality engineers up at night.

  • I appreciate feedback, but…

    I am really tired of deleting hundreds of spam comments every couple of days. While I have had a few posts generate some good feedback, generally, all I get is spam.

    It was not too bad until the last few months, when spam volume increased by an order of magnitude. I would rather not burn resources, even for a few days, on ridiculous incoming spam.

    So, while I really appreciate any feedback on my posts, you will have to find another channel through which to contact me. The cost of managing spam comments far outweighs anything I have gained from the comments I have received.

  • Hitting my underwater stride

    It’s not always about tech. A recent trip to Cozumel has only strengthened my resolve to continue my underwater adventures.

    Hitting the Road

    Neither my wife nor I had ever been to Cozumel. Sure, we have been to Mexico a few times, including taking my kids to Cancun the past few summers. But, and I cannot quite stress this enough, Cozumel isn’t quite Mexico. This quiet little island, situated about 12 miles off the Mexican shores of Quintana Roo, is a tourist mecca.

    We were able to get 5 nights away this time. Rather than dive four mornings, we took the opportunity to rent a Jeep and drive around the island. You can pretty much divide Cozumel into 4 parts:

    1. Town: San Miguel de Cozumel is the port city where multiple cruise ships can dock and offload their thousands of passengers. Plenty of shops, restaurants, beach clubs, and activities are available.
    2. Leeward beaches: The leeward beaches on the west side of the island, south of town, are either resorts or beach clubs which charge an admission fee. Most of the coast is rocky, but little wave action and coarse white sand make for a great beach day.
    3. Windward beaches: The east side of the island has significantly more wave action, with some beaches that offer a little more fine sand (more waves = finer sand). Still rocky, but more opportunity for water activities like kite surfing and surfing.
    4. Nature preserve: The north end of Cozumel is mostly nature preserve. There are some beach clubs and islands north of town, but we did not venture in that direction.

    The island caters to cruise ships. Certain activities, including the Mayan ruins, are only open when cruise ships are in port. “No cruise ships, no money” was a phrase I heard more than a few times. As we rented our Jeep on a day with no cruise ships, we missed out on some of those activities. We also missed out on the mass of humanity coming from those ships, so I was not terribly mad.

    If you venture to Cozumel, bring cash! Many places on the east side of the island are remote, with little cellular signal of any kind. Many places do not take credit cards, or charge a service fee for using cards. The west side is a little more tourist friendly, but it’s never a bad idea to have some cash. Most places seem to accept the US dollar, but pesos aren’t a terrible idea.

    Jump In!

    The Cozumel barrier reef is part of one of the largest reef systems on the planet. A quick glance in the luggage area at the airport will tell you it is a scuba diver’s destination. There are a ton of dive shops on the island, so many that I used two different ones for my three dive days.

    Nearly everyone does a two tank dive, with prices ranging from $80 to $110 for each two tank dive. Gear rental is available; I had to rent a BCD and regulator, which set me back about $25 USD a day.

    In 6 dives, we dove 6 unique spots. Both dive shops did a “deep/shallow” dive, with deep dives being wall dives that range from 75-85 feet, and shallow dives in the 50-65 foot range. One thing that caught my attention was the lack of attention to certification levels.

    I got my PADI advanced open water certification last year so that I would be certified for depths up to 100 feet. PADI open water divers are only certified to 60 feet. By that standard, you need an advanced open water certification to dive the deeper walls. I’m fairly certain that some of the folks I dove with did not have that level of certification. Now, it is not my business: I will always dive within my limits. But taking someone to 80 feet when they have not had the additional training seems dangerous, not to mention a bit of a liability.

    Both dive shops, though, were accommodating during the dives. This trip marked dives 18 through 23 for me, and I can feel myself getting more comfortable. But, as comfortable as it gets, it is never truly comfortable. There is an element of risk in every dive, and situational awareness is critical to keeping yourself and your dive buddy safe. I find myself becoming more aware with each dive, and with that awareness comes a greater appreciation for the sights of the reef.

    What did I see? Well, a ton of aquatic life, but the highlights have to be a 6 ft blacktip shark, a sea turtle, a couple large rays, and a few large Caribbean lobsters.

    Next trip?

    These dives brought my grand total to 23. Diving in Cozumel, I was sitting next to folks who are easily in the hundreds, but never once was I intimidated. I have been very fortunate: my dive groups have been nothing but helpful. I get helpful pointers on nearly every dive, and it has made me more comfortable in the water.

    The only question is, where to next?

  • Tech Tip – Fixing my local Git Repository

    Twice now, I have run into some odd corruption with a local Git repository. Since I had to look it up twice, a quick post is in order.

    Symptom

    When doing a fetch or pull, I would get an error like this:

    error: cannot lock ref 'refs/remotes/origin/some/branch-name': is at <hash> but expected <different hash>

    I honestly haven’t the slightest idea what is causing it, but it has happened twice now.

    The Fix

    A Google search led to a few issues in various places, but the one that worked was from the GitHub Desktop repository:

    git gc --prune=now
    rm -rf .git/refs/remotes/origin
    git fetch

    I never attempted this with staged or modified files, so, caveat emptor. My local branches were just fine afterward, but I would make sure you do not have any modified or staged files before trying this.

  • Nerd Humor

    Easter eggs in software are not a new thing. And I will always appreciate a good pop culture reference when I find it.

    As I was cycling my Kubernetes clusters, I had an issue with some resource contention. Things were not coming up as expected, so I started looking at the Rancher Kubernetes Engine 2 (RKE2) server logs. I ran across this gem:

    Apr 03 18:00:07 gk-internal-srv-03b rke2[747]: 2024/04/03 18:00:07 ERROR: [transport] Client received GoAway with error code ENHANCE_YOUR_CALM and debug data equal

    ENHANCE_YOUR_CALM is a real HTTP/2 error code (0xb in RFC 7540), sent when a peer is generating excessive load. While I cannot be certain of the developers’ intended reference, my mind immediately went to the Stallone/Snipes classic Demolition Man.

    You never know what you’ll find in software.

  • Centralized Authentication: My Hotel California

    When I left my previous role, I figured I would have some time before the idea of a centralized identity server popped back up. As the song goes, “You can check out any time you like, but you can never leave…”

    The Short Short Version

    This is going to sound like the start of a very bad “birds and bees” conversation…

    When software companies merge, the primary driver tends to be market expansion through additional functionality. In other words, Company A buys Company B because Company A wants to offer its customers functionality that Company B already provides. Rather than writing that functionality, you just buy the company.

    Usually, that also works in the inverse: customers of Company B might want some of the functionality from the products in Company A. And with that, the magical “cross sell” opportunity is born.

    Unfortunately, much as with human babies, the magic of this birth is tempered pretty quickly by what comes next. Mismatched technical stacks, inconsistent data models, poorly modularized software… the list goes on. Customers don’t want to input the same data twice (or three, four, even five times), nor do they want to configure different systems separately. The magic of “cross sell” is that, when it’s sold, it “just works.” But that’s almost never the case.

    Universal Authentication

    That said, there is one important question that all systems ask, and it becomes the first (and probably the largest) hurdle: WHO ARE YOU?

    When you start to talk about integrating different systems and services, the ability to determine universal authentication (who is trying to access your service) becomes the linchpin around which everything else can be built. But what’s “universal authentication”?

    Yea, I made that up. As I have looked at these systems, the directive is pretty simple: Universal Authentication means that every system looks to one service that provides the same user ID for the same user.

    Now… that seems a bit easy. But there is an important point here: I’m ONLY talking about authentication, not authorization. Authentication (who are you) is different from authorization (what are you allowed to do). Aligning on authentication should be simpler (should), and it provides for a long term transition to alignment on authorization.

    Why just Authentication?

    If there is a central authentication service, then all applications can look to that service to authenticate users. They can send their users (or services) to this central service and trust that it will authenticate the user and provide a token with which that user can operate the system.

    If other systems use the same service, they too can look to the central service. In most cases, if you as a user are already logged in to this service, it will just redirect you back, with a new token in hand for the new application. This leads to a streamlined user experience.

    You make it sound easy…

    It’s not. There is a reason why Authentication as a Service (AaaS) platforms are so popular and so expensive.

    • They are the most attacked services out there. Get into one of these, and you have carte blanche over the system.
    • They are the most important in terms of uptime and disaster recovery. If AaaS is down, everyone is down.
    • Any non-trivial system will throw requirements in for interfacing with external IDPs, managing tenants (customers) as groups, and a host of other functional needs.

    And yet, here I am, having some of these same discussions again. Unfortunately, there is no one magic bullet, so if you came here looking for me to enlighten you… I apologize.

    What I will tell you is that the discussions I have been a part of generally have the same basic themes:

    • Build/Buy: The age-old question. Generally, authentication is not something I would suggest you build yourself, unless it is your core competency and your business model. If you build, you will end up spending a lot of time and effort “keeping up with the Joneses”: adding new features based on customer requests.
    • Self-Host/AaaS: Remember what I said earlier: attack vectors and SLAs are difficult, as this is the most-attacked and most-used service you will own. There is also a question of liability. If you self-host, you are liable, but liability for attacks on an AaaS product varies.
    • Functionality: Tenants, SCIM, External IDPs, social logins… all discussions that could consume an entire post. Evaluate what you would like and how you can get there without a diamond-encrusted implementation.

    My Advice

    Tread carefully: wading into the waters of central authentication can be rewarding, but it is fraught with all the dangers of any sizable body of water.

  • Tech Tip – Formatting External Secrets in Helm

    This has tripped me up a lot, so I figure it is worth a quick note.

    The Problem

    I use Helm charts to define the state of my cluster in a Git repository, and ArgoCD to deploy those charts. This allows a lot of flexibility in my deployments and configuration.

    For secrets management, I use External Secrets to populate secrets from HashiCorp Vault. In many cases, I need to use the templating functionality of External Secrets to build secrets that can be used by external charts. A great case of this is populating user secrets for the RabbitMQ chart.

    In the link above, you will notice the templates/default-user-secrets.yaml file. This file generates a Kubernetes Secret resource that is then referenced by the RabbitMqCluster resource (templates/cluster.yaml). The secret is mounted as a file and therefore needs some custom formatting, so I used the template property to format the secret:

    template:
      type: Opaque
      engineVersion: v2
      data:
        default_user.conf: |
            default_user={{ `{{ .username  }}` }}
            default_pass={{ `{{ .password  }}` }}
        host: {{ .Release.Name }}.rabbitmq.svc
        password: {{`"{{ .password }}"`}}
        port: "5672"
        provider: rabbitmq
        type: rabbitmq
        username: {{`"{{ .username }}"`}}

    Notice in the code above the duplicated {{ and }} around the username/password values. These are necessary to ensure that the template is properly set in the ExternalSecret resource.
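    To make that concrete, here is roughly what the block above renders to after Helm’s pass, assuming a hypothetical release name of my-release. The remaining {{ … }} placeholders are left intact for the External Secrets template engine to resolve:

    ```yaml
    # Rendered ExternalSecret template (Helm pass complete, release "my-release");
    # the surviving {{ ... }} placeholders belong to External Secrets, not Helm.
    template:
      type: Opaque
      engineVersion: v2
      data:
        default_user.conf: |
            default_user={{ .username }}
            default_pass={{ .password }}
        host: my-release.rabbitmq.svc
        password: "{{ .password }}"
        port: "5672"
        provider: rabbitmq
        type: rabbitmq
        username: "{{ .username }}"
    ```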

    But, Why?

    It has to do with templating. Helm uses golang templates to process the templates and create resources. Similarly, the ExternalSecrets template engine uses golang templates. When you have a “template in a template”, you have to somehow tell the processor to put the literal value in.

    Let’s look at one part of this file.

      default_user={{ `{{ .username  }}` }}

    What we want to end up in the ExternalSecret template is this:

    default_user={{ .username  }}

    So, in order to do that, we have to tell the Helm template to write {{ .username }} as written, not process it as a golang template. In this case, we wrap the literal in backticks (`), which makes Helm emit the contents as-is without the backticks themselves being written to the template. Notice that other areas use the double quote (") to wrap the template.

    password: {{`"{{ .password }}"`}}

    This will generate the quotes in the resulting template:

    password: "{{ .password }}"

    If you need a single quote, use the same pattern, but replace the double quote with a single quote (').

    username: {{`'{{ .username }}'`}}

    For whatever it is worth, VS Code’s YAML parser did not like that version at all. Since I have not run into a situation where I need a single quote, I use double quotes if quotes are required, and backticks if they are not.

  • Printing for Printing’s sake

    I have spent a good portion of the last two weeks just getting my Bambu P1S setup to where I want it to be. I believe I’m closing in on a good setup.

    I thought you were printing in 15 minutes?

    I was! Out of the box, the P1S is great, and lets me start printing without a ton of calibration and tinkering. As I continue to print, however, I wanted some additional features that needed some new parts.

    External Spool

    By default, the P1S has an external spool mount on the rear of the device. This allows the spool a nice path to feed the printer. However, if you have an AMS (which I do), you need to hook up the tube from the filament buffer to the printer.

    There are several versions of a Y connector that allow for multiple paths into the printer. I chose the Smoothy on a recommendation from Justin. There is a mount available that lets you attach the Smoothy to an existing screw hole in the P1S, or use magnets.

    Since I have the P1S close to a wall, I wanted to relocate the external spool to the side of the machine. This gives me easier access to the spool and allows the printer to sit closer to the wall. For that, I printed this external spool support.

    For assembly, I couldn’t print everything. I needed some M3 screws and a set of fittings. For the Smoothy and the external mount, I used all four of the 10mm fittings (three for the Smoothy, one for the external mount). Everything went together pretty easily, and my setup now allows me to print from either the AMS or an external spool.

    Raise it up!

    Bambu recommends printing with the top vented for PLA. That is difficult to do when the AMS is sitting on top of it. Thankfully, there are a LOT of available risers, with many different features. After a good deal of searching, I found one that I like.

    Why that one? A few of the more popular risers are heavy (like, almost 3kg of filament heavy). And yes, they have drawers and places for storage, but ultimately, I just wanted a riser that held the glass top higher (so it could vent) and had a rim for the LED strip I purchased to illuminate the build plate better. My choice fits those requirements and comes in at under 500 grams.

    Final Product

    With all the prints completed, I am very happy with the results.

    AMS with Riser
    Side Spool Holder with Bowden Tubing

    Parts List

    Here’s everything I used for the setup: