Author: Matt

  • Tech Tip – Fixing my local Git Repository

    Twice now, I have run into some odd corruption with a local Git repository. Since I had to look it up twice, a quick post is in order.

    Symptom

    When doing a fetch or pull, I would get an error like this:

    error: cannot lock ref 'refs/remotes/origin/some/branch-name': is at <hash> but expected <different hash>

    I honestly haven’t the slightest idea what is causing it, but it has happened twice now.

    The Fix

A Google search led to a few issues in various places, but the one that worked was from the GitHub Desktop repository:

    git gc --prune=now
rm -rf .git/refs/remotes/origin
    git fetch

I never attempted this with staged or modified files, so caveat emptor. My local branches were just fine afterward, but I would make sure you do not have any modified or staged files before trying this.
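Since I have needed this twice now, a guarded version of the fix is worth sketching. This is my own wrapper (the function name `repair_refs` is made up), and it refuses to run unless the working tree is clean:

```shell
#!/bin/sh
# Sketch: the fix above, wrapped in a clean-working-tree check.
# repair_refs is a made-up name; run it from inside the broken repository.
repair_refs() {
  # `git status --porcelain` prints one line per modified, staged, or
  # untracked file; any output at all means the tree is not clean.
  if [ -n "$(git status --porcelain)" ]; then
    echo "refusing: uncommitted or staged changes present" >&2
    return 1
  fi
  git gc --prune=now
  rm -rf .git/refs/remotes/origin
  git fetch
}
```

With a clean tree, the three original commands run; otherwise it bails out before touching anything.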

  • Nerd Humor

    Easter eggs in software are not a new thing. And I will always appreciate a good pop culture reference when I find it.

    As I was cycling my Kubernetes clusters, I had an issue with some resource contention. Things were not coming up as expected, so I started looking at the Rancher Kubernetes Engine 2 (RKE2) server logs. I ran across this gem:

    Apr 03 18:00:07 gk-internal-srv-03b rke2[747]: 2024/04/03 18:00:07 ERROR: [transport] Client received GoAway with error code ENHANCE_YOUR_CALM and debug data equal

    While I cannot be certain of the developer’s own reference, my mind immediately went to the Stallone/Snipes classic Demolition Man.

    You never know what you’ll find in software.

  • Centralized Authentication: My Hotel California

When I left my previous role, I figured I would have some time before the idea of a centralized identity server popped back up. As the song goes, “You can check out any time you like, but you can never leave…”

    The Short Short Version

This is going to sound like the start of a very bad “birds and bees” conversation…

When software companies merge, the primary driver tends to be market expansion through additional functionality. In other words, Company A buys Company B because Company A wants to offer its customers functionality that Company B already provides. Rather than writing that functionality, you just buy the company.

    Usually, that also works in the inverse: customers of Company B might want some of the functionality from the products in Company A. And with that, the magical “cross sell” opportunity is born.

Unfortunately, much like with human babies, the magic of this birth is tempered pretty quickly by what comes next. Mismatched technical stacks, inconsistent data models, poorly modularized software… the list goes on. Customers don’t want to input the same data twice (or three, four, even five times), nor do they want to configure different systems. The magic of “cross sell” is that, when it’s sold, it “just works.” But that’s almost never the case.

    Universal Authentication

That said, there is one important question that all systems ask, and it becomes the first (and probably largest) hurdle: WHO ARE YOU?

    When you start to talk about integrating different systems and services, the ability to determine universal authentication (who is trying to access your service) becomes the linchpin around which everything else can be built. But what’s “universal authentication”?

Yeah, I made that up. As I have looked at these systems, the idea is pretty simple: Universal Authentication means everyone looks to a single system that provides the same user ID for the same user.

Now, that seems a bit easy. But there is an important point here: I’m ONLY talking about authentication, not authorization. Authentication (who are you) is different from authorization (what are you allowed to do). Aligning on authentication should be simpler (should), and it paves the way for a long-term transition to alignment on authorization.

    Why just Authentication?

    If there is a central authentication service, then all applications can look to that service to authenticate users. They can send their users (or services) to this central service and trust that it will authenticate the user and provide a token with which that user can operate the system.

If other systems use the same service, they too can look to the central service. In most cases, if you as a user are already logged in to this service, it will just redirect you back, with a new token in hand for the new application. This leads to a streamlined user experience.
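For a concrete (if simplified) picture of what that token can look like: many central services issue JWTs, and the payload carries the stable user ID in the `sub` (subject) claim. The token payload below is fabricated and unsigned, purely for illustration:

```shell
# Illustration only: a made-up JWT payload segment. Real tokens are signed
# and carry header and signature segments alongside the payload.
payload='eyJzdWIiOiJ1c2VyLTQyIn0='   # base64 of {"sub":"user-42"}
echo "$payload" | base64 -d
```

Every application that trusts the issuer sees the same `sub` value for the same user, which is exactly the “same user ID for the same user” property described above.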

    You make it sound easy…

    It’s not. There is a reason why Authentication as a Service (AaaS) platforms are popular and so expensive.

    • They are the most attacked services out there. Get into one of these, and you have carte blanche over the system.
    • They are the most important in terms of uptime and disaster recovery. If AaaS is down, everyone is down.
• Any non-trivial system will add requirements for interfacing with external IDPs, managing tenants (customers) as groups, and a host of other functional needs.

And yet, here I am, having some of these same discussions again. Unfortunately, there is no magic bullet, so if you came here looking for me to enlighten you… I apologize.

    What I will tell you is that the discussions I have been a part of generally have the same basic themes:

• Build/Buy: The age-old question. Generally, authentication is not something I would suggest you build yourself, unless that is your core competency and your business model. If you build, you will end up spending a lot of time and effort “keeping up with the Joneses”: adding new features based on customer requests.
• Self-Host/AaaS: Remember what I said earlier: attack vectors and SLAs are difficult, as this is the most-attacked and most-used service you will own. There is also a question of liability. If you host, you are liable, but liability for attacks on an AaaS product varies.
    • Functionality: Tenants, SCIM, External IDPs, social logins… all discussions that could consume an entire post. Evaluate what you would like and how you can get there without a diamond-encrusted implementation.

    My Advice

    Tread carefully: wading into the waters of central authentication can be rewarding, but fraught with all the dangers of any sizable body of water.

  • Tech Tip – Formatting External Secrets in Helm

    This has tripped me up a lot, so I figure it is worth a quick note.

    The Problem

    I use Helm charts to define the state of my cluster in a Git repository, and ArgoCD to deploy those charts. This allows a lot of flexibility in my deployments and configuration.

For secrets management, I use External Secrets to populate secrets from HashiCorp Vault. In many of those cases, I need the templating functionality of External Secrets to build secrets that can be used by external charts. A great example is populating user secrets for the RabbitMQ chart.

In the link above, you will notice the templates/default-user-secrets.yaml file. This file generates a Kubernetes Secret resource which is then consumed by the RabbitmqCluster resource (templates/cluster.yaml). This secret is mounted as a file and therefore needs some custom formatting, so I used the template property to format the secret:

    template:
      type: Opaque
      engineVersion: v2
      data:
        default_user.conf: |
            default_user={{ `{{ .username  }}` }}
            default_pass={{ `{{ .password  }}` }}
        host: {{ .Release.Name }}.rabbitmq.svc
        password: {{`"{{ .password }}"`}}
        port: "5672"
        provider: rabbitmq
        type: rabbitmq
        username: {{`"{{ .username }}"`}}

    Notice in the code above the duplicated {{ and }} around the username/password values. These are necessary to ensure that the template is properly set in the ExternalSecret resource.
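To make the escaping concrete, here is roughly what Helm renders from the block above for a hypothetical release named rabbitmq. The inner {{ … }} pairs survive, ready for the External Secrets engine:

```yaml
# Sketch of the Helm output, assuming a release named "rabbitmq".
template:
  type: Opaque
  engineVersion: v2
  data:
    default_user.conf: |
        default_user={{ .username  }}
        default_pass={{ .password  }}
    host: rabbitmq.rabbitmq.svc
    password: "{{ .password }}"
    port: "5672"
    provider: rabbitmq
    type: rabbitmq
    username: "{{ .username }}"
```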

    But, Why?

It has to do with templating. Helm uses golang templates to process the templates and create resources. Similarly, the External Secrets template engine uses golang templates. When you have a “template in a template,” you have to somehow tell the outer processor to put the literal value in.

    Let’s look at one part of this file.

      default_user={{ `{{ .username  }}` }}

    What we want to end up in the ExternalSecret template is this:

    default_user={{ .username  }}

So, in order to do that, we have to tell the Helm template to write {{ .username }} as written, not process it as a golang template. In this case, we use the backtick (`) to create this escape; the backticks themselves are not written to the output. Notice that other areas use the double quote (") to wrap the template.

    password: {{`"{{ .password }}"`}}

    This will generate the quotes in the resulting template:

    password: "{{ .password }}"

If you need a single quote, use the same pattern, but replace the double quote with a single quote (').

    username: {{`'{{ .username }}'`}}

    For whatever it is worth, VS Code’s YAML parser did not like that version at all. Since I have not run into a situation where I need a single quote, I use double quotes if quotes are required, and backticks if they are not.

  • Printing for Printing’s sake

I have spent a good portion of the last two weeks just getting my Bambu P1S set up the way I want it. I believe I’m closing in on a good setup.

    I thought you were printing in 15 minutes?

    I was! Out of the box, the P1S is great, and lets me start printing without a ton of calibration and tinkering. As I continue to print, however, I wanted some additional features that needed some new parts.

    External Spool

    By default, the P1S has an external spool mount on the rear of the device. This allows the spool a nice path to feed the printer. However, if you have an AMS (which I do), you need to hook up the tube from the filament buffer to the printer.

    There are several versions of a Y connector which allows for multiple paths into the printer. I chose the Smoothy on a recommendation from Justin. There is a mount available that lets you attach the Smoothy to an existing screw hole in the P1S, or use magnets.

Since I have the P1S close to a wall, I wanted to relocate the external spool to the side of the machine. This gives me easier access to the spool and allows the printer to sit closer to the wall. For that, I printed this external spool support.

For assembly, I couldn’t print everything; I needed some M3 screws and a set of fittings. For the Smoothy and the external mount, I used all four of the 10mm fittings (three for the Smoothy, one for the external mount). Everything went together pretty easily, and my setup now allows me to print from either the AMS or an external spool.

    Raise it up!

    Bambu recommends printing with the top vented for PLA. That is difficult to do when the AMS is sitting on top of it. Thankfully, there are a LOT of available risers, with many different features. After a good deal of searching, I found one that I like.

    Why that one? A few of the more popular risers are heavy (like, almost 3kg of filament heavy). And yes, they have drawers and places for storage, but ultimately, I just wanted a riser that held the glass top higher (so it could vent) and had a rim for the LED strip I purchased to illuminate the build plate better. My choice fits those requirements and comes in at under 500 grams.

    Final Product

    With all the prints completed, I am very happy with the results.

    AMS with Riser
    Side Spool Holder with Bowden Tubing

    Parts List

    Here’s everything I used for the setup:

• Using Git Hooks on heterogeneous repositories

    I have had great luck with using git hooks to perform tool executions before commits or pushes. Running a linter on staged changes before the code is committed and verifying that tests run before the code is pushed makes it easier for developers to write clean code.

Doing this with heterogeneous repositories, or repos which contain projects of different tech stacks, can be a bit daunting. The tools you want for one project aren’t the tools you want for another.

    How to “Hook”?

    Hooks can be created directly in your repository following Git’s instructions. However, these scripts are seldom cross-OS compatible, so running your script will need some “help” in terms of compatibility. Additionally, the scripts themselves can be harder to find depending on your environment. VS Code, for example, hides the .git folder by default.

    Having used NPM in the past, Husky has always been at the forefront of my mind when it comes to tooling around Git hooks. It helps by providing some cross-platform compatibility and easier visibility, as all scripts are in the .husky folder in your repository. However, it requires some things that a pure .Net developer may not have (like NPM or some other package manager).

In my current position, though, our front ends rely on either Angular or React Native, so the chance that our developers have NPM installed is 100%. With that in mind, I put some automated linting and building into our projects.

    Linting Different Projects

    For this article, assume I have a repository with the following outline:

    • docs/
      • General Markdown documentation
• source/
      • frontend/ – .Net API project which hosts my SPA
      • ui/ – The SPA project (in my case, Angular)

    I like lint-staged as a tool to execute linting on staged files. Why only staged files? Generally, large projects are going to have a legacy of files with formatting issues. Going all in and formatting everything all at once may not be possible. But if you format as you make changes, eventually most everything should be formatted well.

With the outline above, I want different tools to run based on which files need linting. For source/frontend, I want to use dotnet format, but for source/ui, I want to use ESLint and Prettier.

With lint-staged, you can configure individual folders using a configuration file. I was able to add a .lintstagedrc file in each folder and specify the appropriate linter for that folder. For the .Net project:

    {
        "*.cs": "dotnet format --include"
    }

    And for the Angular project:

    {
        "*": ["prettier", "eslint --fix"]
    }

Also, since I do have some documentation files, I added a .lintstagedrc file at the repository root to run prettier on all my Markdown files.

    {
        "*.md": "prettier"
    }

    A Note on Settings

    Each linter has its own settings, so follow the instructions for whatever linter you may be running. Yes, I know, for the .Net project, I’m only running it on *.cs files. This may change in the future, but as of right now, I’m just getting to know what dotnet format does and how much I want to use it.

    Setting Up the Hooks

    The hooks are, in fact, very easy to configure: follow the instructions on getting started from Husky. The configured hooks for pre-commit and pre-push are below, respectively:

    npx lint-staged --relative
    dotnet build source/mySolution.sln

    The pre-commit hook utilizes lint-staged to execute the appropriate linter. The pre-push hook simply runs a build of the solution which, because of Microsoft’s .esproj project type, means I get an NPM build and a .Net build in the same step.
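For reference, wiring those two hooks by hand looks roughly like this. It assumes Husky and lint-staged are already installed (npm install --save-dev husky lint-staged followed by npx husky init), and mySolution.sln is just the example name from above:

```shell
# Sketch: write the pre-commit and pre-push hook scripts Husky runs.
# Assumes `npx husky init` has already pointed Git's hooksPath at .husky.
mkdir -p .husky
printf 'npx lint-staged --relative\n' > .husky/pre-commit
printf 'dotnet build source/mySolution.sln\n' > .husky/pre-push
```

From then on, every commit runs lint-staged against the staged files, and every push runs the solution build.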

    What’s next?

    I will be updating the pre-push hook to include testing for both the Angular app and the .Net API. The goal is to provide our teams with a template to write their own tests, and have those be executed before they push their code. This level of automation will help our engineers produce cleaner code from the start, alleviating the need for massive cleanup efforts down the line.

  • Spoolman for Filament Management

“You don’t know what you’ve got ’til it’s gone” is a great song line, but a terrible inventory management approach. As I start to stock up on filament for the 3D printer, it occurred to me that I need a way to track my inventory.

    The Community Comes Through

    I searched around for different filament management solutions and landed on Spoolman. It seemed a pretty solid fit for what I needed. The owner also configured builds for container images, so it was fairly easy to configure a custom chart to run an instance on my internal tools cluster.

    The client UI is pretty easy to use, and the ability to add extra fields to the different modules makes the solution very extensible. I was immediately impressed and started entering information about vendors, filaments, and spools.

    Enhancing the Solution

Since I am using a Bambu Lab printer and Bambu Studio, I do not have the ability to integrate the printer into Spoolman to report filament usage. I searched around, but it does not seem that Bambu reports such usage.

My current plan for managing filament is to weigh the spool when I open it, and then weigh it again after each use. The difference is the amount of filament I have used. But to calculate the amount remaining, I need to know the weight of an empty spool. Assuming most manufacturers use the same spools, that shouldn’t be too hard to figure out long term.
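The arithmetic itself is simple; a quick sketch with made-up gram values:

```shell
# Sketch of the spool math; all weights in grams, example values made up.
# used this print = gross weight before - gross weight after
# remaining       = current gross weight - empty spool (core) weight
remaining_filament() { echo $(( $1 - $2 )); }
remaining_filament 1180 250   # 1180 g gross on a 250 g core -> prints 930
```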

Spoolman is not quite set up for that type of usage. Weight and spool weight are set at the filament level and cannot be overridden at the spool level. Most spools will not hold exactly 1000g of filament, so the ability to track initial weight at the spool level is critical. Additionally, I want to support partial spools, including re-spooling.

So, using all the Python I have learned recently, I took a crack at updating the API and UI to support this very scenario. In a “do no harm” spirit, I made sure that all the integration tests were running correctly, then went about adding the new fields and some of the new default functionality. After I had the updated functionality in place, I added a few new integration tests to verify my work.

Oddly, as I started working on it, I found four feature requests that were related to the changes I was suggesting. It took me a few nights, but I generated a pull request for the changes.

    And Now, We Wait…

    With my PR in place, I wait. The beauty of open source is that anyone can contribute, but the owners have the final say. This also means the owners need to respond, and most owners aren’t doing this as a full time job. So sometimes, there isn’t anything to do but wait.

    I’m hopeful that my changes will be accepted, but for now, I’m using Spoolman as-is, and just doing some of the “math” myself. It is definitely helping me keep track of my filament, and I’m keeping an eye on possible integrations with the Bambu ecosystem.

  • A new To(y)ol

I have wanted to pick up a Bambu Lab P1S for a while now. I saved up enough to finally pull the trigger, and after a few days of use, I could not be more pleased with my decision.

    Why Bambu?

    There are literally dozens of 3D printers out there, and choosing can be difficult. I certainly spent a great deal of time mulling over the various options. But, as with most things, the best advice was from people who use them. Luckily, I have not one, but two resources at my disposal.

An old colleague of mine, Justin, is usually the first to try such things. I typically joke that I do the same things Justin does, just lagging behind in both time and scale; that is, Justin does it bigger and better. He and I chat frequently, and his input was invaluable. With regard to 3D printers, the one comment he made still resonates:

    I want to design stuff and print it, not tinker endlessly with the printer itself.

    Justin had an Ender (I do not recall the model) for a bit, but never really messed with it too much. After he picked up a P1P, the design and printing started to flow. He had nothing but good things to say about most things Bambu, save, perhaps, the community… We’ll get to that in a minute.

    Additionally, I discussed different printers with the proprietor of Pittsburgh3DPrints.com. He has a number of different brands, and services them all, and he recommended the Bambu as a great first printer, for many of the same reasons. What reasons?

    1. Ease of Use – From “cut box” to print was honestly 15 minutes. The P1S is extremely user-friendly in terms of getting to printing.
    2. Feature Packed – Sure, the price tag for the full P1S combo is a little higher than most printers. But you get, out of box, the ability to print most filaments, including PLA/PETG/ABS/ASA, as well as 4 color multi-color prints.
    3. Expandable – Additional AMS units get you up to 16 color prints.
4. Ecosystem – Bambu has been really trying to get makerworld.com going, and they have had some success. The makerworld tie-in to Bambu Studio makes printing others’ designs quick and easy.

    First Impressions

    As I mentioned above, unboxing was easy with the included Quick Setup guide and their unboxing video. The first thing I printed was the included model for the scraper handle and holder. I do not recall the exact print time, but it was under an hour, and I had my first print!

    The next two prints were some fidget toys for my kids. I can tell you these took 48 minutes each, as both kids were anxiously waiting every single one of those minutes. One feature the P1S has is the ability to capture time lapse videos for your prints. Here is the one for one of the fidget rings.

Now, I do laugh, because the running joke among those who own a 3D printer is that you spend most of your time printing stuff for the printer, which is highly meta and also seems like a gimmick to buy more filament. The P1S ejects waste out of the back into, well, thin air. Many folks have designed various waste collection solutions, most affectionately known as “poop chutes.” I found one that I liked and set about slicing it for printing.

    Oops…

This is where I ran into a little issue. I tried to start one of the prints (for the bucket) from the Bambu app. Instead of slicing on my PC and sending it to the printer, I used the published profile. That, well, failed. For whatever reason, the bed temperature was set to 35°C, instead of the 55°C that the Bambu Studio slicer sets.

I’m not sure if the profile had a cool bed setting or what, but it threw off the adhesion of the print to the bed, and I ended up with a large mess. I restarted the print from Bambu Studio and had no problems. Printing the three pieces of the chute took about 9 hours, which represents my longest print to date.

    Next up?

    I have a few things on my list to print. Most center around some organizational aspects of my office, and looking to use the Gridfinity system to make things neat. My wife asked for a few bases for some signage that she uses for work. This requires some design on my part, so it is a nice challenge for me.

    Both my son and one of our neighbors have expressed some interest in design and printing, so I look forward to passing on some of what I’ve learned to new designers looking to bring their designs to reality.

  • Foyer Upgrade

    Not everything I do is “nerdy.” I enjoy making things with my hands, and my wife has an eye for design and a sadistic love of painting. Combined, we like to spend some time redesigning rooms around the house. We save a ton doing the work ourselves, and for me it is a great break from the keyboard.

    The Idea

    My wife has been eyeing up our foyer for quite some time. Our foyer is a two story entry featuring stairs leading to our second floor, doors to each of our office spaces, and a hallway back to our kitchen and living room area. Off of the foyer is a half bath.

    Foyer View from Front Door

    As part of our kitchen renovation a few years ago, we replaced the vinyl flooring with LVT, and had no plans to change that. However, we wanted to open up the stairway with a different railing and add some applied moldings to the walls.

    The Plan

Timing on this was a little odd. I was well aware of the work involved, and we did not want to have a construction zone around Christmas, so we split the work into two phases.

    Phase 1 was removing the half-wall on the steps and replacing it with a newel post and railing. It also included removing the existing carpet, painting the stairs, and installing carpet treads.

    Phase 2 would be the installation of the applied moldings, some new light fixtures, and a lot of painting.

    Phase 1

    Demo took about a day. We ripped out the old carpet, cut the half wall down to the existing riser, and removed the quarter round and baseboards. We also removed the old air return grates, as we ordered some new ones and would not be re-using the old ones.

    The installation of the newel and banister was not incredibly difficult, but took a while to ensure everything was accurately cut and secured. I built out a good deal of blocking at the end of the stairs to ensure the newel was solidly anchored.

After everything was installed, there were a few days of drywall patching to get everything back to an acceptable state. The steps took a few coats of floor paint and a new set of stair treads. The treads are secured with adhesive but easily replaceable, which I am sure will be helpful in the future.

    All in all, Phase 1 took about 8 weeks. Mind you, this is 8 weeks of work some evenings and weekends, working around sports and social calendars for the kids and ourselves. If you were working on this full time, you could probably get it done in 5-8 working days. We finished Phase 1 just in time for the new banister to get some holiday decorations!

    Phase 1 Complete

    Phase 2

Phase 2 was, for me, the more dreaded of the two. The plan was to take applied moldings all the way to the top of the half wall in the loft and create a framed look with the moldings. This required 1″x4″ boards (we used pre-primed MDF) and 11/16″ concave molding, and a lot of it.

    Before we installed the molding, we hired a painter to come in and put a fresh coat on the ceiling and the upper walls of the foyer. While she loves to paint, getting up on a ladder to hit the 18′ ceilings and cut in to the wall did not appeal to my wife, and I agreed.

    The process for the 1″x4″ boards was pretty simple: frame each wall, running horizontal boards the length of the wall, then vertical boards on each end. In between, run boards horizontally to divide the wall into three, then add vertical boards to complete the rectangles. Around the door frames, we used 1″x4″ boards to build out the existing molding, giving the doors a larger look.

After the 1″x4″ installation, I went back through and installed the 11/16″ concave molding inside each square, creating a bit of a picture frame look. This was, well, a lot of cutting and placing. In particular, the triangles on the walls along the steps were challenging because of the angles that needed to be cut. I did more geometry in the last few months than I have in a while.

    After everything was buttoned up, my wife took on the task of painting all the installed trim. All it took was time, and a lot of it: I think that task alone took her about 4 days. With the painting complete, I installed the last of the quarter round and she added some design touches.

    The last install was to rent a 16′ step ladder to change out the chandelier. I had done this once before, and was not looking forward to doing it again. The combination of wrestling the ladder into the foyer, setting it up, and then climbing up and down a few dozen times makes the process somewhat cumbersome.

    Phase 2 took about 9-10 weeks, with the same caveat that it was not full time work. Full time, you are probably looking at a good 8-10 business days. But the end result was well worth it.

    After Pictures

    Overall, we are happy with the results. The style matches my wife’s office nicely, and transitions well into the kitchen and living area.

  • Cleaning out the cupboard

    I have been spending a little time in my server cabinet downstairs, trying to organize some things. I took what I thought would be a quick step in consolidation. It was not as quick as I had hoped.

PoE Troubles

When I got into the cabinet, I realized I had 3 PoE injectors in there, powering my three UniFi access points. Two of them are UAP-AC-LRs, and the third is a UAP-AC-M. My desire was simple: replace 3 PoE injectors with a 5-port PoE switch.

    So, I did what I thought would be a pretty simple process:

    1. Order the switch
    2. Using the MAC, assign it a static IP in my current Unifi Gateway DHCP.
    3. Plug in the switch.
4. Take the cable coming out of the PoE injector and plug it into the switch.

And that SHOULD be it: the devices boot up and I remove the PoE injectors. And, for two of the three devices, it worked fine.

    There’s always one

One of the UAP-AC-LR endpoints simply would not turn on. I thought maybe it was the cable, so I swapped out the cables, but nothing changed: one UAP-AC-LR and the UAP-AC-M worked, but the other UAP-AC-LR did not.

I consulted the Oracle and came to realize that I had an old UAP-AC-LR, which only supports 24V passive PoE, not the 48V standard that my switch supplies. Evidently, the newer UAP-AC-LR and the UAP-AC-M support 802.3at (or at least a legacy protocol at 48V), but my oldest UAP-AC-LR simply doesn’t turn on.

    The Choice

    There are two solutions, one more expensive than another:

1. Find an indoor PoE converter (INS-3AF-I-G) that can convert the 48V coming from my new switch to the 24V that the device needs.
    2. Upgrade! Buy a U6 Pro to replace my old long range access point.

I like the latter, as it would give me WiFi 6 support and start my upgrades in that area. However, I’m not ready for the price tag at the moment. I was able to find the converter for about $25, including shipping and tax, so I opted for the more economical route to get rid of that last PoE injector.