• Isolating your Azure Functions

    I spent a good bit of time over the last two weeks converting our Azure functions from the in-process to the isolated worker process model. Overall the transition was fairly simple, but there were a few bumps in the proverbial road worth noting.

    Migration Process

    Microsoft Learn has a very detailed How To Guide for this migration. The guide includes steps for updating the project file and references, as well as additional packages that are required based on various trigger types.
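
    For context, the heart of those project-file changes (as I understand the guide; the target framework and package versions will vary by project, so check NuGet for current ones) looks roughly like this:

    <PropertyGroup>
      <TargetFramework>net8.0</TargetFramework>
      <AzureFunctionsVersion>v4</AzureFunctionsVersion>
      <OutputType>Exe</OutputType>
    </PropertyGroup>
    <ItemGroup>
      <!-- Replaces the old Microsoft.NET.Sdk.Functions reference from the in-process model -->
      <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.*" />
      <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.*" />
    </ItemGroup>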

    Since I had a number of functions to process, I followed the guide for the first one, and that worked swimmingly. However, then I got lazy and started the “copy-paste” conversion. In that laziness, I missed a particular section of the project file:

    <ItemGroup>
      <Using Include="System.Threading.ExecutionContext" Alias="ExecutionContext"/>
    </ItemGroup>

    Unfortunately, forgetting this will not break your local development environment, so it is easy to miss. However, once you publish the function to Azure, it will not execute correctly.

    Fixing Dependency Injection

    When using the in-process model, there are some “freebies” that get added to the dependency injection system, as if by magic. ILogger, in particular, could be injected directly into the function as a method parameter. In the isolated model, however, you must get ILogger either from the FunctionContext or through dependency injection into the class.

    As part of our conversion, we removed the function parameters for ILogger and replaced them with service instances retrieved through dependency injection at the class level.
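
    For illustration only (this class and trigger are made up, not one of our actual functions), a converted isolated-model function ends up shaped something like this, with ILogger arriving through the constructor rather than as a method parameter:

    using System;
    using Microsoft.Azure.Functions.Worker;
    using Microsoft.Extensions.Logging;

    public class NightlyCleanup
    {
        private readonly ILogger<NightlyCleanup> _logger;

        // ILogger is now resolved through dependency injection at the class level
        public NightlyCleanup(ILogger<NightlyCleanup> logger)
        {
            _logger = logger;
        }

        [Function("NightlyCleanup")]
        public void Run([TimerTrigger("0 0 2 * * *")] TimerInfo timer)
        {
            _logger.LogInformation("NightlyCleanup ran at {Time}", DateTime.UtcNow);
        }
    }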

    What we did not realize until we got our functions into the test environments was that IHttpContextAccessor was not available in the isolated model. Apparently, that particular interface is available as part of the in-process model automatically, but is not added as part of the isolated model. So we had to add an instance of IHttpContextAccessor to our services collection in the Program.cs file.
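
    The registration itself is a one-liner. A minimal sketch of what that can look like in Program.cs (assuming the ASP.NET Core HTTP abstractions are referenced so that AddHttpContextAccessor is available) is below:

    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    var host = new HostBuilder()
        .ConfigureFunctionsWorkerDefaults()
        .ConfigureServices(services =>
        {
            // In-process added this for us; in the isolated model we register it ourselves
            services.AddHttpContextAccessor();
        })
        .Build();

    host.Run();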

    It is never easy

    Upgrades or migrations are never just “change this and go.” As much as we try to make them easy, there always seems to be a little change here or there that ends up being a fly in the ointment. In our case, we simply assumed that IHttpContextAccessor was there because in-process put it there, and the code that needed it was a few layers deep in the dependency tree. The only way to find it was to make the change and see what broke. And that is what keeps quality engineers up at night.

  • I appreciate feedback, but…

    I am really tired of deleting hundreds of spam comments every couple of days. While I have had a few posts generate some good feedback, generally, all I get is spam.

    It was not too bad until the last few months, when spam volume increased by an order of magnitude. I would rather not burn resources, even for a few days, on ridiculous incoming spam.

    So, while I really appreciate any feedback on my posts, you will have to find another channel through which to contact me. The management of spam comments far outweighs anything I have gained from the comments I have received.

  • Hitting my underwater stride

    It’s not always about tech. A recent trip to Cozumel has only strengthened my resolve to continue my underwater adventures.

    Hitting the Road

    Neither my wife nor I have ever been to Cozumel. Sure, we have been to Mexico a few times, including taking my kids to Cancun the past few summers. But, and I cannot quite stress this enough, Cozumel isn’t quite Mexico. This quiet little island situated about 12 miles off of the Mexican shores of Quintana Roo is a tourist mecca.

    We were able to get 5 nights away this time. Rather than dive four mornings, we took the opportunity to rent a Jeep and drive around the island. You can pretty much divide Cozumel into 4 parts:

    1. Town: San Miguel de Cozumel is the port city where multiple cruise ships can dock and offload their thousands of passengers. Plenty of shops, restaurants, beach clubs, and activities are available.
    2. Leeward beaches: The leeward beaches on the west side of the island, south of town, are either resorts or beach clubs which charge an admission fee. Most of the coast is rocky, but little wave action and coarse white sand make for a great beach day.
    3. Windward beaches: The east side of the island has significantly more wave action, with some beaches that offer a little more fine sand (more waves = finer sand). Still rocky, but more opportunity for water activities like kite surfing and surfing.
    4. Nature preserve: The north end of Cozumel is mostly natural preserve. There are some beach clubs and islands north of town, but we did not venture in that direction.

    The island caters to cruise ships. Certain activities, including the Mayan ruins, are only open when cruise ships are in port. “No cruise ships, no money” was a phrase I heard more than a few times. As we rented our Jeep on a day with no cruise ships, we missed out on some of those activities. We also missed out on the mass of humanity coming from those ships, so I was not terribly mad.

    If you venture to Cozumel, bring cash! Many places on the east side of the island are remote, with little cellular signal of any kind. Many places do not take credit cards, or charge a service fee for using cards. The west side is a little more tourist friendly, but it’s never a bad idea to have some cash. Most places seem to accept the US dollar, but pesos aren’t a terrible idea.

    Jump In!

    The Cozumel barrier reef is part of one of the largest reef systems on the planet. A quick glance in the luggage area at the airport will tell you it is a scuba diver’s destination. There are a ton of dive shops on the island, so many that I used two different ones for my three dive days.

    Nearly everyone does a two-tank dive, with prices ranging from $80 to $110 for each two-tank trip. Gear rental is available; I had to rent a BCD and regulator, which set me back about $25 USD a day.

    In 6 dives, we dove 6 unique spots. Both dive shops did a “deep/shallow” dive, with deep dives being wall dives that range from 75-85 feet, and shallow dives in the 50-65 foot range. One thing that caught my attention was the lack of attention to certification levels.

    I got my PADI advanced open water certification last year so that I would be certified for depths up to 100 feet. PADI open water divers are only certified to 60 feet. By that standard, you need an advanced open water certification to dive the deeper walls. I’m fairly certain that some of the folks I dove with did not have that level of certification. Now, it is not my business: I will always dive within my limits. But taking someone to 80 feet when they have not had the additional training seems dangerous, not to mention a bit of a liability.

    Both dive shops, though, were accommodating during the dives. This trip marked dives 18 through 23 for me, and I can feel myself getting more comfortable. But as comfortable as it gets, it is never truly comfortable. There is an element of risk in every dive, and situational awareness is critical to keeping yourself and your dive buddy safe. I find myself becoming more aware with each dive, and with that awareness comes a greater appreciation for the sights of the reef.

    What did I see? Well, a ton of aquatic life, but the highlights have to be a 6 ft blacktip shark, a sea turtle, a couple large rays, and a few large Caribbean lobsters.

    Next trip?

    These dives brought my grand total to 23. Diving in Cozumel, I’m sitting next to folks who are easily in the hundreds, but never once was I intimidated. I have been very fortunate: my dive groups have been nothing but helpful. I get a helpful pointer on nearly every dive, and it has made me more comfortable in the water.

    The only question is, where to next?

  • Tech Tip – Fixing my local Git Repository

    Twice now, I have run into some odd corruption with a local Git repository. Since I had to look it up twice, a quick post is in order.

    Symptom

    When doing a fetch or pull, I would get an error like this:

    error: cannot lock ref 'refs/remotes/origin/some/branch-name': is at <hash> but expected <different hash>

    I honestly haven’t the slightest idea what is causing it, but it has happened twice now.

    The Fix

    A Google search led to a few issues in various places, but the one that worked was from the GitHub Desktop repository:

    git gc --prune=now
    rm -Rf .git/refs/remotes/origin
    git fetch

    I never attempted this with staged or modified files, so, caveat emptor. My local branches were just fine afterward, but I would make sure you do not have any modified or staged files before trying this.

  • Nerd Humor

    Easter eggs in software are not a new thing. And I will always appreciate a good pop culture reference when I find it.

    As I was cycling my Kubernetes clusters, I had an issue with some resource contention. Things were not coming up as expected, so I started looking at the Rancher Kubernetes Engine 2 (RKE2) server logs. I ran across this gem:

    Apr 03 18:00:07 gk-internal-srv-03b rke2[747]: 2024/04/03 18:00:07 ERROR: [transport] Client received GoAway with error code ENHANCE_YOUR_CALM and debug data equal

    While I cannot be certain of the developer’s own reference, my mind immediately went to the Stallone/Snipes classic Demolition Man.

    You never know what you’ll find in software.

  • Centralized Authentication: My Hotel California

    When I left my previous role, I figured I would have some time before the idea of a centralized identity server popped back up. As the song goes, “You can checkout any time you like, but you can never leave…”

    The Short Short Version

    This is going to sound like the start of a very bad “birds and bees” conversation…

    When software companies merge, the primary driver tends to be expanding the market through additional functionality. In other words, Company A buys Company B because Company A wants to offer its customers functionality that Company B already provides. Rather than writing that functionality, you just buy the company.

    Usually, that also works in the inverse: customers of Company B might want some of the functionality from the products in Company A. And with that, the magical “cross sell” opportunity is born.

    Unfortunately, much like with human babies, the magic of this birth is tempered pretty quickly by what comes next. Mismatched technical stacks, inconsistent data models, poorly modularized software… the list goes on. Customers don’t want to input the same data twice (or three, four, even five times), nor do they want to configure different systems. The magic of “cross sell” is that, when it’s sold, it “just works.” But that is almost never the case.

    Universal Authentication

    That said, there is one important question that all systems ask, and it becomes the first hurdle (and probably one of the largest): WHO ARE YOU?

    When you start to talk about integrating different systems and services, the ability to determine universal authentication (who is trying to access your service) becomes the linchpin around which everything else can be built. But what’s “universal authentication”?

    Yeah, I made that up. As I have looked at these systems, the definition is pretty simple: universal authentication means “everyone looks to a system which provides the same user ID for the same user.”

    Now… that seems a bit easy. But there is an important point here: I’m ONLY talking about authentication, not authorization. Authentication (who are you) is different from authorization (what are you allowed to do). Aligning on authentication should be simpler (should), and it provides for a long-term transition to alignment on authorization.

    Why just Authentication?

    If there is a central authentication service, then all applications can look to that service to authenticate users. They can send their users (or services) to this central service and trust that it will authenticate the user and provide a token with which that user can operate the system.

    If other systems use the same service, they too can look to the central service. In most cases, if you as a user are already logged in to this service, it will just redirect you back, with a new token in hand for the new application. This leads to a streamlined user experience.

    You make it sound easy…

    It’s not. There is a reason why Authentication as a Service (AaaS) platforms are popular and so expensive.

    • They are the most attacked services out there. Get into one of these, and you have carte blanche over the system.
    • They are the most important in terms of uptime and disaster recovery. If AaaS is down, everyone is down.
    • Any non-trivial system will add requirements for interfacing with external IDPs, managing tenants (customers) as groups, and a host of other functional needs.

    And yet, here I am, having some of these same discussions again. Unfortunately, there is no one magic bullet, so if you came here looking for me to enlighten you… I apologize.

    What I will tell you is that the discussions I have been a part of generally have the same basic themes:

    • Build/Buy: The age-old question. Generally, authentication is not something I would suggest you build yourself, unless that is your core competency and your business model. If you build, you will end up spending a lot of time and effort “keeping up with the Joneses”: adding new features based on customer requests.
    • Self-Host/AaaS: Remember what I said earlier: attack vectors and SLAs are difficult, as this is the most-attacked and most-used service you will own. There is also a question of liability. If you host, you are liable, but liability for attacks on an AaaS product varies.
    • Functionality: Tenants, SCIM, External IDPs, social logins… all discussions that could consume an entire post. Evaluate what you would like and how you can get there without a diamond-encrusted implementation.

    My Advice

    Tread carefully: wading into the waters of central authentication can be rewarding, but fraught with all the dangers of any sizable body of water.

  • Tech Tip – Formatting External Secrets in Helm

    This has tripped me up a lot, so I figure it is worth a quick note.

    The Problem

    I use Helm charts to define the state of my cluster in a Git repository, and ArgoCD to deploy those charts. This allows a lot of flexibility in my deployments and configuration.

    For secrets management, I use External Secrets to populate secrets from Hashicorp Vault. In many of those cases, I need to use the templating functionality of External Secrets to build secrets that can be used from external charts. A great case of this is populating user secrets for the RabbitMQ chart.

    In the link above, you will notice the templates/default-user-secrets.yaml file. This file is meant to generate a Kubernetes Secret resource which is then sent to the RabbitMqCluster resource (templates/cluster.yaml). This secret is mounted as a file, and therefore, needs some custom formatting. So I used the template property to format the secret:

    template:
      type: Opaque
      engineVersion: v2
      data:
        default_user.conf: |
            default_user={{ `{{ .username  }}` }}
            default_pass={{ `{{ .password  }}` }}
        host: {{ .Release.Name }}.rabbitmq.svc
        password: {{`"{{ .password }}"`}}
        port: "5672"
        provider: rabbitmq
        type: rabbitmq
        username: {{`"{{ .username }}"`}}

    Notice in the code above the duplicated {{ and }} around the username/password values. These are necessary to ensure that the template is properly set in the ExternalSecret resource.

    But, Why?

    It has to do with templating. Helm uses golang templates to process the templates and create resources. Similarly, the ExternalSecrets template engine uses golang templates. When you have a “template in a template”, you have to somehow tell the processor to put the literal value in.

    Let’s look at one part of this file.

      default_user={{ `{{ .username  }}` }}

    What we want to end up in the ExternalSecret template is this:

    default_user={{ .username  }}

    So, in order to do that, we have to tell the Helm template to write {{ .username }} as-is, rather than processing it as a golang template expression. In this case, we wrap the value in backticks (`), which Go treats as a raw string literal: the contents are emitted untouched, and the backticks themselves are not written to the template. Notice that other areas use the double-quote (“) to wrap the template.

    password: {{`"{{ .password }}"`}}

    This will generate the quotes in the resulting template:

    password: "{{ .password }}"

    If you need a single quote, use the same pattern, but replace the double quote with a single quote (‘).

    username: {{`'{{ .username }}'`}}

    For whatever it is worth, VS Code’s YAML parser did not like that version at all. Since I have not run into a situation where I need a single quote, I use double quotes if quotes are required, and backticks if they are not.

  • Printing for Printing’s sake

    I have spent a good portion of the last two weeks just getting my Bambu P1S setup to where I want it to be. I believe I’m closing in on a good setup.

    I thought you were printing in 15 minutes?

    I was! Out of the box, the P1S is great, and lets me start printing without a ton of calibration and tinkering. As I continued to print, however, I wanted some additional features that required some new parts.

    External Spool

    By default, the P1S has an external spool mount on the rear of the device. This allows the spool a nice path to feed the printer. However, if you have an AMS (which I do), you need to hook up the tube from the filament buffer to the printer.

    There are several versions of a Y connector which allows for multiple paths into the printer. I chose the Smoothy on a recommendation from Justin. There is a mount available that lets you attach the Smoothy to an existing screw hole in the P1S, or use magnets.

    Since I have the P1S close to a wall, I wanted to relocate the external spool to the side of the machine. This gives me easier access to the spool and allows the printer to sit closer to the wall. For that, I printed this external spool support.

    For assembly, I couldn’t print everything. I needed some M3 screws and a set of fittings. For the Smoothy and the external mount, I used all four of the 10mm fittings (three for the Smoothy, one for the external mount). Everything went together pretty easily, and my setup now allows me to print from either the AMS or an external spool.

    Raise it up!

    Bambu recommends printing with the top vented for PLA. That is difficult to do when the AMS is sitting on top of it. Thankfully, there are a LOT of available risers, with many different features. After a good deal of searching, I found one that I like.

    Why that one? A few of the more popular risers are heavy (like, almost 3kg of filament heavy). And yes, they have drawers and places for storage, but ultimately, I just wanted a riser that held the glass top higher (so it could vent) and had a rim for the LED strip I purchased to illuminate the build plate better. My choice fits those requirements and comes in at under 500 grams.

    Final Product

    With all the prints completed, I am very happy with the results.

    AMS with Riser
    Side Spool Holder with Bowden Tubing

    Parts List

    Here’s everything I used for the setup:

  • Using Git Hooks on heterogeneous repositories

    I have had great luck with using git hooks to perform tool executions before commits or pushes. Running a linter on staged changes before the code is committed and verifying that tests run before the code is pushed makes it easier for developers to write clean code.

    Doing this with heterogeneous repositories, or repos which contain projects of different tech stacks, can be a bit daunting. The tools you want for one repository aren’t the tools you want for another.

    How to “Hook”?

    Hooks can be created directly in your repository following Git’s instructions. However, these scripts are seldom cross-OS compatible, so running your script will need some “help” in terms of compatibility. Additionally, the scripts themselves can be harder to find depending on your environment. VS Code, for example, hides the .git folder by default.

    Having used NPM in the past, Husky has always been at the forefront of my mind when it comes to tooling around Git hooks. It helps by providing some cross-platform compatibility and easier visibility, as all scripts are in the .husky folder in your repository. However, it requires some things that a pure .Net developer may not have (like NPM or some other package manager).

    In my current position, though, our front ends rely on either Angular or React Native, so the chance that our developers have NPM installed is 100%. With that in mind, I put some automated linting and building into our projects.

    Linting Different Projects

    For this article, assume I have a repository with the following outline:

    • docs/
      • General Markdown documentation
    • source/
      • frontend/ – .Net API project which hosts my SPA
      • ui/ – The SPA project (in my case, Angular)

    I like lint-staged as a tool to execute linting on staged files. Why only staged files? Generally, large projects are going to have a legacy of files with formatting issues. Going all in and formatting everything all at once may not be possible. But if you format as you make changes, eventually most everything should be formatted well.

    With the outline above, I want different tools to run based on which files need to be linted. For source/frontend, I want to use dotnet format, but for source/ui, I want to use ESLint and prettier.

    With lint-staged, you can configure individual folders using a configuration file. I was able to add a .lintstagedrc file in each folder and specify the appropriate linter for that folder. For the .Net project:

    {
        "*.cs": "dotnet format --include"
    }

    And for the Angular project:

    {
        "*": ["prettier", "eslint --fix"]
    }

    Also, since I do have some documentation files, I added a .lintstagedrc file to the repository to run prettier on all my Markdown files.

    {
        "*.md": "prettier"
    }

    A Note on Settings

    Each linter has its own settings, so follow the instructions for whatever linter you may be running. Yes, I know, for the .Net project, I’m only running it on *.cs files. This may change in the future, but as of right now, I’m just getting to know what dotnet format does and how much I want to use it.

    Setting Up the Hooks

    The hooks are, in fact, very easy to configure: follow the instructions on getting started from Husky. The configured hooks for pre-commit and pre-push are below, respectively:

    npx lint-staged --relative
    dotnet build source/mySolution.sln

    The pre-commit hook utilizes lint-staged to execute the appropriate linter. The pre-push hook simply runs a build of the solution which, because of Microsoft’s .esproj project type, means I get an NPM build and a .Net build in the same step.

    What’s next?

    I will be updating the pre-push hook to include testing for both the Angular app and the .Net API. The goal is to provide our teams with a template to write their own tests, and have those be executed before they push their code. This level of automation will help our engineers produce cleaner code from the start, alleviating the need for massive cleanup efforts down the line.
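
    As a rough sketch of where that pre-push hook is headed (the solution path and test commands below are assumptions about our layout, not a finished script):

    # Build everything (NPM + .Net via the .esproj), then run both test suites
    dotnet build source/mySolution.sln
    dotnet test source/mySolution.sln --no-build
    npm --prefix source/ui run test -- --watch=false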

  • Spoolman for Filament Management

    “You don’t know what you’ve got ’til it’s gone” is a great song line, but a terrible inventory management approach. As I start to stock up on filament for the 3D printer, it occurred to me that I need a way to track my inventory.

    The Community Comes Through

    I searched around for different filament management solutions and landed on Spoolman. It seemed a pretty solid fit for what I needed. The owner also configured builds for container images, so it was fairly easy to configure a custom chart to run an instance on my internal tools cluster.

    The client UI is pretty easy to use, and the ability to add extra fields to the different modules makes the solution very extensible. I was immediately impressed and started entering information about vendors, filaments, and spools.

    Enhancing the Solution

    Since I am using a Bambu Lab printer and Bambu Studio, I do not have the ability to integrate Bambu into Spoolman to report filament usage. I searched around, but it does not seem that the Bambu ecosystem reports such usage.

    My current plan for managing filament is to weigh the spool when I open it, and then weigh it again after each use. The difference is the amount of filament I have used. But, to calculate the amount remaining, I need to know the weight of an empty spool. Assuming most manufacturers use the same spools, that shouldn’t be too hard to figure out long term.
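
    The math itself is simple. With hypothetical weights for a nominal 1kg spool, the bookkeeping looks like this:

    # Hypothetical weights, in grams, for a nominal 1 kg spool
    gross_when_opened = 1212   # spool + filament, weighed when first opened
    empty_spool = 212          # the bare spool (needs to be known or assumed)
    gross_now = 950            # weighed again after a few prints

    used = gross_when_opened - gross_now   # 262 g consumed so far
    remaining = gross_now - empty_spool    # 738 g of filament left
    print(f"Used {used} g, {remaining} g remaining")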

    Spoolman is not quite set up for that type of usage. Weight and spool weight are set at the filament level and cannot be overridden at the spool level. Most spools will not hold exactly 1000g of filament, so the ability to track initial weight at the spool level is critical. Additionally, I want to support partial spools, including re-spooling.

    So, using all the Python I have learned recently, I took a crack at updating the API and UI to support this very scenario. In a “do no harm” type of situation, I made sure that I had all the integration tests running correctly, then went about adding the new fields and some of the new default functionality. After I had the updated functionality in place, I added a few new integration tests to verify my work.

    Oddly, as I started working on it, I found 4 feature requests that were related to the changes I was suggesting. It took me a few nights, but I generated a pull request for the changes.

    And Now, We Wait…

    With my PR in place, I wait. The beauty of open source is that anyone can contribute, but the owners have the final say. This also means the owners need to respond, and most owners aren’t doing this as a full time job. So sometimes, there isn’t anything to do but wait.

    I’m hopeful that my changes will be accepted, but for now, I’m using Spoolman as-is, and just doing some of the “math” myself. It is definitely helping me keep track of my filament, and I’m keeping an eye on possible integrations with the Bambu ecosystem.