Category: Software

  • Git Out! Migrating to GitHub

    Git is Git. Wherever it’s hosted, the basics are the same. But the features and community around the tooling have driven me to make a change.

    Starting Out

    My first interactions with Git happened around 2010, when we decided to move away from Visual SourceSafe and Subversion and onto Git. At the time, some of the cloud services were either in their infancy or priced outside of what our small business could absorb. So we stood up a small Git server to act as our centralized repository.

    The beauty of Git is that, well, everyone has a copy of the repository locally, so it’s a little easier to manage the backup and disaster recovery aspects of a centralized Git server. So the central server is pretty much a glorified file share.
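For anyone curious, standing up such a server is almost embarrassingly simple; at its core it is just a bare repository on a box everyone can reach. The paths below are examples, not my actual layout:

```shell
# A central Git "server" is just a bare repository on a shared box --
# no daemon required beyond SSH. Paths here are examples.
GIT_ROOT=${GIT_ROOT:-$(mktemp -d)}

git init --bare "$GIT_ROOT/myproject.git"

# Each developer then clones over SSH (or even a plain file share):
# git clone gituser@gitbox:/srv/git/myproject.git
```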

    To the Cloud!

    Our acquisition opened up access to some new tools, including Bitbucket Cloud. We quickly moved our repositories to Bitbucket Cloud so that we could decommission our self-hosted server.

    Personally, I started storing my projects in Bitbucket Cloud. Sure, I had a GitHub account. But I wasn’t ready for everything to be public, and Bitbucket Cloud offered unlimited private repos. At the time, I believe GitHub was charging for private repositories.

    I also try to keep my home setup as close to work as possible in most cases. Why? Well, if I am working on a proof of concept that involves specific tools and their interaction with one another, it’s nice to have a sandbox that I can control. My home lab ecosystem has evolved based on the ecosystem at my job:

    • Self-hosted Git / TeamCity
    • Bitbucket Cloud / TeamCity
    • Bitbucket Cloud / Azure DevOps
    • Bitbucket Cloud / Azure DevOps / ArgoCD

    To the Hub!

    Even before I changed jobs, a move to GitHub was in the cards, both personally and professionally.

    Personally, I cannot think of a more popular platform than GitHub for sharing and finding open/public code. My GitHub profile is, in a lot of ways, a portfolio of my work and contributions. As I have invested more time in open source projects, that portfolio has grown. Even some of my “throw away” projects are worth a little, if only as a reference for what to do and what not to do.

    Professionally, GitHub has made great strides in its Enterprise offering. Microsoft’s acquisition gave GitHub access to some of the CI/CD pipeline capabilities of Azure DevOps, coupled with GitHub’s ease of use. One of the projects on the horizon at my old company was to identify whether GitHub and GitHub Actions could be the standard for build and deploy moving forward.

    With my move, we have a mixed ecosystem: GitHub + Azure DevOps Pipelines. I would like to think that, long term, I could get to GitHub + GitHub Actions (at least at home), but the interoperability of Azure DevOps Pipelines with Azure itself makes it hard to migrate completely. So, with a new professional ecosystem in front of me, I decided it was time to drop Bitbucket Cloud and move to GitHub for everything.

    Organize and Move

    Moving the repos is, well, simple. Using GitHub’s Import functionality, I pointed at my old repositories, entered my Bitbucket Cloud username and personal access token, and GitHub imported them.
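If you prefer the command line to the Import page, a mirror clone accomplishes the same migration. This sketch uses two throwaway local repositories to stand in for the Bitbucket and GitHub remotes; in practice OLD_REMOTE and NEW_REMOTE would be the real URLs:

```shell
# Stand-ins for the real remotes (swap in your Bitbucket/GitHub URLs).
OLD_REMOTE=$(mktemp -d)/old.git
NEW_REMOTE=$(mktemp -d)/new.git
git init -q --bare "$OLD_REMOTE"
git init -q --bare "$NEW_REMOTE"

# Seed the "old host" with a commit so there is something to migrate.
work=$(mktemp -d)
git -C "$work" init -q
git -C "$work" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "some history"
git -C "$work" push -q "$OLD_REMOTE" HEAD:refs/heads/main

# The migration itself: mirror-clone from the old host, mirror-push to
# the new one. --mirror carries every branch and tag in one shot.
mirror=$(mktemp -d)/migrated.git
git clone -q --mirror "$OLD_REMOTE" "$mirror"
git -C "$mirror" push -q --mirror "$NEW_REMOTE"
```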

    This simplicity meant I had time to think about organization. At this point, I am using GitHub for two pretty specific types of projects:

    • Storage for repositories, either public or private, that I use for my own portfolio or personal projects.
    • Storage for repositories, all public, that I have published as true Open Source projects.

    I wanted to separate the projects into different organizations, since the hope is the true Open Source projects could see contributions from others in the future. So before I started moving everything, I created a new GitHub organization. As I moved repositories from BitBucket Cloud, I put them in either my personal GitHub space or this new organization space, based on their classification above. I also created a new SonarCloud organization to link to the new GitHub organization.

    All Moved In!

    It really only took about an hour to move all of my repositories and re-configure any automation that I had to point to GitHub. I set up new scans in the new SonarCloud organization and re-pointed the actions correctly, and everything seems to be working just fine.

    With all that done, I deleted my Bitbucket Cloud workspaces. Sure, I’m still using Jira Cloud and Confluence Cloud, but I am at least down a cloud service. Additionally, since all of the projects that I am scanning with Sonar are public, I moved them to SonarCloud and deleted my personal instance of SonarQube. One less application running in the home lab.

  • Stacks on Stacks!

    I have Redis installed at home as a simple caching tool. Redis Stack adds on to Redis OSS with some new features that I am eager to start learning. But, well, I have to install it first.

    Charting a Course

    I have been using the Bitnami Redis chart to install Redis on my home K8s cluster. The chart itself provides the necessary configuration flexibility for replicas and security. However, Bitnami does not maintain a similar chart for redis-stack or redis-stack-server.

    There are some published Helm charts from Redis; however, they lack the built-in flexibility and configurability that the Bitnami charts provide. The Bitnami chart is so flexible that I wondered if it was possible to use it with the redis-stack-server image. A quick search showed I was not the only person with this idea.

    New Image

    Gerk Elznik posted last year about deploying Redis Stack using Bitnami’s Redis chart. Based on this post, I attempted to customize the Bitnami chart to use the redis-stack-server image. Gerk’s post indicated that a new script was needed to successfully start the image. That seemed like an awful lot of work, and, well, I really didn’t want to do that.

    In the comments of Gerk’s post, Kamal Raj posted a link to his version of the Bitnami Redis Helm chart, modified for Redis Stack. This seemed closer to what I wanted: a few tweaks and off to the races.

    In reviewing Kamal’s changes, I noticed that everything he changed could be overridden in the values.yaml file. So I made a few changes to my values file:

    1. Added repository and tag in the redis.image section, pointing the chart to the redis-stack-server image.
    2. Updated the command for both redis.master and redis.replica to reflect Kamal’s changes.

    I ran a quick template, and everything looked to generate correctly, so I committed the changes and let ArgoCD take over.

    Nope….

    ArgoCD synchronized the stateful set as expected, but the pod didn’t start. The error in the K8s events was about “command not found.” So I started digging into the “official” Helm chart for the redis-stack-server image.

    That chart is very simple, which made it pretty easy to see that there was no special command for startup. So I started to wonder if I really needed to override the command at all, or could simply use the redis-stack-server image in place of the default one.

    So I commented out the custom overrides to the command settings for both master and replica, and committed those changes. Lo and behold, ArgoCD synced and the pod started up great!
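For reference, the working override boiled down to swapping the image in my values file, roughly like this (values for the Bitnami Redis chart; the tag is an example, and depending on how you wrap the chart these keys may sit under a top-level redis: section):

```yaml
# Point the Bitnami Redis chart at the Redis Stack server image.
image:
  registry: docker.io
  repository: redis/redis-stack-server
  tag: 7.2.0-v10   # example tag -- use a current one
# Note: no master.command / replica.command overrides -- the chart's
# default startup script worked once those overrides were removed.
```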

    What Matters Is, Does It Work?

    Excuse me for stealing from Celebrity Jeopardy, but “Gussy it up however you want, Trebek, what matters is, does it work?” For that, I needed a Redis client.

    Up to this point, most of my interactions with Redis have simply been through the redis-cli that’s installed on the image. I use kubectl to get into the pod and run redis-cli in the pod to see what keys are in the instance.

    Sure, that works fine, but as I start to dig into Redis a bit more, I need a client that lets me visualize the database a little better. As I was researching Redis Stack, I came across RedisInsight and thought it was worth a shot.

    After installing RedisInsight, I set up port forwarding on my local machine into the Kubernetes service. This lets me connect directly to the Redis instance without creating a long-term NodePort service or some other forwarding mechanism. Since I only need access to the Redis server within the cluster, this helps keep it secure.

    I got connected, and the instance shows. But, no modules….

    More Hacking Required

    As it turns out, the Bitnami Redis chart changes the startup command to a script within the chart. This allows some of the flexibility, but comes at the cost of not using the entrypoint scripts that are in the image. Specifically, the entrypoint script for redis-stack-server, which uses the command line to load the modules.

    Now what? Well, there’s more than one way to skin a cat (to use an arcane and cruel-sounding metaphor). Reading through the Redis documentation, I found that modules can also be loaded through the configuration file. Since the Bitnami Redis chart allows you to add to the configuration using the values.yaml file, that’s where I ended up. I added the following to my values.yaml file:

    master:
      configuration: |
        loadmodule /opt/redis-stack/lib/redisearch.so MAXSEARCHRESULTS 10000 MAXAGGREGATERESULTS 10000
        loadmodule /opt/redis-stack/lib/redistimeseries.so
        loadmodule /opt/redis-stack/lib/rejson.so
        loadmodule /opt/redis-stack/lib/redisbloom.so

    With those changes, I now see the appropriate modules running.

    Lots Left To Do

    As I mentioned, this seems pretty “hacky” to me. Right now, I have it running, but only in standalone mode. I haven’t had the need to run a full Redis cluster, but I’m SURE that some additional configuration will be required to apply this to running a Redis Stack cluster. Additionally, I could not get the Redis Gears module loaded, but I did get Search, JSON, Time Series, and Bloom installed.

    For now, that’s all I need. Perhaps if I find I need Gears, or I want to run a Redis cluster, I’ll have to revisit this. But, for now, it works. The full configuration can be found in my non-production infrastructure repository. I’m sure I’ll move to production, but everything that happens here happens in non-production first, so keep tabs on that if you’d like to know more.

  • WSL for Daily Use

    Windows Subsystem for Linux (WSL) lets me do what I used to do in college: have Windows and Linux on the same machine. In 1999, that meant dual booting. Today, hypervisors everywhere and increased computing power mean, well, I just run a VM, without even knowing I’m running one.

    Docker Started It All

    When WSL first came out, I read up on the topic, but never really stepped into it in earnest. At the time, I had no real use for a Linux environment on my desktop. As my home lab grew and I dove into the world of Kubernetes, I started to use Linux systems more.

    With that, my familiarity with, and love of, the command line started to come back. Sure, I use Powershell a lot, but there’s nothing more nerdy than running headless Linux servers. What really threw me back into WSL was some of the ELT work I did at my previous company.

    Diving In

    It was much easier to get the various Python tools running in Linux, including things like the Anaconda virtual environment manager. At first, I was using Windows to clone and edit the files using VS Code. Through WSL, I accessed the files using the /mnt/ paths in Ubuntu to get to my drives.

    In some reading, I came across the guide for using VS Code with WSL. It describes how to use VS Code to connect to WSL as a remote computer and edit the files “remotely.” Which sounds weird because it’s just a VM, but it’s still technically remote.

    With VS Code set up to access WSL remotely, I stopped using the /mnt/ folders and started cloning repositories within the WSL Ubuntu instance itself.

    Making It Pretty

    I am a huge fan of a pretty command line. I have been using Oh My Posh as an enhancement to Powershell and Powershell Core for some time. However, Oh My Posh is meant to be used for any shell, so I got to work on installing it in WSL.

    As it turns out, in this case, I did use the /mnt mount path in order to share my Oh My Posh settings file between my Windows profile and the WSL Ubuntu box. In this way, I have the same Oh My Posh settings, regardless of whether I’m using Windows Powershell/Powershell Core or WSL Ubuntu.
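To share the settings file, the WSL side can simply point at the Windows copy through the /mnt path. A line like this in ~/.bashrc is one way to wire it up (the user and file name are examples):

```shell
# ~/.bashrc -- reuse the Windows-side Oh My Posh theme from within WSL
eval "$(oh-my-posh init bash --config /mnt/c/Users/example/ohmyposh.omp.json)"
```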

    Bringing It All Together

    How can I get to WSL quickly? Well, through Windows Terminal! Windows Terminal supports a number of different prompts, including the standard command prompt, Powershell, and Powershell Core. It also lets you start a WSL session via a Terminal Profile.
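For reference, a WSL profile in Windows Terminal is just an entry in its settings.json. A minimal hand-written one might look like this (the name and distribution are examples; Terminal can also generate these profiles automatically):

```json
{
  "profiles": {
    "list": [
      {
        "name": "Ubuntu (WSL)",
        "commandline": "wsl.exe -d Ubuntu"
      }
    ]
  }
}
```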

    This integration means my Windows Terminal is now my “go to” window for most tasks, whether in WSL or on my local box.

  • Don’t Mock Me!

    I spent almost two hours on a unit test issue yesterday, walking away with the issue unresolved and myself frustrated. I came back to it this morning and fixed it in 2 minutes. Remember, you don’t always need a mock library to create fakes.

    The Task at Hand

    In the process of removing some obsolete warnings from our builds, I came across a few areas where the change was less than trivial. Before making the changes, I decided it would be a good idea to write some unit tests to ensure that my changes did not affect functionality.

    The class to be tested, however, took IConfiguration in the constructor. Our current project does not make use of the Options pattern in .NET Core, meaning anything that needs configuration values has to carry around a reference to IConfiguration and then extract the values manually. Yes, I will want to change that, but not right now.

    So, in order to write these unit tests, I had to create a mock of IConfiguration that returned the values this class needed. Our project currently uses Telerik JustMock, so I figured it would be a fairly easy task to mock. However, I ran into a number of problems that had me going down the path of creating multiple mock classes for different interfaces, including IConfigurationSection. I immediately thought “There has to be a better way.”

    The Better Way

    Some quick Google research led me to this gem of a post on StackOverflow. In all my time with .NET configuration, I never knew about or used the AddInMemoryCollection extension. And that led me to the simplest solution: create a “real boy” instance of IConfiguration with the properties my class needs, and pass that to the class under test.
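For illustration, here is a minimal sketch of that approach; the section and key names are made up, and the class under test is hypothetical:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

// Build a real IConfiguration from an in-memory dictionary -- no mocking
// library involved. Nested sections use the "Section:Key" syntax.
var configuration = new ConfigurationBuilder()
    .AddInMemoryCollection(new Dictionary<string, string?>
    {
        ["ServiceSettings:BaseUrl"] = "http://localhost:5000",
        ["ServiceSettings:TimeoutSeconds"] = "30",
    })
    .Build();

// The class under test just receives this genuine instance:
// var sut = new WidgetService(configuration);   // hypothetical class
Console.WriteLine(configuration["ServiceSettings:BaseUrl"]);
```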

    I suppose this is “dumb mocking” in the sense that it doesn’t use libraries written and dedicated to mocking objects. But it gets the job done in the simplest method possible.

  • Using Architectural Decision Records

    Recently, I was exposed to Architectural Decision Records (ADRs) as a way to document software architecture decisions quickly and effectively. The more I’ve learned, the more I like.

    Building Evolutionary Architectures

    Architecture, software or otherwise, is typically a tedious and time-consuming process. We must design to meet existing requirements, but have to anticipate potential future requirements without creating an overly complex (i.e. expensive) system. This is typically accomplished through a variety of patterns which aim to decouple components and make them easily replaceable.

    Replace it?!? Everything ages, even software. If I have learned one thing, it is that the code you write today “should” not exist in its same form in the future. All code needs to change and evolve as the platforms and frameworks we use change and evolve. Building Evolutionary Architectures is a great read for any software engineer, but I would suggest it to be required reading for any software architect.

    How architecture is documented and communicated has evolved in the last 30 years. The IEEE published an excellent white paper outlining how early architecture practices have evolved into these ADRs.

    But what IS it?

    Architectural Decision Records (ADRs) are, quite simply, records of decisions made that affect the architecture of the system. ADRs are simple text documents with a concise format that anyone (architect, engineer, or otherwise) can consume quickly. ADRs are stored next to the code (in the same repository), so they are subject to the same peer review process. Additionally, ADRs add the ability to track decisions and changes over time.

    Let’s consider a simple example, taken straight from the adr-manager tool used to create ADRs in a GitHub repository. The context/problem is pretty simple:

    We want to record architectural decisions made in this project. Which format and structure should these records follow?

    The document then outlines some potential options for tracking architectural decisions. In the end, the document states that MADR 2.1.2 will be used, and outlines the rationale behind the decision.

    It may seem trivial, but putting this document in the repository, accessible to all, gives great visibility to the decision. Changes, if any, are subject to peer review.

    Now, in this case, say six months down the road the team decides that they hate MADR 2.1.2 and want to use Y-Statements instead. That’s easy: create a new ADR that supersedes the old one. The new ADR should contain the same kind of content: the problem, the options considered, and the final decision with its rationale. Link the two so that it’s easy to see related ADRs, and you are ready to go.
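For a sense of scale, a MADR-style record is just a short Markdown file; the first ADR in a repository often documents the format decision itself, roughly like this:

```markdown
# Use Markdown Architectural Decision Records

## Context and Problem Statement

We want to record architectural decisions made in this project.
Which format and structure should these records follow?

## Considered Options

* MADR 2.1.2
* Michael Nygard's template
* Y-Statements

## Decision Outcome

Chosen option: "MADR 2.1.2", because it is lean and its structure fits our review workflow.
```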

    Tools of the Trade

    There is an ADR GitHub organization that is focused on standardizing some of the nomenclature around ADRs. The page includes links to several articles and blog posts dedicated to describing ADRs and how to implement and use them within your organization. Additionally, the organization has started to collect and improve upon some of the tooling for supporting an ADR process. One that I found beneficial is ADR-Manager.

    ADR-Manager is a simple website that lets you interact with your GitHub repositories (using your own GitHub credentials) to create and edit ADRs. Through your browser, you connect to your repositories and view/edit ADR documents. It generates MADR-styled files within your repository which can be committed to branches with appropriate comments.

    Make it so…

    As I work to get my feet under me in my new role, the idea of starting to use ADRs has gained some traction. As we continue to scale, having these decision records readily available for new teams will be important to maintain consistency across the platform.

    As I continue to work through my home projects, I will use ADRs to document decisions I make in those projects. While no one may read them, it’s a good habit to build.

  • Streamlining my WordPress Install

    My professional change served as a catalyst for some personal change. Nothing drastic, just messing with this site a little.

    New Look!

    I have been sitting on the Twenty Twenty-One theme for a few years now. When it comes to themes, I just want something that looks nice and is low maintenance, and it served its purpose well. I skipped Twenty Twenty-Two because, well, I did not really want to dig into changing it to my personal preference.

    The latest built-in theme, Twenty Twenty-Three, is nice and clean, and pretty close to what I was using in Twenty Twenty-One. I went ahead and activated that one, and chose the darker style to match my soul…. I am kidding. I appreciate a good dark theme, so you know my site will reflect that.

    New Plugins!

    From time to time, I will make sure that my plug-ins are updated and that the WordPress Site Health page does not have any unexpected warnings. This time around, I noticed that I had no caching detected.

    But, wait…. I have Redis Object Cache installed and running. And, literally, as soon as I read that plugin name, I realized that “object cache” is not the same as “browser cache.” So I started looking for a browser cache plugin.

    I landed on WP-Optimize from UpdraftPlus. The free version is sufficient for what I need, and the setup was very easy. I got the plugin installed, and just before I ran the optimization, I noticed the warning to backup the DB using UpdraftPlus. And that’s when I realized, my backup process was, well, non-existent.

    In the past, I have used the All-in-One WP Migration plugin to backup/restore. However, the free version is limited to a size that I have long surpassed, and there is no way that I saw to automate backups. Additionally, the “backups” are stored in the same storage location, so unless I manually grabbed them, they did not go offsite.

    UpdraftPlus provides scheduled backups as well as the ability to push those backups to external storage. Including, as luck would have it, an S3 bucket. So I was able to configure UpdraftPlus to push backups to a new bucket in my MinIO instance, which means I now have daily backups of this site…. It only took 2 years.

    With UpdraftPlus and WP-Optimize installed, I dropped the All-in-One WP Migration plugin.

    New Content?

    Nope…. Not yet, anyway. Over the past year, I have really tried to post every four days. While I do not always hit that, having deadlines pushed me to post more often than I have in the past. While I don’t have the capacity to increase the number of posts, I am targeting to add some variety to my posts. I have been leaning heavily towards technical posts, but there are a lot of non-technical topics on which I can wax poetic… Or, more like, do a brain dump on…

  • One Task App to Rule them All!

    New job means new systems, which prompted me to reevaluate my task tracking.

    State of the Union

    For the last, oh, decade or more, I have been using the ClearContext plugin for task management within Outlook. And I really like it. I have become proficient with the keyboard shortcuts that let me quickly review, file, and organize my emails. Its “Email to Task and Appointment” feature is great for turning emails into tasks, and allows me to quickly follow the “Getting Things Done” (GTD) methodology by David Allen.

    I use Gmail for personal emails, though, and I had no real drive to find a GTD pattern for Gmail. And then I changed jobs.

    Why Switch?

    I started using Microsoft To Do for personal tasks, displaying them on my office television via MagicMirror. However, as ClearContext was my muscle memory, I never switched over to using To Do for work tasks. So I had two places where tasks were listed: in Microsoft To Do for personal tasks, and in Outlook for professional ones.

    My new company uses Google Workspace. This has driven two changes:

    1. Find a GTD workflow for Gmail to allow me to get to “zero inbox.”
    2. Find a Google Tasks module for Magic Mirror.

    Regarding #1, this will be a “trial and error” type of thing. I have started writing some filters and such, which should help with keeping a zero inbox.

    As for #2, well, it looks like it is time for some MagicMirror module work.

    MMM-GoogleTasks

    When I started looking, there were two MMM-GoogleTasks repositories in GitHub, both related to one another. I forked the most recently updated one and began poking around.

    This particular implementation allows you to show tasks from only one list in Google, and shows all tasks on that list. Microsoft To Do has the notion of a “planned” view which only shows non-completed Tasks with a due date. I contributed a change to MMM-MicrosoftToDo to allow for this view in that MagicMirror module, so I started down the path of updating MMM-GoogleTasks.

    I could not help but start converting this project to TypeScript, which means, most likely, it will never get merged back. However, I appreciate the ability to create TypeScript modules and classes, but ultimately have things roll up into three files for the MagicMirror module system.

    So I got the planned view added to my fork of MMM-GoogleTasks. Now what? I have two Gmail accounts and my work account, and I would like to see tasks from all three of those accounts on my MagicMirror display. Unfortunately, I do not have a ton of time to refactor for multiple account support right now… so it made the issues list.

    First Impressions

    It has been about two weeks since I switched over. I am making strides in finding a pattern that works for me to keep me at zero inbox, both in my personal inbox as well as my professional one. I am sure I will run into some hiccups, but, for now, things look good.

  • Jumping in to 3D Design and Printing

    As I’ve been progressing through various projects at home, I have a few 3D printing projects that I would like to tackle. With that, I needed to learn how to design models to print. This has led me down a bit of a long road, and I’m still not done….

    What to use?

    A colleague of mine who does a fair amount of 3D printing suggested the personal use version of Autodesk’s Fusion360. With my home laptop running a pretty fresh version of Windows 11, I figured it would be worth a shot. He pointed me to a set of YouTube tutorials for learning Fusion360 in 30 Days, so I got started.

    I pretty promptly locked up my machine… In doing some simple rendering (no materials, no complex build/print patterns), my machine simply locked up into a lovely pinstripe pattern. This persisted after following all of Autodesk’s recommendations, including a fresh install of Fusion360.

    Downgrading!

    Knowing that some or all of the devices on the laptop may not have appropriate Windows 11 drivers, I made the decision to re-install Windows 10 and stay there. It’s a little painful, as I have gotten somewhat used to the quirks of 11, but I want to be able to draw!

    So I installed Windows 10 fresh, got all the latest updates (including the AMD Pro software), and tried Fusion360 again. I got past where I locked up in Windows 11, and actually got to Day 8 of the tutorials. And then the lockups came back.

    A small hiccup on my part

    I may have gotten a little impatient, and simultaneously uninstalled the AMD drivers while installing some other drivers, and I pretty much made my machine unbootable… So, I am in the process of re-installing Windows 10 and applying all the latest updates.

    As part of this, however, I am going to take things a TOUCH slower. I had Fusion360 running pretty smoothly up until my Day 8 lesson, but I also installed Windows Subsystem for Linux between my Day 7 and Day 8 lessons. And while I truly hope this isn’t the case, I am wondering if something in WSL is causing issues with Fusion360…

    So I’m going to take my machine back to the same state it was in, minus the WSL install, to see if I get the same lockups in Fusion360. I’ll let you know how it turns out!

    Update – 9/18/2023

    I got everything re-installed, including drivers for my GPU, but it is still locking up. However, there is some added information: I got it to lock up outside of Fusion360 in the same way.

    I searched a number of online forums, and the suggestions seem to center around a dying GPU… Doh! So, I have a few options:

    1. Build a new system….
    2. Fix this one.

    I do not like the idea of time/money spent on a new system, especially when the specs on this laptop are more than sufficient for what I need. I found a replacement GPU today for under $100, so it is on its way. I took a peek at the installation video and I am not looking forward to a full disassembly, but it will allow me to clean out the drives, reset the heat sinks, and hopefully solve the GPU issue.

  • Hackintosh – Windows 11 Edition

    About a year ago, I upgraded my home laptop to Windows 11. I swapped out my old system drive (which was a spinner) for a new SSD, so I had to go through the process again. I ran into a few different issues this time around that are worth putting to paper.

    Windows 10 Install – Easy

    After installing the new drive, I followed the instructions to create Windows 10 bootable media. Booted from the USB, installed Windows 10. Nothing remarkable, just enough to get into the machine. After a few rounds of Windows updates, I felt like I was ready to go.

    Windows 11 Upgrade – Not so Easy

    My home laptop is running a CPU that isn’t compatible with Windows 11. That doesn’t mean I can’t run it, it just means I have to hack Windows a bit.

    In the past, I followed this guide to set the appropriate registry entry and get things installed. This time around should be no different, right?

    Wrong. I made the change, but the installer continued to crash. A little Googling took me to this post, which led to this article about resetting Windows Update in Windows 10. After downloading and running the batch file from the instructions, I was able to install Windows 11 again.

    Done!

    After a bit of waiting, I have a Windows 11 machine running. Now time to rebuild it to my liking… Thank goodness for Chocolatey.

  • Tech Tip – Options Pattern in ASP.NET Core

    I have looked this up at least twice this year. Maybe if I write about it, it will stick with me. If it doesn’t, well, at least I can look here.

    Options Pattern

    The Options pattern is a set of interfaces that allow you to read configuration values into classes in your ASP.NET application. This lets you define strongly typed options classes with default values and attributes for validation. It also removes most of the “magic strings” that can come along with reading configuration settings. I will do you all a favor and not regurgitate the documentation, but rather leave a link so you can read all about the pattern.

    A Small Sample

    Let’s assume I have a small class called HostSettings to store my options:

     public class HostSettings
     {
         public const string SectionName = "HostSettings";
         public string Host { get; set; } = string.Empty;
         public int Port { get; set; } = 5000;
     }

    And my appsettings.json file looks like this:

    {
      "HostSettings": {
        "Host": "http://0.0.0.0",
        "Port": 5000
      },
      // More settings here
    }

    Using Dependency Injection

    For whatever reason, I always seem to remember how to configure options using the dependency injector. Assuming the above, adding options to the store looks something like this:

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.Configure<HostSettings>(builder.Configuration.GetSection(HostSettings.SectionName));

    From here, to get HostSettings into your class, add an IOptions<HostSettings> parameter to your constructor, and access the options through the IOptions<T>.Value property.

    public class MyService
    {
        private readonly HostSettings _settings;

        public MyService(IOptions<HostSettings> options)
        {
            _settings = options.Value;
        }
    }

    Options without Dependency Injection

    What I always, always forget about is how to get options without using the DI pattern. Every time I look it up, I have that “oh, that’s right” moment.

    var hostSettings = new HostSettings();
    builder.Configuration.GetSection(HostSettings.SectionName).Bind(hostSettings);

    Yup. That’s it. Seems silly that I forget that, but I do. Pretty much every time I need to use it.

    A Note on SectionName

    You may notice the SectionName constant that I add to the class that holds the settings. This allows me to keep the name/location of the settings in the appsettings.json file within the class itself.

    Since I only have a few classes which house these options, I load them manually. It would not be a stretch, however, to create a simple interface and use reflection to load options classes dynamically. It could even be encapsulated into a small package for distribution across applications… Perhaps an idea for an open source package.
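As a rough sketch of that idea, here is one way it could look, assuming a hypothetical marker interface (IAppSettings) and the SectionName convention described above; the HostSettings class is repeated so the sketch is self-contained:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;
using Microsoft.Extensions.Configuration;

// Hypothetical marker interface for options classes to be auto-loaded.
public interface IAppSettings { }

public class HostSettings : IAppSettings
{
    public const string SectionName = "HostSettings";
    public string Host { get; set; } = string.Empty;
    public int Port { get; set; } = 5000;
}

public static class OptionsLoader
{
    // Scan an assembly for IAppSettings implementations, read each class's
    // own SectionName constant via reflection, and bind from configuration.
    public static IEnumerable<object> LoadAll(IConfiguration configuration, Assembly assembly)
    {
        foreach (var type in assembly.GetTypes())
        {
            if (!typeof(IAppSettings).IsAssignableFrom(type) || !type.IsClass || type.IsAbstract)
                continue;

            var sectionField = type.GetField("SectionName", BindingFlags.Public | BindingFlags.Static);
            var sectionName = sectionField?.GetRawConstantValue() as string ?? type.Name;

            var instance = Activator.CreateInstance(type)!;
            configuration.GetSection(sectionName).Bind(instance);
            yield return instance;
        }
    }
}
```

GetRawConstantValue reads the const field without needing an instance, and falling back to the type name keeps classes without the constant working.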