Author: Matt

  • Distributing C# Code Standards with a Nuget Package

    My deep dive into Nuget packages made me ask a new question: Can I use Nuget packages to deploy code standards for C#?

    It’s Good To Have Standards

    Generally, the goal of coding standards is to produce a uniform code base. My go-to explanation of what coding standards are is this: “If you look at the code, you should not be able to tell who wrote it.”

    Linting is, to some degree, an extension of formatting. Linting applies a level of analysis to your code, identifying potential runtime problems. Linting rules typically derive from various best practices, which can (and do) change over time, especially for languages in active development. Think of it this way: C++ has not changed much in the last 20 years, but C#, JavaScript, and, by association, TypeScript, have all seen active changes in recent history.

    There are many tools for each language that offer linting and formatting. I have developed some favorites:

    • Formatting
      • dotnet format – C#
      • prettier – JavaScript/TypeScript
      • black – Python
    • Linting
      • Sonar – support for various languages
      • eslint – JavaScript/TypeScript
      • flake8 – Python

    Regardless of your tool of choice, being able to deploy standards across multiple projects has been difficult. If your projects live in the same repository, you can store settings files in a common location, but for projects across multiple repositories, the task becomes harder.

    Nuget to the Rescue!

    I spent a good amount of time in the last week relearning Nuget. In this journey, it occurred to me that I could create a Nuget package that only contained references and settings for our C# standards. So I figured I’d give it a shot. Turns out, it was easier than I anticipated.

    There are 3 files in the project:

    • The csproj file – MyStandards.CSharp.csproj
    • main.editorconfig
    • MyStandards.CSharp.props

    I put the content of the CSProj and Props files below. main.editorconfig will get copied, as .editorconfig, into any project that references this package. You can read all about customizing .editorconfig here.
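
    For reference, main.editorconfig can contain whatever formatting and analyzer settings you want to standardize on. The snippet below is only an illustrative sketch with a few common rules, not the actual contents of my standards file:

    root = true

    [*.cs]
    indent_style = space
    indent_size = 4
    charset = utf-8
    dotnet_sort_system_directives_first = true
    csharp_style_var_when_type_is_apparent = true:suggestion
    dotnet_diagnostic.CA1062.severity = warning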

    MyStandards.CSharp.csproj

    <Project Sdk="Microsoft.NET.Sdk">
    
    	<PropertyGroup>
    		<TargetFramework>netstandard2.1</TargetFramework>
    		<Nullable>enable</Nullable>
    		<IncludeBuildOutput>false</IncludeBuildOutput>
    		<DevelopmentDependency>true</DevelopmentDependency>
    	</PropertyGroup>
    
    	<ItemGroup>
    		<Content Include="main.editorconfig">
    			<CopyToOutputDirectory>Always</CopyToOutputDirectory>
    			<PackagePath>build</PackagePath>
    		</Content>
    		<Content Include="MyStandards.CSharp.props">
    			<CopyToOutputDirectory>Always</CopyToOutputDirectory>
    			<PackagePath>build</PackagePath>
    		</Content>
    	</ItemGroup>
    
    	<ItemGroup>
    		<PackageReference Include="SonarAnalyzer.CSharp" Version="9.15.0.81779" PrivateAssets="None" />
    	</ItemGroup>
    
    </Project>
    

    MyStandards.CSharp.props

    <Project>
    	<Target Name="CopyFiles" BeforeTargets="Build">
    		<ItemGroup>
    			<SourceFile Include="$(MSBuildThisFileDirectory)main.editorconfig"></SourceFile>
    			<DestFile Include="$(ProjectDir)\.editorconfig"></DestFile>
    		</ItemGroup>
    		<Copy SourceFiles="@(SourceFile)" DestinationFiles="@(DestFile)"></Copy>
    	</Target>
    </Project>
    

    What does it do?

    When this project is built as a Nuget package, it contains the .props and main.editorconfig files in the build folder of the package. Additionally, it has a dependency on SonarAnalyzer.CSharp.

    Make it a Development Dependency

    Note that IncludeBuildOutput is set to false, and DevelopmentDependency is set to true. This tells Nuget not to include the empty MyStandards.CSharp.dll file that gets built, and to package this Nuget file as a development-only dependency.

    When referenced by another project, the reference will be added with the following defaults:

    <PackageReference Include="MyStandards.CSharp" Version="1.0.0">
      <PrivateAssets>all</PrivateAssets>
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
    </PackageReference>

    So, the consuming package won’t include a dependency for my standards, meaning no “chain” of references.

    Set the Settings

    The .props file does the yeoman’s work: it copies main.editorconfig into the target project’s folder as .editorconfig. Notice that this happens before every build, which prevents individual projects from overriding the .editorconfig file.

    Referencing Additional Development Tools

    As I mentioned, I’m a fan of the Sonar toolset for linting. As such, I like to add their analyzers wherever I can, which includes referencing SonarAnalyzer.CSharp to get some built-in analysis.

    Since SonarAnalyzer.CSharp is marked as a development dependency, its assets are private by default when installed, so on its own it would not be carried as a dependency of my package. It would be nice if my standards package carried it over to projects which use it. To do this, I set PrivateAssets="None" in the PackageReference, removing the default private assets.

    The Road Ahead

    With my C# standards wrapped nicely in a Nuget package, my next steps will be to automate the build pipeline for the packages so that changes can be tracked.

    For my next trick, I would like to dig into eslint and prettier, to see if there is a way to create my own extensions for those tools to distribute our standards for JavaScript/TypeScript. However, that task may have to wait a bit, as I have some build pipelines to create.

  • Pack it up, Pack it In…

    I dove very deeply into Nuget packages this week, and found some pretty useful ways for a package to control properties in the consuming project.

    Problem at Hand

    It’s been a long time since I built anything more than a trivial Nuget package. By “trivial” I mean setting the version and letting the .Net project take care of generating the Nuget package from the project itself. If I added a .nuspec file, it was for things like description or copyright.

    I am working on building a package that contains some of our common settings for Unit Test execution, including Coverlet settings and XUnit runner settings. Additionally, I wanted the references in my package to be used in the consuming package, so that we did not need to update xunit or coverlet packages across all of our Unit Test projects.

    With these requirements, I needed to build a package that does two things:

    1. Copy settings files to the appropriate location in the consuming project.
    2. Make sure references in the package were added to the consuming project.

    Content Files… What happened??

    It would seem that the last time I used content files, packages.config was the way to include Nuget packages in a project. However, with the migration to the PackageReference format, some things changed, including the way content files are handled.

    The long and short of it is, package content files are no longer “copied on install,” so additional steps are required to move files around. However, I wanted those additional steps to be part of my package, rather than added to every referencing project.

    Out comes “props and targets.” You can add .props and/or .targets files to your package, using the same name as the package, and those files will be imported into the consuming project at build time. I have to admit, I spent a great deal of time researching MSBuild and how to actually make this do what I wanted. But, I will give you a quick summary and solution.

    My Solution

    myCompany.Test.UnitTest.Core.props

    <?xml version="1.0" encoding="utf-8"?>
    <Project ToolsVersion="12.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    	<PropertyGroup>
    		<RunSettingsFilePath>$(MSBuildThisFileDirectory)\coverlet.runsettings</RunSettingsFilePath>
    	</PropertyGroup>
    	<ItemGroup>
    		<Content Include="$(MSBuildThisFileDirectory)xunit.runner.json">
    			<Link>xunit.runner.json</Link>
    			<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    			<Visible>False</Visible>
    		</Content>
    	</ItemGroup>
    </Project>

    myCompany.Test.UnitTest.Core.csproj

    <Project Sdk="Microsoft.NET.Sdk">
    
    	<PropertyGroup>
    		<TargetFramework>net6.0</TargetFramework>
    		<OutputType>Library</OutputType>
    		<IsPackable>true</IsPackable>
    	</PropertyGroup>
    
    	<ItemGroup>
    		<PackageReference Include="JetBrains.Annotations" Version="2023.3.0" PrivateAssets="None" />
    		<PackageReference Include="Newtonsoft.Json" Version="13.0.3" PrivateAssets="None" />
    		<PackageReference Include="xunit" Version="2.6.3" PrivateAssets="None" />
    		<PackageReference Include="xunit.runner.utility" Version="2.6.3" PrivateAssets="None" />
    		<PackageReference Include="JustMock" Version="2023.3.1122.188" PrivateAssets="None" />
    		<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.8.0" PrivateAssets="None" />
    		<PackageReference Include="coverlet.collector" Version="6.0.0" PrivateAssets="None" />
    		<PackageReference Include="xunit.runner.visualstudio" Version="2.5.5" PrivateAssets="None" />
    	</ItemGroup>
    
    	<ItemGroup>
    		<Content Include="xunit.runner.json" CopyToOutputDirectory="Always">
    			<PackagePath>build</PackagePath>
    		</Content>
    		<Content Include="coverlet.runsettings" CopyToOutputDirectory="Always">
    			<PackagePath>build</PackagePath>
    		</Content>
    		<Content Include="myCompany.Test.UnitTest.Core.props" CopyToOutputDirectory="Always">
    			<PackagePath>build</PackagePath>
    		</Content>
    	</ItemGroup>
    
    </Project>

    Using Content Files

    We have a common settings file (xunit.runner.json) that needs to be copied to the output directory of the Unit Tests, so that XUnit runs with common settings. To accomplish this, I added the file to my package’s .csproj file as a content file (the Content item for xunit.runner.json near the bottom of the .csproj). Then, in the .props file, I added a Content item to ensure that the file is copied to the output directory of the consuming project.
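
    For context, xunit.runner.json is just a small JSON file of runner options. It looks something like this (an illustrative sketch, not our actual settings):

    {
      "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
      "parallelizeAssembly": false,
      "parallelizeTestCollections": true,
      "maxParallelThreads": 4,
      "diagnosticMessages": false
    }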

    For the coverlet.runsettings file, all I need is to make sure that the consuming project uses that file for its run settings. I set the RunSettingsFilePath in the .props file, and pointed it to the file in the package.
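
    For reference, a coverlet.runsettings file for the coverlet.collector data collector generally looks something like the following (again, a sketch rather than our exact settings):

    <?xml version="1.0" encoding="utf-8"?>
    <RunSettings>
    	<DataCollectionRunSettings>
    		<DataCollectors>
    			<!-- "XPlat code coverage" is the friendly name of the coverlet.collector data collector -->
    			<DataCollector friendlyName="XPlat code coverage">
    				<Configuration>
    					<Format>cobertura</Format>
    					<Exclude>[*.Tests]*</Exclude>
    				</Configuration>
    			</DataCollector>
    		</DataCollectors>
    	</DataCollectionRunSettings>
    </RunSettings>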

    Using References

    In the .csproj file, you’ll notice that I have a number of package references with PrivateAssets="None" configured. By default, some assets of a referenced package are treated as private to the consuming project, meaning that when the package is pulled in transitively, those assets are hidden from projects further down the chain.

    In this case, I do not want to hide any assets from these packages, I want them to flow through to the consuming project. Setting PrivateAssets="None" tells the consuming project that these packages should be used in full. This post and the MSDN documentation shed most of the light on this particular topic for me.

    Miscellaneous Notes

    You may have noticed that, for the settings files, I sent them to the build folder alongside the .props file, rather than in content or contentFiles. Since these things are only used during build or execution of unit tests, this seemed the appropriate place.

    It’s also worth noting that I went through SEVERAL iterations of solutions on this. There is a ton of flexibility in MSBuild and its interactions with the packages installed, but I ended up with a solution which requires minimal effort on the package consumer side. And, for me, this is the intent: keep it DRY (Don’t Repeat Yourself).

  • When a Build is not a Build

    In my experience, the best way to learn a new software system is to see how it is built. That path led me to some warnings housekeeping, which led me to a weird issue with Azure DevOps.

    Housekeeping!

    In starting my new role, I have been trying to get the lay of the land. That includes cloning repositories, configuring authentication, and getting local builds running. As I was doing that, I noticed some build warnings.

    Now, I will be the first to admit, in the physical world, I am not a “neat freak.” But, in my little digital world, I appreciate builds free of warnings, unit tests where appropriate, and a well-organized distribution system.

    So with that, I spent a little time fixing the build warnings or entering new technical debt tickets for them to be addressed at a later date. I also spent some time digging into the build process to start making these more apparent to the teams.

    Where are the Warnings?

    In digging through our builds, I noticed that none of them listed the warnings that I was seeing locally. I found that odd, so I started looking at my local build to see what was wrong.

    And I did, in fact, find some issues with my local environment. After some troubleshooting, I literally deleted everything (Nuget caches and all my bin/obj folders), and it did get rid of some of the warnings.

    However, there were still general warnings from the compiler that I was not seeing in the build pipeline. I noticed that we were using the publish command of the DotNetCoreCLI@2 task to effectively build and publish the projects, creating build artifacts. Looking at the logs for those publish tasks, I saw the warnings I expected. But Azure DevOps was not picking them up in the pipeline.

    Some Google searches led me to this issue. As it turns out, it is a known problem that warnings output using the publish command do not get picked up by Azure DevOps. So what do we do?

    Build it!

    While publish does not output the warnings correctly, the build command of the same task does. So, instead of one step to build and publish, we have to separate the two:

    - task: DotNetCoreCLI@2
      displayName: 'Build Project'
      inputs:
        command: 'build'
        publishWebProjects: false
        projects: '${{ parameters.project }}'
        ${{ if eq(parameters.forceRestore, true) }}:
          arguments: '-c ${{ parameters.configuration }}'
        ${{ else }}:
          arguments: '-c ${{ parameters.configuration }} --no-restore' 
    
    - task: DotNetCoreCLI@2
      displayName: 'Publish Project'
      inputs:
        command: 'publish'
        publishWebProjects: false
        nobuild: true
        projects: '${{ parameters.project }}'
        arguments: '-c ${{ parameters.configuration }} --output ${{ parameters.outputDir }} --no-build --no-restore' 
        zipAfterPublish: '${{ parameters.zipAfterPublish }}'
        modifyOutputPath: '${{ parameters.modifyOutputPath }}'

    Notice in the build task, I have a parameter to optionally pass --no-restore to the build, and in the publish I am passing in --no-build and --no-restore. Since these steps are running on the same build agent, we can do a build and then a publish with a --no-build parameter to prevent a double build scenario. Our pipeline also performs package restores separately, so most of the time, a restore is not necessary on the build. For the times that it is required, we can pass in the forceRestore parameter to configure that project.
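
    For completeness, the parameters these steps reference are declared at the top of the template. A rough sketch of that block follows (the names match the usage above, but the types and defaults here are my own illustration rather than the exact template):

    parameters:
      - name: project
        type: string
      - name: configuration
        type: string
        default: 'Release'
      - name: forceRestore
        type: boolean
        default: false
      - name: outputDir
        type: string
        default: '$(Build.ArtifactStagingDirectory)'
      - name: zipAfterPublish
        type: boolean
        default: true
      - name: modifyOutputPath
        type: boolean
        default: true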

    With this change, all the warnings that were not showing in the pipeline are now showing. Now it is time for me to get at fixing those warnings!

  • Do the Right Thing

    My home lab clusters have been running fairly stably, but there are still some hiccups every now and again. As usual, a little investigation led to a pretty substantial change.

    Cluster on Fire

    My production and non-production clusters, which mostly host my own projects, have always been pretty stable. Both clusters are set up with 3 nodes as the control plane, since I wanted more than 1 and you need an odd number for quorum. And since I didn’t want to run MORE machines as agents, I just let those nodes host user workloads in addition to the control plane. With 4 vCPUs and 8 GB of RAM per node, well, those clusters had no issues.

    My “internal tools” cluster is another matter. Between Mimir, Loki, and Tempo running ingestion, there is a lot going on in that cluster. I added a 4th node that serves as just an agent for that cluster, but I still had some pod stability issues.

    I started digging into the node-exporter metrics for the three “control plane + worker” nodes in the internal cluster, and they were, well, on fire. The system load was consistently over 100% (the 15 minute average was something like 4.05 out of 4 on all three). I was clearly crushing those nodes. And, since those nodes hosted the control plane as well as the workloads, instability in the control plane caused instability in the cluster.

    Isolating the Control Plane

    At that point, I decided that I could not wait any longer. I had to isolate the control plane and etcd from the rest of my workloads. While I know that it is, in fact, best practice, I was hoping to avoid it in the lab, as it causes a slight proliferation in VMs. How so? Let’s do the math:

    Right now, all of my clusters have at least 3 nodes, and internal has 4. So that’s 10 VMs with 4 vCPU and 8 GB of RAM assigned, or 40 vCPUs and 80 GB of RAM. If I want all of my clusters to have isolated control planes, that means more VMs. But…

    Control plane nodes don’t need to be nearly as large if I’m not opening them up to other workloads. And for my non-production cluster, I don’t need the redundancy of multiple control plane nodes. So 4 vCPUs/8GB RAM becomes 2 vCPU/4GB RAM per control plane node, and I can use a single node for the non-production control plane. But what about the actual work? To start, I’ll use two 4 vCPU/8GB RAM worker nodes each for production and non-production, and three of that same size for the internal cluster.

    In case you aren’t keeping a running total, the new plan is as follows:

    • 7 small nodes (2 vCPU/4GB RAM) for control plane nodes across the three clusters (3 for internal and production, 1 for non-production)
    • 7 medium nodes (4 vCPU/8GB RAM) for worker nodes across the three clusters (2 for non-production and production, 3 for internal).

    So, it’s 14 VMs, up from 10, but it works out to only an extra 2 vCPUs and 4 GB of RAM. I suppose I can live with that.

    Make it so!

    Since most of my server creation is scripted, I only had to make a few changes to support this updated structure. I added a taint to the RKE2 configuration for the server nodes so that only critical items are scheduled on them.

    node-taint:
    - CriticalAddonsOnly=true:NoExecute

    I also removed any server nodes from the tfx-<cluster name> DNS record, since the Nginx pods will only run on agent nodes now.

    Once that was done, I just had to provision new agent nodes for each of the clusters, and then replace the current server nodes with newly provisioned nodes that have a smaller footprint and the appropriate taints.

    It’s worth noting, in order to prevent too much churn, I manually applied a CriticalAddonsOnly taint (with the NoSchedule effect, so existing pods were not evicted all at once) to each existing server node AFTER I had all the agents provisioned but before I started replacing server nodes. That way, Kubernetes would not attempt to schedule a user workload coming off the old server onto another server node, but instead force it on to an agent. For your reference and mine, that command looks like this:

    kubectl taint nodes <node name> CriticalAddonsOnly=true:NoSchedule

    Results

    I would classify this as a success with an asterisk next to it. I need more time to determine if the cluster stability, particularly for the internal cluster, improves with these changes, so I am not willing to declare outright victory.

    It has, however, given me a much better view into how much processing I actually need in a cluster. For my non-production cluster, the two agents are averaging under 10% load, which means I could probably lose one agent and still be well under 50% load on that node. The production agents are averaging about 15% load. Sure, I could consolidate, but part of the desire is to have some redundancy, so I’ll stick with two agents in production.

    The internal cluster, however, is running pretty hot. I’m running a number of pods for Grafana Mimir/Loki/Tempo ingestion, as well as Prometheus on that cluster itself. So those three nodes are running at about 50-55% average load, with spikes above 100% on the one agent that is running both the Prometheus collector and a copy of the Mimir ingester. I’m going to keep an eye on that and see if the load creeps up. In the meantime, I’ll also be looking to see what, if anything, can be optimized or offloaded. If I find something to fix, you can be sure it’ll make a post.

  • Git Out! Migrating to GitHub

    Git is Git. Wherever it’s hosted, the basics are the same. But the features and community around the tooling have driven me to make a change.

    Starting Out

    My first interactions with Git happened around 2010, when we decided to move away from Visual SourceSafe and Subversion and onto Git. At the time, some of the cloud services were either in their infancy or priced outside of what our small business could absorb. So we stood up a small Git server to act as our centralized repository.

    The beauty of Git is that, well, everyone has a copy of the repository locally, so it’s a little easier to manage the backup and disaster recovery aspects of a centralized Git server. So the central server is pretty much a glorified file share.

    To the Cloud!

    Our acquisition opened up access to some new tools, including Bitbucket Cloud. We quickly moved our repositories to Bitbucket Cloud so that we could decommission our self-hosted server.

    Personally, I started storing my projects in Bitbucket Cloud. Sure, I had a GitHub account. But I wasn’t ready for everything to be public, and Bitbucket Cloud offered unlimited private repos. At the time, I believe GitHub was charging for private repositories.

    I also try to keep my home setup as close to work as possible in most cases. Why? Well, if I am working on a proof of concept that involves specific tools and their interaction with one another, it’s nice to have a sandbox that I can control. My home lab ecosystem has evolved based on the ecosystem at my job:

    • Self-hosted Git / TeamCity
    • Bitbucket Cloud / TeamCity
    • Bitbucket Cloud / Azure DevOps
    • Bitbucket Cloud / Azure DevOps / ArgoCD

    To the Hub!

    Even before I changed jobs, a move to GitHub was in the cards, both personally and professionally.

    On the personal side, I cannot think of a more popular platform or community than GitHub for sharing and finding open/public code. My GitHub profile is, in a lot of ways, a portfolio of my work and contributions. As I have started to invest more time into open source projects, my portfolio has grown. Even some of my “throw away” projects are worth a little, if only as a reference for what to do and what not to do.

    Professionally, GitHub has made a great many strides in its Enterprise offering. Microsoft’s acquisition has only pushed that further, giving GitHub access to some of the CI/CD pipeline capabilities that Azure DevOps has, coupled with GitHub’s ease of use. One of the projects on the horizon at my old company was to identify if GitHub and GitHub Actions could be the standard for build and deploy moving forward.

    With my move, we have a mix of ecosystems: GitHub + Azure DevOps Pipelines. I would like to think that, long term, I could get to GitHub + GitHub Actions (at least at home), but the interoperability of Azure DevOps Pipelines with Azure itself makes it hard to migrate completely. So, with a new professional ecosystem in front of me, I decided it was time to drop BitBucket Cloud and move to GitHub for everything.

    Organize and Move

    Moving the repos is, well, simple. Using GitHub’s Import functionality, I pointed at my old repositories, entered my BitBucket Cloud username and personal access token, and GitHub imported them.

    This simplicity meant I had time to think about organization. At this point, I am using GitHub for two pretty specific types of projects:

    • Storage for repositories, either public or private, that I use for my own portfolio or personal projects.
    • Storage for repositories, all public, that I have published as true Open Source projects.

    I wanted to separate the projects into different organizations, since the hope is the true Open Source projects could see contributions from others in the future. So before I started moving everything, I created a new GitHub organization. As I moved repositories from BitBucket Cloud, I put them in either my personal GitHub space or this new organization space, based on their classification above. I also created a new SonarCloud organization to link to the new GitHub organization.

    All Moved In!

    It really only took about an hour to move all of my repositories and re-configure any automation that I had to point to GitHub. I set up new scans in the new SonarCloud organization and re-pointed the actions correctly, and everything seems to be working just fine.

    With all that done, I deleted my BitBucket Cloud workspaces. Sure, I’m still using Jira Cloud and Confluence Cloud, but I am at least down a cloud service. Additionally, since all of the projects that I am scanning with Sonar are public, I moved them to SonarCloud and deleted my personal instance of SonarQube. One less application running in the home lab.

  • I Did a Thing

    I have been participating in the open source software community for a while. My expedition into 3D modeling and printing has brought me to a new type of open community.

    Finding Inspiration

    The Google Feed on my phone popped up an article on all3dp.com called “The 30 Most Useful Things to Print in PLA.” I was intrigued, so I clicked on it and read through.

    There were a number of useful items, but many of them were duplicates of things I already owned. Phone stands and Raspberry Pi cases are great first prints. However, I usually buy mine to get built-in wireless charging on phone stands and various features on Pi cases.

    The third item in that article, however, almost spoke to me through the screen.

    A Stack of Cards

    I recently went through my office desk drawers in an attempt to organize. What I quickly realized was I had a collection of USB, SD, and MicroSD cards. The MicroSDs I’ve amassed as a side effect of my Raspberry Pi projects. The USB sticks are just “good things to have around,” especially in an era where none of my laptops even have a CD drive anymore. I can load up an OS on a bootable USB and reload the laptop.

    The problem I had was, well, they all sat in a small container in my drawer. There was little organization, just a bucket of parts. As I browsed the all3dp.com article, I came across the USB SD and MicroSD Holder. The design had such beauty in its simplicity. No frills, just slots for USB, SD, and MicroSD cards in a way that makes them organized and easily accessible.

    Making It My Own

    Sure, I could have printed it as is and been done with my journey. But, well, what fun is that! I took a look at the design and added a few requirements of my own.

    1. I wanted something that fit relatively neatly in my desk drawer, and used the space that I had.
    2. I needed some additional slots for all storage types.
    3. Most importantly, I wanted some larger spacing in between items to allow the bear claws that I call hands easy access to the smaller cards.

    With these new requirements, I fired up Fusion 360 and got to work. I used some of the measurements from Lalo_Solo’s design for the card slots, but added some additional spacing in between for easy access. I extended the block to fit the width of my desk drawer, with enough padding to grab the block out of the drawer without an issue. That extension was enough to get the additional storage I needed.

    And with that, I sent the design off to Pittsburgh3DPrints to get printed. A few days later, I picked up my print.

    It turned out great! I brought it home and loaded it up with a few of my USB sticks and SD cards, as seen above. I have a few more, but not enough to fill up the entire holder, which means I have some room for expansion.

    I chuckled a little when I picked up the print: at the shop, I noticed another print of my design on the desk with a few USB/SD cards in it. It felt awesome to see that the design is useful for more than just me!

    Dropping the Remix

    Like DJ Khaled, I wanted to drop the remix on the world. I went about creating a Thingiverse.com account and posted the new design file, along with the above picture. With Thingiverse, you can post things as “remixes” of existing designs. This is a great way to attribute inspiration to the original designers, and keep in line with the Creative Commons license.

    With a new Thingiverse account, I will work on posting the designs for the other projects I printed. At this point, nothing I did was groundbreaking, but it’s nice to share… you never know when someone might find your design useful.

  • Stacks on Stacks!

    I have Redis installed at home as a simple caching tool. Redis Stack adds on to Redis OSS with some new features that I am eager to start learning. But, well, I have to install it first.

    Charting a Course

    I have been using the Bitnami Redis chart to install Redis on my home K8s cluster. The chart itself provides the necessary configuration flexibility for replicas and security. However, Bitnami does not maintain a similar chart for redis-stack or redis-stack-server.

    There are some published Helm charts from Redis; however, they lack the built-in flexibility and configurability that the Bitnami charts provide. The Bitnami chart is so flexible, I wondered if it was possible to use it with the redis-stack-server image. A quick search showed I was not the only person with this idea.

    New Image

    Gerk Elznik posted last year about deploying Redis Stack using Bitnami’s Redis chart. Based on this post, I attempted to customize the Bitnami chart to use the redis-stack-server image. Gerk’s post indicated that a new script was needed to successfully start the image. That seemed like an awful lot of work, and, well, I really didn’t want to do that.

    In the comments of Gerk’s post, Kamal Raj posted a link to his version of the Bitnami Redis Helm chart, modified for Redis Stack. This seemed closer to what I wanted: a few tweaks and off to the races.

    In reviewing Kamal’s changes, I noticed that everything he changed could be overridden in the values.yaml file. So I made a few changes to my values file:

    1. Added repository and tag in the redis.image section, pointing the chart to the redis-stack-server image (roughly the override sketched below).
    2. Updated the command for both redis.master and redis.replica to reflect Kamal’s changes.
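
    For reference, the image override in my values file looks roughly like this (my values nest the chart under a redis key, and the tag here is illustrative rather than the exact one I pinned):

    redis:
      image:
        registry: docker.io
        repository: redis/redis-stack-server
        tag: latest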

    I ran a quick template, and everything looked like it generated correctly, so I committed the changes and let ArgoCD take over.

    Nope….

    ArgoCD synchronized the stateful set as expected, but the pod didn’t start. The error in the K8s events was about “command not found.” So I started digging into the “official” Helm chart for the redis-stack-server image.

    That chart is very simple, which means it was pretty easy to see that there was no special command for startup. So, I started to wonder if I really needed to override the command, or if I could simply use the redis-stack-server image in place of the default image.

    So I commented out the custom overrides to the command settings for both master and replica, and committed those changes. Lo and behold, ArgoCD synced and the pod started up great!

    What Matters Is, Does It Work?

    Excuse me for stealing from Celebrity Jeopardy, but “Gussy it up however you want, Trebek, what matters is, does it work?” For that, I needed a Redis client.

    Up to this point, most of my interactions with Redis have simply been through the redis-cli that’s installed on the image. I use kubectl to exec into the pod and run redis-cli to see what keys are in the instance.

    Sure, that works fine, but as I start to dig into Redis a bit more, I need a client that lets me visualize the database a little better. As I was researching Redis Stack, I came across RedisInsight, and thought it was worth a shot.

    After installing RedisInsight, I set up port forwarding from my local machine into the Kubernetes service. This allows me to connect directly to the Redis instance without creating a long-term NodePort service or some other forwarding mechanism. Since I only need access to the Redis server from within the cluster, this helps keep it secure.
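
    The port forward itself is a one-liner with kubectl (the service and namespace names below are placeholders for my actual ones):

    kubectl port-forward service/<redis service name> 6379:6379 -n <namespace>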

    I got connected, and the instance shows up. But, no modules….

    More Hacking Required

    As it turns out, the Bitnami Redis chart changes the startup command to a script within the chart. This enables some of the chart’s flexibility, but comes at the cost of bypassing the entrypoint scripts in the image. Specifically, the entrypoint script for redis-stack-server, which uses command-line arguments to load the modules.

    Now what? Well, there’s more than one way to skin a cat (to use an arcane and cruel sounding metaphor). Reading through the Redis documentation, you can also load modules through the configuration. Since the Bitnami Redis chart allows you to add to the configuration using the values.yaml file, that’s where I ended up. I added the following to my values.yaml file:

    master:
        configuration: | 
          loadmodule /opt/redis-stack/lib/redisearch.so MAXSEARCHRESULTS 10000 MAXAGGREGATERESULTS 10000
          loadmodule /opt/redis-stack/lib/redistimeseries.so
          loadmodule /opt/redis-stack/lib/rejson.so
          loadmodule /opt/redis-stack/lib/redisbloom.so
    

    With those changes, I now see the appropriate modules running.

    Lots Left To Do

    As I mentioned, this seems pretty “hacky” to me. Right now, I have it running, but only in standalone mode. I haven’t had the need to run a full Redis cluster, but I’m SURE that some additional configuration will be required to apply this to running a Redis Stack cluster. Additionally, I could not get the Redis Gears module loaded, but I did get Search, JSON, Time Series, and Bloom installed.

    For now, that’s all I need. Perhaps if I find I need Gears, or I want to run a Redis cluster, I’ll have to revisit this. But, for now, it works. The full configuration can be found in my non-production infrastructure repository. I’m sure I’ll move to production, but everything that happens here happens in non-production first, so keep tabs on that if you’d like to know more.

  • WSL for Daily Use

    Windows Subsystem for Linux (WSL) lets me do what I used to do in college: have Windows and Linux on the same machine. In 1999, that meant dual booting. Hypervisors everywhere and increased computing power mean that today, well, I just run a VM, without even knowing I’m running one.

    Docker Started It All

    When WSL first came out, I read up on the topic, but never really stepped into it in earnest. At the time, I had no real use for a Linux environment on my desktop. As my home lab grew and I dove into the world of Kubernetes, I started to use Linux systems more.

    With that, my familiarity with, and love of, the command line started to come back. Sure, I use Powershell a lot, but there’s nothing more nerdy than running headless Linux servers. What really threw me back into WSL was some of the ELT work I did at my previous company.

    Diving In

    It was much easier to get the various Python tools running in Linux, including things like the Anaconda virtual environment manager. At first, I was using Windows to clone and edit the files using VS Code. Through WSL, I accessed the files using the /mnt/ paths in Ubuntu to get to my drives.

    In some reading, I came across the guide for using VS Code with WSL. It describes how to use VS Code to connect to WSL as a remote computer and edit the files “remotely.” Which sounds weird because it’s just a VM, but it’s still technically remote.

    With VS Code set up for remote access, I stopped using the /mnt/ folders and started cloning repositories within the WSL Ubuntu instance itself.

    Making It Pretty

    I am a huge fan of a pretty command line. I have been using Oh My Posh as an enhancement to Powershell and Powershell Core for some time. However, Oh My Posh is meant to be used for any shell, so I got to work on installing it in WSL.

    As it turns out, in this case, I did use the /mnt mount path in order to share my Oh My Posh settings file between my Windows profile and the WSL Ubuntu box. In this way, I have the same Oh My Posh settings, regardless of whether I’m using Windows Powershell/Powershell Core or WSL Ubuntu.
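
    In practice, that just means pointing the shell initialization at the theme file on the Windows side. In my WSL Ubuntu ~/.bashrc, the line looks something like this (the user and theme file names are placeholders):

    eval "$(oh-my-posh init bash --config /mnt/c/Users/<windows user>/<theme file>.omp.json)"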

    Bringing It All Together

    How can I get to WSL quickly? Well, through Windows Terminal! Windows Terminal supports a number of different prompts, including the standard command prompt, Powershell, and Powershell Core. It also lets you start a WSL session via a Terminal Profile.
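
    For reference, a WSL entry in the Windows Terminal settings.json profiles list can be as small as this (a minimal sketch; the profile name and distribution are placeholders, and Terminal will also auto-generate WSL profiles on its own):

    {
      "name": "Ubuntu (WSL)",
      "commandline": "wsl.exe -d Ubuntu"
    }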

    This integration means my Windows Terminal is now my “go to” window for most tasks, whether in WSL or on my local box.

  • Don’t Mock Me!

    I spent almost two hours on a unit test issue yesterday, walking away with the issue unresolved and myself frustrated. I came back to it this morning and fixed it in 2 minutes. Remember, you don’t always need a mock library to create fakes.

    The Task at Hand

    In the process of removing some obsolete warnings from our builds, I came across a few areas where the change was less than trivial. Before making the changes, I decided it would be a good idea to write some unit tests to ensure that my changes did not affect functionality.

    The class to be tested, however, took IConfiguration in the constructor. Our current project does not make use of the Options pattern in .Net Core, meaning anything that needs configuration values has to carry around a reference to IConfiguration and then extract the values manually. Yes, I will want to change that, but not right now.

    So, in order to write these unit tests, I had to create a mock of IConfiguration that returned the values this class needed. Our project currently uses Telerik JustMock, so I figured it would be a fairly easy task to mock. However, I ran into a number of problems that had me going down the path of creating multiple mock classes for different interfaces, including IConfigurationSection. I immediately thought “There has to be a better way.”

    The Better Way

    Some quick Google research led me to this gem of a post on StackOverflow. In all my time with .Net configuration, I never knew about or used the AddInMemoryCollection extension. And that led me to the simplest solution: create a “real boy” instance of IConfiguration with the properties my class needs, and pass that to the class under test.
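
    Here is a minimal sketch of what that looks like (the WidgetService class and configuration keys are made up for illustration; the technique is just the AddInMemoryCollection extension from Microsoft.Extensions.Configuration):

    using System.Collections.Generic;
    using Microsoft.Extensions.Configuration;
    using Xunit;
    
    // Stand-in for a class that takes IConfiguration in its constructor
    // and extracts values manually (hypothetical, for illustration only).
    public class WidgetService
    {
    	private readonly IConfiguration _configuration;
    
    	public WidgetService(IConfiguration configuration) => _configuration = configuration;
    
    	public string GetEndpoint() => _configuration["Widgets:BaseUrl"];
    }
    
    public class WidgetServiceTests
    {
    	[Fact]
    	public void GetEndpoint_ReadsValueFromConfiguration()
    	{
    		// Build a real IConfiguration backed by an in-memory dictionary,
    		// no mocking library required.
    		IConfiguration configuration = new ConfigurationBuilder()
    			.AddInMemoryCollection(new Dictionary<string, string>
    			{
    				["Widgets:BaseUrl"] = "https://localhost/widgets"
    			})
    			.Build();
    
    		var service = new WidgetService(configuration);
    
    		Assert.Equal("https://localhost/widgets", service.GetEndpoint());
    	}
    }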

    I suppose this is “dumb mocking” in the sense that it doesn’t use libraries written and dedicated to mocking objects. But it gets the job done in the simplest way possible.

  • New Cert, now what?

    I completed my PADI Advanced Open Water certification over the weekend. The question is, what’s next?

    Advanced Open Water

    The Advanced Open Water certification is a continuation of the Open Water training with a focus on opening up new dive sites, primarily by expanding the available depth. My certification focused on five areas:

    1. Deep Dive (below 18m/60ft)
    2. Navigation
    3. Buoyancy
    4. Boat
    5. Drift

    The Boat and Drift specialties were a fun look back at pretty much the only dive types I’ve ever done: of my 17 official dives, 16 have been boat AND drift dives. Truthfully, I’d be a little anxious if I had to find an anchored dive boat by myself.

    The Deep specialty opens up a number of new dive sites below 60 feet, and taught me a little more about the pressure at those depths. On paper, I see the math regarding depth and atmosphere, but it’s incredible to see just how much difference there is between 20m and 30m in terms of pressure. I also learned a bow knot, and how to tie one at 90 feet.

    Navigation was interesting, although pretty easy considering the environment. Swimming a square with a compass is much different when you can see the full square in the 30+ feet of visibility of the Caribbean versus the 6 feet of visibility in a quarry. Considering I’ve only ever done drift dives, precise navigation has been somewhat less important. I will have to work on my orienteering on dry land so that I’m more comfortable with a compass.

    Buoyancy was, by far, the most useful of the specialty dives. I’ve been pretty consistently using 10 kilograms (22 lbs.) since I got certified. However, I forgot that, when I did my certification dives, I was wearing a 3mm shorty wetsuit (short sleeves, short legs). Since then, I’ve shed the wetsuit since I’ve been diving in warmer waters. However, I didn’t shed the weight. Through some trial and error, my instructor helped me get down to diving with 4 kilograms (8.8 lbs.). My last 4 dives were at that weight and there was a tremendous difference. Far less fiddling with my BCD to adjust buoyancy, and a lot more opportunity to use breathing and thrust to control depth.

    So many specialties…

    PADI offers a TON of specialty courses. Wreck Diving, Night Diving, Peak Performance Buoyancy, Search and Rescue, and so many more. I’m interested in a number of them, so the question really is, what’s my plan?

    Right now, well, I think I am going to review their specialty courses and make a list. As for “big” certifications, Rescue Diver seems like the next logical step, but it requires a number of specialties first. However, there is something to be said for just diving. Every dive has increased my confidence in the basics, making every dive more enjoyable. So I don’t anticipate every trip being a “certification trip.” Sometimes, it’s just nice to dive!