Author: Matt

  • Node_Modules is the new DLL hell… Change my mind.

    All I wanted to do over the weekend was take a React 16 class library, copy it, strip out the components (leaving the webpack configuration intact), and upgrade the packages.

    To call it a nightmare is being nice. My advice to anyone is this: upgrade packages one at a time, and test thoroughly between each upgrade. Not just “oh, it installed,” but make sure everything is compatible with everything else.

    I ran into at least three issues that were “this package relies on an older version of this other package, but you installed the newer package” or, even better, “this package relies on a version of a plugin package that is inconsistent with the core version of the plugin’s base package.” Note that this wasn’t the actual error message, just my conclusion after a tremendous amount of spelunking.

    This morning, I literally gave up: I reverted to a known set of working packages/versions, deleted node_modules, cleared my NPM cache, and started over.

  • Lab time is always time well spent.

    The push to get a website out for my wife brought to light my neglect of my authentication site, both in function and style. Since one of the ongoing projects at work centers on Identity Server, a refresher course would help my cause. So over the last week, I dove into my Identity Server to get a better feel for its function and to push some much-needed updates.

    Function First

    I had been running Identity Server 4 in my local environments for a while. I wanted a single point of authentication for my home lab, and since Identity Server has been in use at my company for a few years now, it made sense to continue that usage in my lab environment.

    I spent some time over the last few years expanding the Quickstart UI to allow for visual management of the Identity Server. This includes utilizing the Entity Framework integration for the configuration and operational stores and the ASP.NET Identity integration for user management. The system works exactly as expected: I can lock down APIs or UIs under a single authentication service, and I can standardize my authentication and authorization code. It has resulted in way more time for fiddling with new stuff.

    The upgrade from Identity Server 4 to Identity Server 5 (now under Duende’s Identity Server) was very easy, thanks to a detailed upgrade guide from the folks at Duende. And, truthfully, I left it at that for a few months. Then I got the bug to use Google as an external identity provider. After all, what would be nicer than to not even have to worry about remembering account passwords for my internal server?

    When I pulled the latest quickstart for Duende, though, a lot had changed. I spent at least a few hours porting the changes in the authorization flow into my codebase. The journey was worth it, however: I now have a working authentication provider that can use Google as an identity provider.

    Style next

    As I was messing around with the functional side, I got pretty tired of the old CoreUI theme I had applied some years ago. I went looking for a Bootstrap 5-compatible theme and found one in PlainAdmin. It is simple enough for my identity server, but a little crisper than the old CoreUI.

    As is the case with many a software project, the functional changes took a few hours; the visual changes, quite a bit longer. I had to shuffle the layout to better conform to PlainAdmin’s templates, and I had a hell of a time stripping out the old SCSS to make way for the new. I am quite sure my app is still bloated with unnecessary styles, but cleaning that up will have to wait for another day.

    And finally a logo…

    What’s a refresh without a new logo?

    A little background: spyder007, in some form or another, has been my online nickname/handle since high school. It is far from unique, but it is a digital identity I still use. So Spydersoft seemed like a natural name for private projects.

    I started with a spider logo that I found with a good Creative Commons license. But I wanted something a little more, well, plain. So I did some searching and found a “web” design that I liked with a CC license. I edited it to be square, which had the unintended effect of making the web look like a cube… and I liked it.

    The new Spydersoft logo

    So with about a week’s worth of hobby-time tinkering, I was able to add Google as an identity provider and re-style my authentication site. As I’m the only person who sees that site, it may seem a bit much, but the lessons learned, particularly around the Identity Server functional pieces, have already translated into valuable insights for the current project at work.

  • React in a weekend…

    Last week was a bit of a ride. My wife was thrust into her real estate career by a somewhat abrupt (but not totally unexpected) reduction in force at her company. We spent the middle of the week shopping to replace her company vehicle, which I do not recommend in the current market. I also offered to spend some time on standing up a small lead generation site for her so that she can establish a presence on the web for her real estate business.

    I spent maybe 10-12 hours getting something running. Could it have been done more quickly? Sure. But I am never one to resist the chance to set up deployment pipelines and API specifications. I figured the project would run longer than 5 days.

    Why only 5 days? Well, Coldwell Banker (her real estate broker) provides a LOT of tools for her, including a branded website with tie-ins to the MLS system. So I forwarded www.agentandalyn.com to her website and my site will be digitally trashed.

    Frameworks

    For familiarity, I chose the .NET 5 ASP.NET / React template as a starter. I have a number of API projects running, so I was able to reuse some boilerplate code from those projects to configure Serilog to ship logs to my internal ELK stack and to set up basic authentication against my Identity Server. The template’s tutorial is a good starting point for getting the site moving.

    On the React side of the app, I immediately updated all of the packages to their latest versions. This included moving Bootstrap to version 5. Reactstrap is installed by default but does not support Bootstrap 5. I could have dropped Reactstrap in favor of the RC version of React-Bootstrap, but I’m comfortable enough with my HTML styling, so I just used the base DOM elements and styled them with Bootstrap classes.

    It probably took me an hour or so to take the template code and turn it into the base for the home page. And then I built a deployment pipeline…

    A Pipeline, you say…

    Yes. For what amounts to a small personal project, I built an Azure DevOps pipeline that builds the project and its associated Helm chart, publishes the resulting Docker image and Helm chart to feeds in ProGet, and initiates a release in Octopus Deploy.

    While this may seem like overkill, I actually have most of this down to a pretty simple process using some standard tools and templates.

    Helm Charts made easy

    For the Helm charts, I utilize the common chart from k8s-at-home’s library-charts repository. This limits my Helm chart to some custom values in a values.yaml file that define my image, services, ingress, and any other customizations I may want.

    I typically use custom liveness and readiness probes that hit a health endpoint served up by ASP.NET Core’s custom health checks. This gives me more than just a ping check for the service.
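
    To give a rough idea of what that looks like, the values.yaml for one of these charts boils down to something like the sketch below. The image name, host, and port are placeholders, and the exact key names vary between versions of the common library chart, so treat this as an illustration rather than a drop-in file.

    # Sketch of a values.yaml for a chart built on the common library chart.
    # Placeholder values; key names vary by chart version.
    image:
      repository: registry.example.com/my-web-app
      tag: "1.0.0"

    ingress:
      main:
        enabled: true
        hosts:
          - host: www.example.com
            paths:
              - path: /

    probes:
      liveness:
        custom: true                 # replace the default probe with a custom spec
        spec:
          httpGet:
            path: /health            # ASP.NET Core health check endpoint
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 30
      # readiness is configured the same way under probes.readiness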

    Azure DevOps Pipelines

    As mentioned in more than one previous post, I am thoroughly impressed with Azure DevOps Build Pipelines thus far. One of the nicer features is the ability to save common steps/jobs/stages to a separate repository, and then re-use those build templates in other pipelines.

    Using my own templates, I was able to construct a build pipeline that compiles the code, builds and publishes a Docker image, packages and publishes a Helm chart, and creates and deploys an Octopus release, all in a 110-line YAML file.
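
    For illustration, the skeleton of such a pipeline looks something like the YAML below. The repository, template, and parameter names are placeholders rather than my actual templates; the point is the resources block that pulls in a second repository and the template reference that reuses a shared stage.

    # azure-pipelines.yml (sketch; repository, project, and template names are placeholders)
    resources:
      repositories:
        - repository: templates                  # alias used in template references below
          type: git
          name: MyProject/build-templates        # placeholder Azure DevOps project/repository

    trigger:
      branches:
        include:
          - main

    stages:
      - template: stages/build-docker-helm.yml@templates   # placeholder shared stage template
        parameters:
          imageName: my-web-app
          helmChartPath: helm/my-web-app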

    Octopus Project / Release Pipeline

    I have been experimenting with different ways to deploy and manage deployments to Kubernetes. While not the fanciest, Octopus Deploy does the job. I am able to execute a single step to deploy the Helm chart to the cluster, and can override various values with Octopus variables, meaning I can use the same process to deploy to test, stage, and production.

    Wasted effort?

    So I spent a few days standing up a website that I am in the middle of deleting. Was it worth it? Actually, yes. I learned more about my processes and potential ways to make them more efficient. It also piqued my interest in putting some UIs on top of some of my API wrappers.

  • Packer.io: Making excess too easy

    As I was chatting with a former colleague the other day, I realized that I have been doing some pretty diverse work as part of my home lab. A quick scan of my posts in this category reveals a host of topics ranging from home automation to Python monitoring to Kubernetes administration. One of my colleague’s questions was something to the effect of “How do you have time to do all of this?”

    As I thought about it for a minute, I realized that all of my Kubernetes research would not have been possible if I had not first taken the opportunity to automate the process of provisioning Hyper-V VMs. In my Kubernetes experimentation, I have easily provisioned 35-40 Ubuntu VMs, and then promptly broken two-thirds of them through experimentation. Taking the time to manually install and provision Ubuntu before I could even start work, well, that would have been a non-starter.

    It started with a build…

    In my corporate career, we have been moving more towards Azure DevOps and away from TeamCity. To date, I am impressed with Azure DevOps. Pipelines-as-code appeals to my inner geek, and not having to maintain a server and build agents has its perks. I had visions of migrating from TeamCity to Azure DevOps, hoping I could take advantage of Microsoft’s generosity with small teams. Alas, Azure DevOps is free for small teams ONLY if you self-host your build agents, which meant a small amount of machine maintenance. I wanted to be able to self-host agents with the same software that Microsoft uses for their GitHub Actions/Azure DevOps agents. After reading through the GitHub Virtual Environments repository, I determined it was time to learn Packer.

    The build agents for GitHub/Azure DevOps are provisioned using Packer. My initial hope was that I would just be able to clone that repository, run Packer, and voilà! It’s never that easy. The Packer projects in that repository are designed to provision VM images that run in Azure, not on Hyper-V. Provisioning Hyper-V machines is possible through Packer, but it requires different template files and some tweaking of the provisioning scripts. Without getting too far into the weeds, Packer uses different builders for Azure and Hyper-V, so I had to grab the provisioning scripts I wanted from the template files in the Virtual Environments repository and configure a builder for Hyper-V. Thankfully, Nick Charlton provided a great starting point for automating Ubuntu 20.04 installs with Packer. From there, I was off to the races.
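
    To give a sense of the difference, a stripped-down Hyper-V builder definition looks roughly like the JSON below. Every value is a placeholder, and the Ubuntu autoinstall pieces (the boot_command and the cloud-init files served from the http directory) are omitted entirely, so treat it as a sketch of the shape rather than a working template.

    {
      "builders": [
        {
          "type": "hyperv-iso",
          "vm_name": "ubuntu-2004-base",
          "iso_url": "REPLACE_WITH_UBUNTU_SERVER_ISO_URL",
          "iso_checksum": "none",
          "generation": 2,
          "switch_name": "Default Switch",
          "http_directory": "http",
          "communicator": "ssh",
          "ssh_username": "REPLACE_ME",
          "ssh_password": "REPLACE_ME",
          "ssh_timeout": "4h",
          "shutdown_command": "sudo shutdown -P now"
        }
      ],
      "provisioners": [
        {
          "type": "shell",
          "scripts": ["REPLACE_WITH_PROVISIONING_SCRIPTS"]
        }
      ]
    }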

    Enabling my excess

    Through probably 40 hours of trial and error, I got to the point where I was building my own build agents and hooking them up to my Azure DevOps account. It should be noted that fully provisioning a build agent takes six to eight hours, so most of that 40 hours was “fire and forget.” With that success, I started to think: “Could I provision simpler Ubuntu servers and use those to experiment with Kubernetes?”

    The answer, in short, is “Of course!” I went about creating some PowerShell scripts and Packer templates so that I could provision various configurations of Ubuntu server. I have shared those scripts, along with my build agent provisioning scripts, in my provisioning-projects repository on GitHub. With those scripts, I was off to the races, provisioning new machines at will. It is remarkable the risks you will take in a lab environment, knowing that you are only 20-30 minutes away from a clean machine should you mess something up.

    A note on IP management

    If you dig into the repository above, you may notice some scripts and code around provisioning a MAC address from a “Unifi IP Manager.” I created a small API wrapper that utilizes the Unifi Controller APIs to create clients with fixed IP addresses. The API generates a random, but valid, MAC Address for Hyper-V, then uses the Unifi Controller API to assign a fixed IP.

    That project isn’t quite ready for public consumption, but if you are interested, drop a comment on this post.
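
    Until then, the core idea is small enough to sketch here. The snippet below is a hypothetical illustration, not the wrapper’s actual code: generate a random MAC address in Hyper-V’s vendor range, then hand it to the Unifi Controller API to pin a fixed IP.

    # Hypothetical sketch; the real wrapper's routes and Unifi calls are not shown.
    import random

    HYPERV_OUI = "00:15:5D"  # Microsoft's Hyper-V dynamic MAC address prefix


    def generate_hyperv_mac() -> str:
        """Generate a random MAC address in the Hyper-V vendor range."""
        suffix = ":".join(f"{random.randint(0, 255):02X}" for _ in range(3))
        return f"{HYPERV_OUI}:{suffix}"


    if __name__ == "__main__":
        print(generate_hyperv_mac())  # e.g. 00:15:5D:7A:03:F1
        # The real wrapper then calls the Unifi Controller API to create a client
        # entry for this MAC with a fixed IP assignment; that call is omitted here.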

  • Who says the command line can’t be pretty?

    The computer, in many ways, is my digital office space. Just as you tend to that fern in your office, you need to tend to your digital space. What better way to water your digital fern than to revamp the look and feel of your command line?

    I extolled the virtues of the command line in my Windows Terminal post, and today, as I was catching up on my “Hanselman reading,” I came across an update to his “My Ultimate PowerShell prompt with Oh My Posh and the Windows Terminal” post that gave me new ways to make my command line shine.

    What’s New?

    Oh-My-Posh v3

    What started as a prompt theme engine for PowerShell has grown into a theme engine for multiple shells, including Zsh and Bash. The v3 documentation was all I needed to upgrade from v2 and modify the Powerline segments to personalize my prompt.
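
    For reference, the PowerShell side of that upgrade amounts to just a few lines. The theme name below is only an example, and the exact commands may differ between Oh-My-Posh releases, so treat this as a sketch of the idea.

    # Install the Oh-My-Posh v3 module for the current user
    Install-Module -Name oh-my-posh -Scope CurrentUser

    # In your PowerShell profile ($PROFILE):
    Import-Module oh-my-posh
    Set-PoshPrompt -Theme paradox   # example theme; swap in your own personalized config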

    Nerd Fonts

    That’s right, Nerd Fonts. Nerd Fonts are “iconic fonts” which build hundreds of popular icons into the font for use on the command line. As I was using Cascadia Code PL (Cascadia Code with Powerline glyphs), it felt only right to upgrade to the CaskaydiaCove NF Nerd Font.

    It should be noted that the Oh-My-Posh prompt is configured in your PowerShell profile, meaning it shows up in any window running PowerShell. For me, that is three applications: Microsoft Windows Terminal, Visual Studio Code, and the PowerShell Core command-line application. It is important to set the font family correctly in all of these places.

    Microsoft Windows Terminal

    Follow Oh-My-Posh’s instructions for setting the default font face in Windows Terminal.

    Visual Studio Code

    For Visual Studio Code, you need to change the fontFamily property of the integrated terminal. The easiest way to do this is to open the settings JSON (Ctrl+Shift+P and search for Open Settings (JSON)) and make sure you have the following line:

    {
      "terminal.integrated.fontFamily": "CaskaydiaCove NF"
    }

    When I was editing my Oh-My-Posh profile, I realized that it might be helpful to be able to see the icons I was using in the prompt, so I also changed my editor font.

    {
      "editor.fontFamily": "'CaskaydiaCove NF', Consolas, 'Courier New', monospace"
    }

    You can use the Nerd Font cheat sheet to search for icons to use and copy/paste the icon value into your profile.

    PowerShell Application

    With Windows Terminal, I rarely use the Windows PowerShell application, but it soothes my digital OCD to have it working. To change that window’s font, right-click the window’s title bar and select Properties. Go to the Font tab, and choose CaskaydiaCove NF (or your installed Nerd Font) from the list. This will only change the properties for the current window. If you want to change the font for any new windows, right-click the window’s title bar and select Defaults, then follow the same steps to set the default font.

    Terminal Icons

    This one is fun. In the screenshot above, notice the icons next to different file types. This is accomplished with the Terminal-Icons PowerShell module. First, install the module using the following PowerShell command:

    Install-Module -Name Terminal-Icons -Repository PSGallery

    Then, add the Import-Module command to your PowerShell profile:

    Import-Module -Name Terminal-Icons

    Too Much?

    It could be said that spending about an hour installing and configuring my machine prompts is, well, a bit much. However, as I mentioned above, sometimes you need to refresh your digital workspace.

  • Home Lab: Disaster Recovery and time for an Upgrade!

    I had the opportunity to travel to the U.S. Virgin Islands last week on a “COVID-delayed” honeymoon. I have absolutely no complaints: we had amazing weather, explored beautiful beaches, and got a chance to snorkel and scuba in some of the clearest water I have seen outside of a pool.

    Trunk Bay, St. John, US VI

    While I was relaxing on the beach, Hurricane Ida was wrecking the Gulf and dropping rain across the East, including at home. This led to power outages, which led to my home server having something of a meltdown. And in this, I learned the value of good neighbors who can reboot my server and the cost of not setting up proper disaster recovery in Kubernetes.

    The Fall

    I was relaxing in my hotel room when I got text messages from my monitoring solution. And, at first, I figured “The power went out, things will come back up in 30 minutes or so.” But, after about an hour, nothing. So I texted my neighbor and asked if he could reset the server. After he reset it, most of my sites came back up, with the exception of some of my SQL-dependent sites. I’ve had some issues with SQL Server instances not starting their services correctly, so some sites were down… but that’s for another day.

    A few days later, I got the same monitoring alerts. My parents were house-sitting, so I had my mom reset the server. Again, most of my sites came up. Being in America’s Caribbean Paradise, I promptly forgot all about it, figuring things were fine.

    The Realization

    Boy, was I wrong. When I sat down at the computer on Sunday, I randomly checked my Rancher instance. Dead. My other clusters were running, but all of the clusters were reporting issues with etcd on one of the nodes. And, quite frankly, I was at a loss. Why?

    • While I have taken some Pluralsight courses on Kubernetes, I was a little overly dependent on the Rancher UI to manage Kubernetes. With it down, I was struggling a bit.
    • I did not take the time to find and read the etcd troubleshooting steps for RKE. Looking back, I could most likely have restored the etcd snapshots and been ok. Live and learn, as it were.

    Long story short, my hacking attempts to get things running again pretty much killed my Rancher cluster and made a mess of my internal cluster. Thankfully, the stage and production clusters were still running, but with a bad etcd node.

    The Rebuild

    At this point, I decided to throw in the towel and rebuild the Rancher cluster. So I deleted the existing Rancher machines and provisioned a new cluster. Before I installed Rancher, though, I realized that my existing clusters still had the old Rancher components installed, which could cause issues when connecting them to the new instance. I took the time to remove the Rancher components from the stage and production clusters using the documented scripts.

    When I did this with my internal tools cluster… well, it made me realize there was a lot of unnecessary junk on that cluster. It was only running ELK (which was not even fully configured) and my Unifi Controller, which I moved to my production box. So, since I was already rebuilding, I decided to decommission my internal tools box and rebuild that as well.

    With two brand new clusters and two RKE clusters clean of Rancher components, I installed Rancher and got all the management running.

    The Lesson

    From this little meltdown and reconstruction, I have learned a few important lessons.

    • Save your etcd snapshots off machine – RKE takes snapshots of your etcd regularly, and there is a process for restoring from a snapshot. Had I known those snapshots were there, I would have tried that before killing my cluster. (A configuration sketch follows this list.)
    • etcd is disk-heavy and “touchy” when it comes to latency – My current setup has my hypervisor using my Synology as an iSCSI disk for all my VMs. With 12 VMs running as Kubernetes nodes, any network disruption or I/O lag can cause leader changes. This is minimal during normal operation, but performing deployments or updates can sometimes cause issues. I have a small upgrade planned for the Synology to add a 1 TB SSD read/write cache, which will hopefully improve things, but I may end up creating a new subnet for iSCSI traffic to alleviate network hiccups.
    • Slow and steady wins the race – In my haste to get everything working again, I tried some things that did more harm than good. Had I done a bit more research and found the appropriate articles earlier, I probably could have recovered without rebuilding.
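
    For that first lesson, RKE can ship its recurring etcd snapshots to S3-compatible storage instead of leaving them on the node. A rough sketch of the relevant section of cluster.yml, with placeholder endpoint, bucket, and credentials, looks like this:

    services:
      etcd:
        backup_config:
          enabled: true
          interval_hours: 6                   # take a recurring snapshot every 6 hours
          retention: 12                       # keep the last 12 snapshots
          s3backupconfig:
            endpoint: minio.example.com:9000  # placeholder S3-compatible endpoint
            access_key: REPLACE_ME
            secret_key: REPLACE_ME
            bucket_name: rke-etcd-snapshots   # placeholder bucket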

  • Inducing Panic in a Software Architect

    There are many, many ways to induce panic in people, and by no means will I be cluing you in to all the different ways you can succeed in sending me into a tailspin. However, if there is one thing that anyone can do that immediately has me at a loss for words and looking for the exit, it is to ask this one question: “What do you do for a living?”

    It seems a fairly straightforward question, and one that should have a relatively straightforward answer. If I said “I am a pilot,” then it can be assumed that I fly airplanes in one form or another. It might lead to a conversation about the different types of planes I fly, whether it’s cargo or people, etc. However, answering “I am a software architect” usually leads to one of two responses:

    1. “Oh…”, followed by a blank stare from the asker and an immediate topic change.
    2. “What’s that?”, followed by a blank stare from me as I try to encapsulate my various functions in a way that does not involve twelve PowerPoint slides and a scheduled break.

    In social situations, or, at the very least, the social situations in which I am involved, being asked what I do is usually an immediate buzz kill. While I am sure people are interested in what I do, there is no generic answer. And the specific answers are typically so dull and boring that people lose interest quickly.

    Every so often, though, I run across someone in the field, and the conversation turns more technical. Questions around favorite technology stacks, innovative work in CI/CD pipelines, or some sexy new machine learning algorithms are discussed. But for most, describing the types of IT architects out there is confusing because, well, even we have trouble with it.

    IT Architect Types

    Red Hat has a great post on the different types of IT architects. They outline different endpoints of the software spectrum and how different roles can be assigned based on these endpoints. From those endpoints, they illustrate the different roles an architect can play, color-coded to the different orientations along the software spectrum.

    However, only the largest of companies can afford to confine their architects to a single circle in this diagram, and many of us wear one or more “role hats” as we progress through our daily work.

    My architecture work to this point has been primarily developer-oriented. While I have experimented in some of the operations-oriented areas, my knowledge and understanding lie primarily in the application realm. Prior to my transfer to an architecture role, I was an engineering manager. That previous role exposed me to much more of the business side of things, and my typical frustrations today are more about what we as architects can and should be doing to support the business side of things.

    So what do I do?

    In all honesty, I used to just say “software developer” or “software engineer.” Those titles are usually more generally understood, and I can be very generic about it. But as I work to progress in my own career, the need for me to be able to articulate my current position (and desired future positions) becomes critical.

    Today, I try to phrase my answers around being a leader in delivering software that helps customers do their job better. It is usually not as technical, and therefore not as boring, but does drive home the responsibilities of my position as a leader.

    How that answer translates to a cookout, well, that always remains to be seen.

  • Simple Site Monitoring with Raspberry PI and Python

    My off-hours time this week has been consumed by writing some Python scripts to help monitor uptime for some of my sites.

    Build or Buy?

    At this point in my career, “build or buy” is a question I ask more often than not. As a software engineer, there is no shortage of open-source and commercial solutions for almost any imaginable task, and website monitoring is no different. Tools such as StatusCake, Pingdom, and LogicMonitor offer hosted platforms, while tools like Nagios and PRTG offer on-premises installations. There is so much to choose from that it’s hard to decide.

    I had a few simple requirements:

    • Simple at first, but expandable as needed.
    • Runs on my own network so that I can monitor sites that are not available outside of my firewall.
    • Since most of my servers are virtual machines consolidated on the one lab server, it does not make much sense to put the monitor on that same server. I needed something separate that could run easily on little to no power.

    Build it is!

    I own a few Raspberry Pis, but the Model 3B and 4B are currently in use. The lone unused Pi is an old Model B (i.e., model 1B), so installing something like Nagios would leave me with something barely usable when all was said and done. Given that the Raspberry Pi is at home with Python, I thought I would dust off my “language learning skills” and figure out how to make something useful.

    As I started, though, I remembered my free version of Atlassian’s Statuspage. Although the free version limits the number of subscribers and does not include text notifications, it’s perfect for my usage. And, near and dear to my developer heart, it has a very well-defined set of APIs for managing statuses and incidents.

    So, with Python and some additional modules, I created a project that lets me do a quick request on a desired website. If the website is down, the Statuspage status for the associated component is changed and an incident is created. If/when it comes back up, any open incidents associated with that component are closed.
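
    The heart of the script is small; a simplified sketch is below (incident creation is omitted). The page, component, and key values are placeholders, and the exact Statuspage payloads should be double-checked against their API documentation.

    # Simplified sketch of the check-and-report loop; IDs, key, and URL are placeholders.
    import requests

    API_KEY = "your-statuspage-api-key"
    PAGE_ID = "your-page-id"
    COMPONENT_ID = "your-component-id"
    SITE_URL = "https://example.com/health"

    HEADERS = {"Authorization": f"OAuth {API_KEY}"}


    def site_is_up(url: str) -> bool:
        """Return True if the site answers with a 2xx response within 10 seconds."""
        try:
            return requests.get(url, timeout=10).ok
        except requests.RequestException:
            return False


    def set_component_status(status: str) -> None:
        """Update the Statuspage component, e.g. 'operational' or 'major_outage'."""
        requests.patch(
            f"https://api.statuspage.io/v1/pages/{PAGE_ID}/components/{COMPONENT_ID}",
            headers=HEADERS,
            json={"component": {"status": status}},
            timeout=10,
        )


    if __name__ == "__main__":
        set_component_status("operational" if site_is_up(SITE_URL) else "major_outage")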

    Voilà!

    After a few evening hours of tinkering, I have the scripts doing some basic work. For now, a cron job executes the script every 5 minutes, and if a site goes down, it is reported to my Statuspage site.

    Long term, I plan on adding support for more in-depth checks of my own projects, which utilize .NET’s HealthChecks namespace to report service health automatically. I may also look into setting up the scripts as a service running on the Pi.

    If you are interested, the code is shared on GitHub.

  • Free Guy and the “Myth” of AI

    I have been able to get out and enjoy some movies with my kids over the last few weeks. Black Widow, Jungle Cruise, and, most recently, Free Guy, have given me the opportunity to get back in the theaters, something I did not realize I missed as much as I did.

    The last of those, Free Guy, is one of the funniest movies I have seen in a long time, and, considering the trailers, I am not giving anything away when I say there is an element of artificial intelligence within the plot. And it got me thinking more about how AI is perceived versus what it can do, and perhaps how that perception is oddly self-limiting.

    Erik Larson’s The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do explores this topic in much greater depth than I can here, but Larson’s views mirror my own: the myth isn’t that true AI is possible, but rather, the myth is that its arrival is inevitable based on our present trajectory. And the business of AI is interfering with the science of AI in some very big ways.

    Interfering? How, do you ask, can monetizing artificial intelligence interfere with its own progress?

    AI today is good at narrow applications involving inductive reasoning and data processing, like recognizing images or playing games. But these successes do not push AI towards a more general intelligence. These successes do, however, make AI a viable business offering.

    Human intelligence is a blend of inductive reasoning and conjecture, i.e. guesses. These are guesses informed by our own experiences and the context of the situation, called abduction by the AI community. And we have no idea how to program this type of contextual/experiential guessing into computers today. But our success in those narrow areas has taken the focus away from understanding the complexities of abduction, and has stifled innovation within the field.

    It is a scientific unknown as to whether or not we are capable of producing an artificial intelligence with levels of both inductive reasoning and conjecture. But to assume it will “just happen” by chipping away at one part of the problem is folly. Personally, I believe artificial intelligence is possible, but not without a shift in focus from productization to research and innovation. If we understand how people make decisions, we can not only try to mimic that behavior with AI, but also gain more insight into ourselves in the process.

  • Hardening your Kubernetes Cluster: Don’t run as root!

    People sometimes ask me why I do not read for pleasure. As my career entails ingesting the NSA/CISA technical report on Kubernetes Hardening Guidance and translating it into actionable material, I ask that you let me enjoy hobbies that do not involve the written word.

    The NSA/CISA technical report on Kubernetes hardening came out on Tuesday, August 3. Quite coincidentally, my colleagues and I had started asking questions about our internal standards for securing Kubernetes clusters for production. Coupled with my current home lab experimentation, I figured it was a good idea to read through this document and see what I could do to secure my lab.

    Hopefully, I will work through the whole document and write up how I’ve applied it to my home lab (or, at least, to the production cluster in my home lab). For now, though, I thought it prudent to look at the Pod Security section. And, as one might expect, the first recommendation is…

    Don’t run as root!

    For as long as I can remember working in Linux, not running as root was literally “step one.” So it amazes me that, by default, containers are configured to run as root. All of my home-built containers are pretty much the same: simple Dockerfiles that copy the results of an external dotnet publish command into the container and then run the dotnet entry point.

    My original Dockerfiles used to look like this:

    FROM mcr.microsoft.com/dotnet/aspnet:5.0-focal AS base
    WORKDIR /app
    COPY . /app
    
    EXPOSE 80
    ENTRYPOINT ["dotnet", "my.dotnet.dll"]

    With some Stack Overflow assistance, it now looks something like this:

    FROM mcr.microsoft.com/dotnet/aspnet:5.0-focal AS base
    WORKDIR /app
    COPY . /app
    # Create a group and user so we are not running our container and application as root (user 0), which is a security issue.
    RUN addgroup --system --gid 1000 mygroup \
        && adduser --system --uid 1000 --ingroup mygroup --shell /bin/sh myuser

    # Serve on port 8080; we cannot serve on port 80 with a custom user that is not root.
    ENV ASPNETCORE_URLS=http://+:8080
    EXPOSE 8080

    # Tell Docker that all subsequent commands should run as the new user; the numeric UID must be used.
    USER 1000

    ENTRYPOINT ["dotnet", "my.dotnet.dll"]

    What’s with the port change?

    The Dockerfile changes are pretty simple: add the commands to create a new group and user, and use the USER instruction to tell Docker to execute as that user. But why change ASPNETCORE_URLS and the exposed port? When running as a non-root user, you are restricted to ports above 1024, so I had to change the port the app listens on. This necessitated some changes to my Helm charts and their service deployments, but, overall, the process was straightforward.
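
    In the Helm charts, that mostly meant pointing the service (and any health probes) at the new port. For a chart built on the common library chart I mentioned in an earlier post, the relevant values look roughly like this (key names vary by chart version, and the values are placeholders):

    service:
      main:
        ports:
          http:
            port: 80            # port exposed by the Kubernetes service
            targetPort: 8080    # non-privileged port the container now listens on

    env:
      ASPNETCORE_URLS: "http://+:8080"   # matches the Dockerfile above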

    My next steps

    When I find some spare time in the next few months, I’m going to revisit Pod Security Policies and, more specifically, their upcoming replacement: PSP Replacement Policy. I find it amusing that the NSA/CISA released guidelines that specify usage of what is now a deprecated feature. I also find it typical of our field that our current name for the new version is, quite simply, “Pod Security Policy Replacement Policy.” I really hope they get a better name for that…