• Git, you are messing with my flow!

    The variety of “flows” for developing with Git makes choosing the right one for your team difficult. When you throw true continuous integration and delivery into the mix, and add a requirement for immutable build objects, well… you get a heaping mess.

    Infrastructure As Code

    Some of my recent work to help one of our teams has been creating a Terraform project and Azure DevOps pipeline based on a colleague’s work in standardizing Kubernetes cluster deployments in AKS. This work will eventually get its own post, but suffice it to say that I have a repository which is a mix of Terraform files, Kubernetes manifests, Helm value files, and Azure DevOps pipeline definitions to execute them all.

    As I started looking into this, it occurred to me that there really is no clear way to manage this project. For example, do I use a single pipeline definition with stages for each environment (dev/stage/production)? This would mean the Git repo would have one and only one branch (main), and each stage would need some checks (manual or automatic) to ensure rollout is controlled.
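
    To make that concrete, here is a minimal sketch of the single-pipeline option (the stage names, environment names, and steps are all illustrative, not the actual project):

    # azure-pipelines.yml - one branch (main), staged rollout
    trigger:
      branches:
        include:
          - main

    stages:
      - stage: Dev
        jobs:
          - deployment: ApplyDev
            environment: aks-dev            # hypothetical environment name
            strategy:
              runOnce:
                deploy:
                  steps:
                    - script: terraform apply -auto-approve   # illustrative; a real stage would init/plan first

      - stage: Stage
        dependsOn: Dev
        jobs:
          # approvals and checks attach to the Azure DevOps environment,
          # providing the manual or automatic gate between stages
          - deployment: ApplyStage
            environment: aks-stage          # hypothetical environment name
            strategy:
              runOnce:
                deploy:
                  steps:
                    - script: terraform apply -auto-approve

      - stage: Production
        dependsOn: Stage
        jobs:
          - deployment: ApplyProduction
            environment: aks-production     # hypothetical environment name
            strategy:
              runOnce:
                deploy:
                  steps:
                    - script: terraform apply -auto-approve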

    Alternatively, I can have an Azure Pipeline for each environment. This would mean that each pipeline could trigger on its own branch. However, it also means that no standard Git Flow seems to work well with this.

    Code Flows

    Quite separately, as the team dove into creating new repositories, the question again came up around branching strategy and subsequent environment strategies for CI/CD. I am a vocal proponent of immutable build objects, but how each team chooses to get there is up to them.

    In our MS Teams channel discussions, pros and cons have surfaced for nearly all of our current methods, and the team seems stuck on the best way to build and develop.

    The Internet is not helping

    Although it is a few years old, Scott Shipp’s War of the Git Flows article highlights the ongoing “flow wars.” One of the problems with Git is the ease with which all of these variations can be implemented. I am not blaming Git, per se, but because it is easy for nearly anyone to suggest a branching strategy, things get muddled quickly.

    What to do?

    Unfortunately, as with many things in software, there is no one right answer. The branching strategy you use depends not just on the team’s requirements, but on the individual repository’s purpose. Additional requirements may come from the desired testing, integration, and deployment process.

    With that in mind, I am going to make two slightly radical suggestions:

    1. Select a flow that is appropriate for the REPOSITORY, not your team.
    2. Start backwards: work on identifying the requirements for your deployment process first. Answer questions like these:
    • How will artifacts, once created, be tested?
    • Will artifacts progress through lower environments?
    • Do you have to support old releases?
    • Where (what branch) can release candidate artifacts come from? Or, what code will ultimately be applied to production?

    That last one is vital: defining on paper which branches will either be applied directly to production (in the case of Infrastructure as Code) or generate artifacts that can end up in production (in the case of application builds) will help outline the requirements for the project’s branching strategy.

    Once you have identified that, the next step is defining the team’s requirements WITHIN those constraints. In other words, try not to have the team hammer a square peg into a round hole. The team has a branch or two that will generate release code; how they get their code into that branch should be as quick and direct as possible while supporting the necessary collaboration.

    What’s next

    What does that mean for me? Well, for the infrastructure project, I am leaning towards a single pipeline, but I need to talk to the Infrastructure team to make sure they agree. As for the software team, I am going to encourage them to apply the process above for each repository in the project.

  • ISY and the magic network gnomes

    For nearly 2 years, I struggled mightily with communication issues between my ISY 994i and some of my Docker containers and servers. So much, in fact, that I had a fairly long-running post in the Universal Devices forums dedicated to the topic.

    I figure it is worth a bit of a rehash here, if only to raise the issue in the hopes that some of my more network-experienced contacts can suggest a fix.

    The Beginning

    The initial post was essentially about my ASP.NET Core API (.NET Core 2.2 at the time) not being able to communicate with the ISY’s REST API. You can read through the initial post for details, but, basically, it would hit the ISY once, then time out on subsequent requests.

    It would seem that some time between my original post and the administrator’s reply, I set the container’s networking to host and the problem went away.
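
    For context, switching a container to host networking is a one-line change in a docker-compose file. A minimal sketch, with a made-up service and image name:

    # docker-compose.yml (illustrative)
    version: "3.8"
    services:
      isy-api:
        image: example/isy-api:latest   # hypothetical image
        network_mode: host              # share the host's network stack instead of the bridge/NAT network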

    In retrospect, I had not been heavily using that API anyway, so the problem may have just been masked a bit better by the host networking. In any case, I ignored it for a year.

    The Return

    About twenty (that’s right, 20) months later, I started moving my stuff to Kubernetes, and the issue reared its ugly head. I spent a lot of time trying to get some debug information from the ISY, which only confused me more.

    As I dug more into when it was happening, it occurred to me that I could not reliably communicate with the ISY from any of the VMs on my HP ProLiant server. More puzzling still, I could not do a port 80 retrieval from the server itself to the ISY. Oddly, though, I am able to communicate with other hardware devices on the network (such as my MiLight gateway) from the server and its VMs. Additionally, the ISY responds to pings, so it is reachable.

    Time for a new proxy

    Now, one of the VMs on my server was an Ubuntu VM serving as an NGINX reverse proxy. For various reasons, I wanted to move that from a virtual machine to a physical box. This seemed like a good time to see whether a new proxy would lead to different results.

    I had an old Raspberry Pi 3B+ lying around, and it seemed like the perfect candidate for a standalone proxy. So I did a quick image of an SD card with Ubuntu 20, copied my NGINX configuration files from the VM to the Pi, and re-routed my firewall traffic to the new proxy.

    Not only did that work, but it also solved the issue of ISY connectivity. Routing traffic through the Pi, I am able to communicate with the ISY reliably from my server, all of my VMs, and other PCs on the network.

    But, why?

    Well, that is the million-dollar question, and, frankly, I have no clue. Perhaps it has to do with the NIC teaming on the server, or some oddity in the server’s network configuration. But I have burned way too many hours on it to want to dig any deeper.

    You may be asking: why a hardware proxy? I liked the reliability and smaller footprint of a dedicated Raspberry Pi proxy, external to the server and any VMs. It makes the networking diagram much simpler, as traffic now flows neatly from my gateway to the proxy and then to the target machine. It also allows me to control traffic to the server in a more granular fashion, rather than having ALL traffic pointed at a VM on the server and then routed via proxy from there.

  • Making use of my office television with Magic Mirror

    I have a wall-mounted television in my office that, 99% of the time, sits idle. Sadly, the fully loaded RetroPie attached to it doesn’t get much Super Mario Bros. action during the workday. But that idle Raspberry Pi had me thinking of ways to utilize that extra screen in my office. Because, well, four monitors are not enough.

    Magic Mirror

    At first, I contemplated writing my own application in .NET 5. But I really do not have the kind of time it would take to get something like that moving, and it seemed counter-productive. I wanted something quick and easy, with no necessary input interface (it is a television, after all), and capable of displaying feed data quickly. That is when I stumbled on Magic Mirror 2.

    Magic Mirror is a Node-based app which uses Electron to display HTML on various platforms, including on the Raspberry Pi. It is popular, well-documented, and modular. It has default modules for weather and clock, as well as a third-party module for package tracking.

    Server Status… Yes please.

    There are several modules around displaying status, but none that I saw let me display the status from my statuspage.io page. And since statuspage.io has a public API, unique to each page, that does not require an API key, it felt like a good first module to develop for myself.
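
    For the curious, each statuspage.io page exposes a public status endpoint at https://<page-id>.statuspage.io/api/v2/status.json. From memory, the response looks something like this (the values here are illustrative):

    {
      "page": {
        "id": "<page-id>",
        "name": "Spydersoft Status",
        "url": "https://status.example.com",
        "updated_at": "2021-06-01T12:00:00Z"
      },
      "status": {
        "indicator": "none",
        "description": "All Systems Operational"
      }
    }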

    I spent a lot of time understanding the module aspect of MagicMirror, but, in the end, I have a fairly simple module that displays my statuspage.io status on the mirror.

    What is next?

    Well, there are two things I would really like to try:

    1. Work on integrating a floor plan of my house, complete with status from my home automation.
    2. Re-work MagicMirror as a React app.

    For the first one, well, there are some existing third-party components that I will test. That second one, well, that seems a slightly taller task. That might have to be a winter project.

  • Teaching in a Professional Setting

    “Those who can, do; those who can’t, teach.”

    George Bernard Shaw

    As you can imagine, in a family heavy in educators, this phrase lands as anything from irksome to derogatory. As I have worked to define my own expectations in my new role, it occurs to me that Shaw got this wrong, and we have to go back a few thousand years to correct it.

    Do we hate teaching?

    As you may have noticed via my LinkedIn, I have stepped into a new role as Chief Architect. The role has been vacant here for some time, which means I have some flexibility to refine the position and how it will assist the broader engineering staff moving forward.

    My current team is composed strictly of highly experienced software architects who have, combined, spent at least fifty years working on software. As a leader, how can I best utilize this group to move our company forward? As I pondered this question, the Shaw quote stuck in my head: why such a disdain for teaching, and does that attitude prevent us from becoming better?

    Teaching makes us better

    Our current group of software architects spends a lot of time understanding the business problems and technical limitations of our current platforms and then working out solutions that allow us to deliver customer value quickly. In this cycle, however, we leave ourselves little time to fully understand some of the work that we are doing in order to educate others.

    I can say, with no known exceptions, that my level of understanding of a topic is directly related to how many times I have had to educate someone on said topic. If I have never had to teach someone how my algorithm works, chances are I am not fully aware of what it is doing. I simply got it to work.

    For topics that I have presented to others, written blog posts about, or even just had to document in an outside medium, my level of understanding increases significantly.

    Teaching is a skill

    Strontium sums up The ‘Those who Can’t Do, Teach’ Fallacy very well. The ability to teach someone else a particular task or skill requires so much more than simple knowledge of the task. Teachers must have an understanding of the task at hand. They must be able to answer challenging questions from students with more than “that’s just how it is.” And, perhaps most importantly, a teacher must learn to educate without indoctrinating.

    As technical leaders in software engineering, architects must not only know and put into practice the patterns and practices that make software better, but they must also be able to educate others on the use of those patterns and practices. Additionally, they must have the understanding and ability to answer challenging questions.

    What is our return?

    In a professional environment, we often look at teaching as a sunk cost. When high-level engineers and architects spend the additional time to understand and teach complex topics, they are not producing code that leads to better sales numbers and a healthier bottom line. At least not directly.

    Meanwhile, there is always mandatory training on HR compliance, security, and privacy. Why? If there is a breach in one of those areas, the company could be liable for millions in damages.

    How much would it cost the company if a well-designed system suffered from poor implementation due to lack of education on the design?

    Teach it!

    Part of the reason I wanted to restart my blog was to give myself an outlet to document and explain some of the work that I have been doing. It forces me to explain what I have done, not just “make it work.”

    As you progress through your professional journey, in software or not, I encourage you to find a way to teach what you know. In the end, you will learn much more than you might think, and you will be better for it.

    As I mentioned above, I prefer Aristotle’s views on this matter over Shaw’s:

    “Those who know do, those that understand teach.”

    Aristotle

  • Packages, pipelines, and environment variables… oh my!

    I was clearly tempting the fates of package management when I called out NPM package management. NuGet was not to be outdone, and threw us a curveball in one of our Azure DevOps builds that is too good not to share.

    The Problem

    Our Azure Pipelines build was erroring out; however, it was failing in different steps, and in steps that seemingly had no connection to the changes that were made.

    Error 1 – Unable to install GitVersion.Tool

    We use GitVersion to version our builds and packages. We utilize the GitTools Azure DevOps extension to provide us with pipeline tasks to install and execute GitVersion. Most of our builds use step templates to centralize common steps, so these steps are executed every time, usually without error.

    However, in this particular branch, we were getting the following error:

    --------------------------
    Installing GitVersion.Tool version 5.x
    --------------------------
    Command: dotnet tool install GitVersion.Tool --tool-path /agent/_work/_temp --version 5.7.0
    /usr/bin/dotnet tool install GitVersion.Tool --tool-path /agent/_work/_temp --version 5.7.0
    The tool package could not be restored.
    /usr/share/dotnet/sdk/5.0.402/NuGet.targets(131,5): error : 'feature' is not a valid version string. (Parameter 'value') [/tmp/t31raumn.tj5/restore.csproj]

    Now, I would have concentrated more on the “‘feature’ is not a valid version string” error initially; however, I was distracted because we sometimes got past this error and into another one.

    Error 2 – Dotnet restore failed

    So, sometimes (not always), the pipeline would get past the installation of GitVersion and make it about three steps further, into the dotnet restore step. Buried in that restore log was an error similar to the first one:

    error : 'feature' is not a valid version string. (Parameter 'value')

    Same error, different steps?

    So, we were seeing the same error, but in two distinct places: one in a step that installs the tool, and presumably has nothing to do with the code in the repository, and the other squarely within the dotnet restore command.

    A quick Google search yielded an issue in the dotnet core GitHub repository. At the tail end of that thread is this little tidbit from user fabioduque:

    The same thing happened to me, and it was due to me setting up the Environment Variable “VERSION”.
    Solved with with:
    set VERSION=

    Given that, I put a quick step in to print out the environment variables and determined the problem.

    Better the devil you know…

    As the saying goes: better the devil you know than the devil you don’t. In this case, the devil we didn’t know about was an environment variable named VERSIONPREFIX. Now, we were not expressly setting that variable in a script, but one of our pipeline variables was named versionPrefix. Azure DevOps makes pipeline variables available as environment variables, and it standardizes the names into all caps, so we ended up with VERSIONPREFIX. NuGet, in versions after 3.4, allows settings to be applied at runtime via environment variables. And since VERSIONPREFIX is applicable in dotnet / NuGet, our variable was causing these seemingly random errors. I renamed the variable and, voilà, no more random errors.
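
    To illustrate, here is a hypothetical variables block showing how the collision happens, along with the kind of rename that avoids it:

    # azure-pipelines.yml (illustrative)
    variables:
      # Surfaces to every task as the environment variable VERSIONPREFIX,
      # which dotnet/NuGet pick up as a version setting at runtime:
      versionPrefix: '1.2.3'
      # A more specific name avoids the collision:
      packageVersionPrefix: '1.2.3'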

    The Short, Short Version

    Be careful when naming variables in Azure Pipelines. They are made available as-named via environment variables. If some of the steps or commands that you use accept those environment variables as settings, you may be inadvertently affecting your builds.

  • Node_Modules is the new DLL hell… Change my mind.

    All I wanted to do over the weekend was take a React 16 class library, copy it, strip out the components (leaving the webpack configuration intact), and upgrade components.

    To call it a nightmare is being nice. My advice to anyone is this: upgrade components one at a time, and test rigorously between each upgrade. Not just “oh, it installed,” but make sure everything is compatible with everything else.

    I ran into at least three issues that were “this package relies on an older version of this other package, but you installed the newer package” or, even better, “this package relies on a version of a plugin package that is inconsistent with the core version of the plugin’s base package.” Note, that wasn’t the error, just my finding based on a tremendous amount of spelunking.

    This morning, I literally gave up: I reverted to a known set of working packages/versions, deleted node_modules, cleared my NPM cache, and started over.

  • Lab time is always time well spent.

    The push to get a website out for my wife brought to light my neglect of my authentication site, both in function and style. As one of the ongoing projects at work has been centered around Identity Server, a refresher course would help my cause. So over the last week, I dove into my Identity Server to get a better feel for the function and to push some much needed updates.

    Function First

    I had been running Identity Server 4 in my local environments for a while. I wanted a single point of authentication for my home lab, and since Identity Server has been in use at my company for a few years now, it made sense to continue that usage in my lab environment.

    I spent some time over the last few years expanding the Quickstart UI to allow for visual management of the Identity Server. This includes utilizing the Entity Framework integration for the configuration and operational stores and the ASP.NET Identity integration for user management. The system works exactly as expected: I can lock down APIs or UIs under a single authentication service, and I can standardize my authentication and authorization code. It has resulted in way more time for fiddling with new stuff.

    The upgrade from Identity Server 4 to Identity Server 5 (now Duende IdentityServer) was very easy, thanks to a detailed upgrade guide from the folks at Duende. And, truthfully, I left it at that for a few months. Then I got the bug to use Google as an external identity provider. After all, what would be nicer than not having to worry about remembering account passwords for my internal server?

    When I pulled the latest quick start for Duende, though, a lot had changed. I spent at least a few hours moving changes in the authorization flow to my codebase. The journey was worth it, however: I now have a working authentication provider which is able to use Google as an identity provider.

    Style next

    As I was messing around with the functional side, I got pretty tired of the old CoreUI theme I had applied some years ago. I went looking for a Bootstrap 5 compatible theme, and found one in PlainAdmin. It is simple enough for my identity server, but a little more crisp than the old CoreUI.

    As is the case with many a software project, the functional changes took a few hours; the visual changes, quite a bit longer. I had to shuffle the layout to better conform with PlainAdmin’s templates, and I had a hell of a time stripping out the old SCSS to make way for the new. I am quite sure my app is still bloated with unnecessary styles, but cleaning that up will have to wait for another day.

    And finally a logo…

    What’s a refresh without a new logo?

    A little background: spyder007, in some form or another, has been my online nickname/handle since high school. It is far from unique, but it is a digital identity I still use. So Spydersoft seemed like a natural name for private projects.

    I started with a spider logo that I found with a good Creative Commons license. But I wanted something a little more, well, plain. So I did some searching and found a “web” design that I liked with a CC license. I edited it to be square, which had the unintended effect of making the web look like a cube… and I liked it.

    The new Spydersoft logo

    So with about a week’s worth of hobby-time tinkering, I was able to add Google as an identity provider and re-style my authentication site. As I am the only person who sees that site, it may seem a bit much, but the lessons learned, particularly around the Identity Server functional pieces, have already translated into valuable insights for the current project at work.

  • React in a weekend…

    Last week was a bit of a ride. My wife was thrust into her real estate career by a somewhat abrupt (but not totally unexpected) reduction in force at her company. We spent the middle of the week shopping to replace her company vehicle, which I do not recommend in the current market. I also offered to spend some time on standing up a small lead generation site for her so that she can establish a presence on the web for her real estate business.

    I spent maybe 10-12 hours getting something running. Could it have been done more quickly? Sure. But I am never one to resist the chance to set up deployment pipelines and API specifications. I figured the project would run longer than 5 days.

    Why only 5 days? Well, Coldwell Banker (her real estate broker) provides a LOT of tools for her, including a branded website with tie-ins to the MLS system. So I forwarded www.agentandalyn.com to her branded website, and my site will be digitally trashed.

    Frameworks

    For familiarity, I chose the .NET 5 ASP.NET / React template as a starter. I have a number of API projects running, so I was able to utilize some boilerplate code from those projects to configure Serilog to ship logs to my internal ELK stack and to set up basic authentication against my Identity Server. The template’s tutorial is a good start to getting things moving forward with the site.
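
    As a hedged example of that boilerplate, assuming the Serilog.Settings.Configuration and Serilog.Sinks.Elasticsearch packages (the node URI and index format are made up), the appsettings.json section looks something like this:

    {
      "Serilog": {
        "Using": [ "Serilog.Sinks.Elasticsearch" ],
        "MinimumLevel": "Information",
        "WriteTo": [
          {
            "Name": "Elasticsearch",
            "Args": {
              "nodeUris": "http://elk.internal.example:9200",
              "indexFormat": "agent-site-{0:yyyy.MM}"
            }
          }
        ]
      }
    }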

    On the React side, I immediately updated all the components to their latest versions. This included moving Bootstrap to version 5. Reactstrap is installed by default, but does not support Bootstrap 5. I could have dropped Reactstrap in favor of the RC version of React-Bootstrap, but I am comfortable enough with my HTML styling, so I just used the base DOM elements and styled them with the Bootstrap classes.

    It probably took me an hour or so to take the template code and turn it into the base for the home page. And then I built a deployment pipeline…

    A Pipeline, you say…

    Yes. For what amounts to a small personal project, I built an Azure DevOps pipeline that builds the project and its associated Helm chart, publishes the resulting Docker image and Helm chart to feeds in ProGet, and initiates a release in Octopus Deploy.

    While this may seem like overkill, I actually have most of this down to a pretty simple process using some standard tools and templates.

    Helm Charts made easy

    For the Helm charts, I utilized the common chart from k8s-at-home’s library-charts repository. This limits my Helm chart to some custom values in a values.yaml file that define my image, services, ingress, and any other customizations I may want.

    I typically use custom liveness and readiness probes that hit a custom health endpoint served up using ASP.NET Core’s health checks. This gives me some control to add more than just a ping check for the service.
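
    To give a sense of how thin the chart ends up, here is a trimmed-down, illustrative values.yaml for the common library chart (the image, host, and health endpoint paths are placeholders, and key names may vary by chart version):

    # values.yaml (illustrative)
    image:
      repository: proget.example.com/agent-site   # hypothetical ProGet feed/image
      tag: "1.0.0"

    ingress:
      enabled: true
      hosts:
        - host: www.example.com
          paths:
            - path: /

    probes:
      liveness:
        custom: true                # replace the chart's default probe
        spec:
          httpGet:
            path: /health/live      # hypothetical ASP.NET Core health check endpoint
            port: http
      readiness:
        custom: true
        spec:
          httpGet:
            path: /health/ready
            port: http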

    Azure DevOps Pipelines

    As mentioned in more than one previous post, I am thoroughly impressed with Azure DevOps Build Pipelines thus far. One of the nicer features is the ability to save common steps/jobs/stages to a separate repository, and then re-use those build templates in other pipelines.

    Using my own templates, I was able to construct a build pipeline that compiles the code, creates and publishes a Docker image, creates and publishes a Helm chart, and creates and deploys an Octopus release, all in a 110-line YAML file.
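
    The resulting pipeline is mostly references to those shared templates. A sketch of the shape (the repository, template, and parameter names are invented):

    # azure-pipelines.yml (illustrative)
    resources:
      repositories:
        - repository: templates                  # alias used in template references below
          type: git
          name: Spydersoft/pipeline-templates    # hypothetical shared template repo

    stages:
      - template: stages/build-and-release.yml@templates   # hypothetical stage template
        parameters:
          imageName: agent-site
          helmChartPath: charts/agent-site
          octopusProject: AgentSite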

    Octopus Project / Release Pipeline

    I have been experimenting with different ways to deploy and manage deployments to Kubernetes. While not the fanciest, Octopus Deploy does the job. I am able to execute a single step to deploy the Helm chart to the cluster, and can override various values with Octopus variables, meaning I can use the same process to deploy to test, stage, and production.

    Wasted effort?

    So I spent a few days standing up a website that I am in the middle of deleting. Was it worth it? Actually, yes. I learned more about my processes and potential ways to make them more efficient. It also piqued my interest in putting some UIs on top of some of my API wrappers.

  • Packer.io: Making excess too easy

    As I was chatting with a former colleague the other day, I realized that I have been doing some pretty diverse work as part of my home lab. A quick scan of my posts in this category reveals a host of topics ranging from home automation to Python monitoring to Kubernetes administration. One of my colleague’s questions was something to the effect of “How do you have time to do all of this?”

    As I thought about it for a minute, I realized that all of my Kubernetes research would not have been possible if I had not first taken the opportunity to automate the process of provisioning Hyper-V VMs. In my Kubernetes work, I have easily provisioned 35-40 Ubuntu VMs, and then promptly broken two-thirds of them through experimentation. Thinking about taking the time to install Ubuntu and provision it before I could start work, well, that would have been a non-starter.

    It started with a build…

    In my corporate career, we have been moving more towards Azure DevOps and away from TeamCity. To date, I am impressed with Azure DevOps. Pipelines-as-code appeals to my inner geek, and not having to maintain a server and build agents has its perks. I had visions of migrating from TeamCity to Azure DevOps, hoping I could take advantage of Microsoft’s generosity with small teams. Alas, Azure DevOps is free for small teams ONLY if you self-host your build agents, which meant a small amount of machine maintenance. I wanted to be able to self-host agents with the same software that Microsoft uses for their GitHub Actions/Azure DevOps agents. After reading through the GitHub virtual-environments repository, I determined it was time to learn Packer.

    The build agents for GitHub/Azure DevOps are provisioned using Packer. My initial hope was that I would just be able to clone that repository, run Packer, and voilà! It’s never that easy. The Packer projects in that repository are designed to provision VM images that run in Azure, not on Hyper-V. Provisioning Hyper-V machines is possible through Packer, but it requires different template files and some tweaking of the provisioning scripts. Without getting too far into the weeds, Packer uses different builders for Azure and Hyper-V, so I had to grab all the provisioning scripts I wanted from the template files in the virtual-environments repository and configure a builder for Hyper-V. Thankfully, Nick Charlton provided a great starting point for automating Ubuntu 20.04 installs with Packer. From there, I was off to the races.
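
    For flavor, here is a heavily trimmed, illustrative skeleton of a Hyper-V builder in Packer’s JSON template format (the URLs, names, and credentials are placeholders, and option names may vary by Packer version):

    {
      "builders": [
        {
          "type": "hyperv-iso",
          "iso_url": "https://releases.ubuntu.com/20.04/ubuntu-20.04-live-server-amd64.iso",
          "iso_checksum": "sha256:<checksum>",
          "generation": 2,
          "switch_name": "ExternalSwitch",
          "memory": 4096,
          "disk_size": 65536,
          "http_directory": "http",
          "boot_command": [
            "<esc><wait>",
            "autoinstall ds=nocloud-net;s=http://{{ .HTTPIP }}:{{ .HTTPPort }}/<enter>"
          ],
          "ssh_username": "ubuntu",
          "ssh_password": "<password>",
          "ssh_timeout": "30m",
          "shutdown_command": "sudo shutdown -P now"
        }
      ],
      "provisioners": [
        {
          "type": "shell",
          "scripts": ["scripts/install-build-tools.sh"]
        }
      ]
    }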

    Enabling my excess

    Through probably 40 hours of trial and error, I got to the point where I was building my own build agents and hooking them up to my Azure DevOps account. It should be noted that fully provisioning a build agent takes six to eight hours, so most of that 40 hours was “fire and forget.” With that success, I started to think: “Could I provision simpler Ubuntu servers and use those to experiment with Kubernetes?”

    The answer, in short, is “Of course!” I went about creating some PowerShell scripts and Packer templates so that I could provision various levels of Ubuntu servers. I have shared those scripts, along with my build agent provisioning scripts, in my provisioning-projects repository on GitHub. With those scripts in hand, I can provision new machines at will. It is remarkable the risks you will take in a lab environment, knowing that you are only 20-30 minutes away from a clean machine should you mess something up.

    A note on IP management

    If you dig into the repository above, you may notice some scripts and code around provisioning a MAC address from a “Unifi IP Manager.” I created a small API wrapper that utilizes the UniFi Controller APIs to create clients with fixed IP addresses. The API generates a random, but valid, MAC address for Hyper-V, then uses the UniFi Controller API to assign a fixed IP.

    That project isn’t quite ready for public consumption, but if you are interested, drop a comment on this post.

  • Who says the command line can’t be pretty?

    The computer, in many ways, is my digital office space. Just as you tend to that fern in your office, you need to tend to your digital space. What better way to water your digital fern than to revamp the look and feel of your command line?

    I extolled the virtues of the command line in my Windows Terminal post, and today, as I was catching up on my “Hanselman reading,” I came across an update to his “My Ultimate PowerShell prompt with Oh My Posh and the Windows Terminal” post that included new ways to make my command line shine.

    What’s New?

    Oh-My-Posh v3

    What started as a prompt theme engine for PowerShell has grown into a theme engine for multiple shells, including Zsh and Bash. The v3 documentation was all I needed to upgrade from v2 and modify the powerline segments to personalize my prompt.
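
    For a taste, a v3 theme is just JSON describing blocks of segments. A stripped-down, hypothetical example (the colors and properties are illustrative):

    {
      "blocks": [
        {
          "type": "prompt",
          "alignment": "left",
          "segments": [
            {
              "type": "path",
              "style": "powerline",
              "powerline_symbol": "\ue0b0",
              "foreground": "#ffffff",
              "background": "#61afef",
              "properties": { "style": "folder" }
            },
            {
              "type": "git",
              "style": "powerline",
              "powerline_symbol": "\ue0b0",
              "foreground": "#193549",
              "background": "#95ffa4"
            }
          ]
        }
      ]
    }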

    Nerd Fonts

    That’s right, Nerd Fonts. Nerd Fonts are “iconic fonts” which build hundreds of popular icons into the font for use in the command line. As I was already using Cascadia Code PL (Cascadia Code with powerline glyphs), it felt only right to upgrade to the CaskaydiaCove NF Nerd Font.

    It should be noted that the Oh-My-Posh prompts are configured as part of your PowerShell profile, meaning they show up in any window running PowerShell. For me, this is three applications: Microsoft Windows Terminal, Visual Studio Code, and the PowerShell Core command line application. It is important to set the font family correctly in all of these places.

    Microsoft Windows Terminal

    Follow Oh-My-Posh’s instructions for setting the default font face in Windows Terminal.
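
    At the time of writing, that boils down to a snippet like this in Windows Terminal’s settings.json (the property name may differ in newer releases):

    {
      "profiles": {
        "defaults": {
          "fontFace": "CaskaydiaCove NF"
        }
      }
    }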

    Visual Studio Code

    For Visual Studio Code, you need to change the fontFamily property of the integrated terminal. The easiest way to do this is to open the settings JSON (Ctrl+Shift+P and search for Open Settings (JSON)) and make sure you have the following line:

    {
      "terminal.integrated.fontFamily": "CaskaydiaCove NF"
    }

    When I was editing my Oh-My-Posh profile, I realized that it might be helpful to see the icons I was using in the prompt, so I also changed my editor font.

    {
      "editor.fontFamily": "'CaskaydiaCove NF', Consolas, 'Courier New', monospace"
    }

    You can use the Nerd Font cheat sheet to search for icons to use and copy/paste the icon value into your profile.

    Powershell Application

    With Windows Terminal, I rarely use the standalone PowerShell application, but it soothes my digital OCD to have it working. To change that window’s font, right-click the window’s title bar and select Properties. Go to the Font tab, and choose CaskaydiaCove NF (or your installed Nerd Font) from the list. This will only change the properties for the current window. If you want to change the font for any new windows, right-click the window’s title bar and select Defaults, then follow the same steps to set the default font.

    Terminal Icons

    This one is fun. In the screenshot above, notice the icons next to the different file types. This is accomplished with the Terminal-Icons PowerShell module. First, install the module using the following PowerShell command:

    Install-Module -Name Terminal-Icons -Repository PSGallery

    Then, add the Import-Module command to your PowerShell profile:

    Import-Module -Name Terminal-Icons

    Too Much?

    It could be said that spending about an hour installing and configuring my prompts is, well, a bit much. However, as I mentioned above, sometimes you need to refresh your digital workspace.