Author: Matt

  • A blast from the past and a new path forward

    Over the last few years, the pandemic has thrown my eldest son’s college search for a bit of a loop. It’s difficult to talk about visiting college campuses when colleges are just trying to figure out how to keep their current students in the classroom. With that in mind, much of his search has been virtual.

    As campuses open up and graduation looms, though, we have had the opportunity to set up some visits with his top choice schools. One of them, to my great pride, is my alma mater, Allegheny College. So I spent my President’s Day, an unseasonably warm and sunny February day, walking the paths and hallways of Allegheny.

    It was a weird experience.

    There is no way that I can say that my experience at Allegheny defined who I am today. I am a product of 40+ years of experience across a variety of schools, companies, organizations, and relationships, and to categorize the four years of my college experience as the defining years of my life would be unfair to those other experiences. But my four years at Allegheny were a unique chapter in my life, one that encouraged a level of self-awareness and helped me learn to interact with the world around me.

    For me, the college experience was less about the education and more about the life experience. That’s not to say I did not learn anything at Allegheny; far from it. But the situations and experiences I found myself in forced me to learn more than just what was in my textbooks.

    What kinds of experiences? Carrying a job in residence life for two years as an advisor and director taught me a lot about teamwork, leadership, and dealing with people on a day-to-day basis. Fraternity life and Panhellenic/Interfraternity Council taught me a good deal about small group politics and the power of persuasion. Campus life, in general, gave me the opportunity to learn to be an adult in a much safer environment than the real world tends to offer an 18-year-old high school graduate.

    College is not for everyone. The fact that news stories like this one pop up in my LinkedIn feed pretty regularly is a testament to the changing perspective on a four-year degree. What caught my eye from that article, however, is that there is some research to suggest that some schools are, in fact, better than others. It raises the question: is college right for my son, a prospective computer science student?

    Being back on campus with a prospective Computer Science student allowed me to get a look at what Allegheny’s CS department is doing to prepare its students for the outside world. I was impressed. While the requisite knowledge of a CS degree remains, they have augmented the curriculum with more group work and assisted (pair) programming, and have branched into new areas such as data analytics and software innovation. Additionally, they encourage responsible computer science practices with some assistance through the Mozilla Foundation’s Responsible Computer Science Challenge. This focus will certainly give students an advantage over more theory-heavy programs.

    As I got an overview of the CS curriculum, it occurred to me that I can, and should, be doing more to help guide the future of our industry. At work, I can do that through mentoring and knowledge sharing, but, as an alumnus, I can provide similar mentoring and knowledge sharing, as well as some much-needed networking, to young students. Why? I never want to be the smartest one in the room.

    I was never the smartest guy in the room. From the first person I hired, I was never the smartest guy in the room. And that’s a big deal. And if you’re going to be a leader – if you’re a leader and you’re the smartest guy in the world – in the room, you’ve got real problems.

    Jack Welch
  • Tech Tip – Markdown Linting in VS Code

    With a push toward better documentation, it is worth remembering that Visual Studio Code has a variety of extensions that can help with linting and formatting all types of files, including your README.md files.

    Markdown All in One and markdownlint are my current extensions of choice, and they have helped me clean up my README.md files in both personal and professional projects.

  • Not everything you read on the internet is true….

    I spend a lot of time searching the web for tutorials, walkthroughs, and examples. So much so, in fact, that “Google Search” could be listed as a top skill on my resume. With that in mind, though, it’s important to remember that not everything you read on the Internet is true, and to take care in how you approach things.

    A simple plan

    I was trying to create a new ASP.NET / .NET 6 application that I could use to test connectivity to various resources in some newly configured Kubernetes clusters. When I used the Visual Studio 2022 templates, I noticed that the new “minimal” styling was used in the template. This was my first opportunity to try the new minimal styling, so I looked to the web for help. I came across this tutorial on Medium.com.

    I followed the tutorial and, on my local machine, it worked like a charm. So I built a quick container image and started deploying into my Kubernetes cluster.

    Trouble brewing

    When I deployed into my cluster, I kept receiving SQL connection errors, specifically that the server could not be found. I added some logging, and the connection string seemed correct, but nothing was working.

    I thought it might be a DNS error within the cluster, so I spent at least 2 hours trying to determine if there was a DNS issue in my cluster. I even tried another cluster to see if it had to do with our custom subnet settings in the original cluster.

    After a while, I figured out the problem, and was about ready to quit my job. The article has a step to override the OnConfiguring method in the DbContext, like this:

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // Build a configuration by reading appsettings.json from the working directory
        var configuration = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json")
            .Build();

        // Pull the connection string from that configuration and use SQL Server
        var connectionString = configuration.GetConnectionString("AppDb");
        optionsBuilder.UseSqlServer(connectionString);
    }

    I took this as necessary and put it in there. What I glossed over is that this override runs every time the context is configured, including when migrations run, and that the configuration it builds pulls ONLY from appsettings.json, not from environment variables or any other configuration source.

    That “Oh……” moment

    At that point I realized that I had been troubleshooting the fact that this particular method was ignoring the connection string I was passing in through environment variables. To make matters worse, the logging I added to print the connection string was using the full configuration (with environment variables), so it looked like the value was changing. In reality, it was the value from appsettings.json the whole time.

    The worst part: that override is completely unnecessary. I removed the entire function, and my code operates as normal.
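
    For anyone curious, here is a minimal sketch of the standard way to wire this up instead, assuming the .NET 6 minimal hosting model; AppDbContext is a placeholder name, and “AppDb” matches the connection string name from the snippet above. Because it relies on the host’s builder.Configuration, the connection string can come from appsettings.json or from an environment variable (ConnectionStrings__AppDb) with no extra code:

    // Program.cs (.NET 6 minimal hosting model) - a simplified sketch
    using Microsoft.EntityFrameworkCore;

    var builder = WebApplication.CreateBuilder(args);

    // builder.Configuration layers appsettings.json, environment variables, and
    // other sources, so a value set on the container takes effect automatically.
    builder.Services.AddDbContext<AppDbContext>(options =>
        options.UseSqlServer(builder.Configuration.GetConnectionString("AppDb")));

    var app = builder.Build();
    app.MapGet("/healthz", () => "OK"); // placeholder endpoint for cluster connectivity checks
    app.Run();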

    No Hard Feelings

    Let me be clear, though: as written, the article does a fine job of walking you through the process of setting up an ASP.NET / .NET 6 application in the minimal API styling. My issue was not recognizing how that OnConfiguring override would behave in the container, and then bouncing around everywhere to figure it out.

    I will certainly be more careful about examining the tutorial code before traipsing around in Kubernetes command lines and DNS tools.

  • Can Yellowstone teach us about IT infrastructure management?

    It seems almost too fitting that, at a time when the popularity of gritty television like Yellowstone and 1883 is climbing, I write to encourage you to stop taking on new pets and to start running a cattle ranch.

    Pet versus Cattle – The IT Version

    The “pet versus cattle” analogy is often used to describe different methodologies for IT management. You treat a pet by giving it a name like Zeus or Apollo. You give it a nice home. You nurse it back to health when it’s sick. Cattle, on the other hand, get an ear tag with a number and are sent to roam the field until they are needed.

    Carrying this analogy into IT, I have seen my share of pet servers. We build them up, nurse them back to health after a virus, upgrade them when needed, and do all the things a good pet owner would do. And when they go down, people notice. “Cattle” servers, on the other hand, are quickly provisioned and just as quickly replaced, often without any maintenance or downtime.

    Infrastructure as Code

    In its most basic definition, infrastructure as code is the idea that an environment’s infrastructure can be described in definition files (preferably source controlled). Using various tools, we can take those definitions and create the necessary infrastructure components automatically.

    Why do we care if we can provision infrastructure from code? Treating our servers as cattle requires a much better management structure than “have Joe stand this up for us.” Sure, Joe is more than capable of creating and configuring all of the necessary components, but if we want to do it again, it requires more of Joe’s time.

    With the ability to define an infrastructure one time and deploy it many times, we gain the capacity to worry more about what is running on the infrastructure than the infrastructure itself.

    Putting ideas to action

    Changing my mindset from “pet” to “cattle” with virtual machines on my home servers has been tremendously valuable for me. As I mentioned in my post about Packer.io, you become much more risk tolerant when you know a new VM is 20 unattended minutes away.

    I have started to encourage our infrastructure team to accept nothing less than infrastructure as code for new projects. In order to lead by example, I have been working through creating Azure resources for one of our new projects using Terraform.

    Terraform: Star Trek or Nerd Tech?

    You may be asking: “Aren’t they both nerd tech?” And, well, you are correct. But when you have a group of people who grew up on science fiction and are responsible for keeping computers running, well, things get mixed up.

    Terraform, the HashiCorp product, is one of a number of tools that allow infrastructure engineers to automatically provision the environments that host their products using various providers. I have been using its Azure Resource Manager provider, although there are more than a few available.

    While I cannot share the project code, here is what I was able to accomplish with Terraform:

    • Create and configure an Azure Kubernetes Service (AKS) instance
    • Create and configure an Azure SQL Server with multiple databases
    • Attach these instances to an existing virtual network subnet
    • Create and configure an Azure Key Vault
    • Create and configure a public IP address from an existing prefix.
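
    While the project code itself stays private, a stripped-down definition for the AKS piece might look roughly like the following; the azurerm provider is assumed, and every name and value is a placeholder:

    variable "existing_subnet_id" {
      description = "ID of the pre-existing virtual network subnet"
      type        = string
    }

    provider "azurerm" {
      features {}
    }

    resource "azurerm_resource_group" "project" {
      name     = "rg-example-dev" # placeholder names throughout
      location = "eastus"
    }

    resource "azurerm_kubernetes_cluster" "aks" {
      name                = "aks-example-dev"
      location            = azurerm_resource_group.project.location
      resource_group_name = azurerm_resource_group.project.name
      dns_prefix          = "example-dev"

      default_node_pool {
        name           = "default"
        node_count     = 2
        vm_size        = "Standard_DS2_v2"
        vnet_subnet_id = var.existing_subnet_id # attach to the existing subnet
      }

      identity {
        type = "SystemAssigned"
      }
    }

    A terraform apply against a definition like that is repeatable, which is what makes repeated destroy-and-recreate cycles manageable.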

    Through more than a few rounds of destroy and recreate, the project is in a state where it is ready to be deployed in multiple environments.

    Run a ranch

    So, perhaps, Yellowstone can teach us about IT infrastructure management. A cold and calculated approach to infrastructure will lead to a homogeneous environment that is easy to manage, scale, and replace.

    For what it’s worth, 1883 is simply a live-action version of Oregon Trail…

  • A little open source contribution

    The last month has been chock full of things I cannot really post about publicly, namely, performance reviews and security remediations. And while the work front has not been kind to public posts, I have taken some time to contribute back a bit more to the Magic Mirror project.

    Making ToDo Better

    Thomas Bachmann created MMM-MicrosoftToDo, a plugin for the Magic Mirror that pulls tasks from Microsoft’s ToDo application. Since I use that app for my day-to-day tasks, I thought it would be nice to see my tasks up on the big screen, as it were.

    Unfortunately, the plugin used the old beta version of the APIs, as well as the old request module, which has been deprecated. So I took the opportunity to fork the repo and make some changes. I submitted a pull request to the owner; hopefully it makes its way into the main plugin. But, for now, if you want my changes, check them out here.

    Making StatusPage better

    I also took the time to improve on my StatusPage plugin, adding the ability to ignore certain components and removing components from the list when they are a part of an incident. I also created a small enhancement list for some future use.

    With the holidays and the rest of my “non-public” work taking up my time, I would not expect too much from me for the rest of the year. But I’ve been wrong before…

  • Git, you are messing with my flow!

    The variety of “flows” for developing using Git makes choosing the right one for your team difficult. When you throw true continuous integration and delivery into that, and add a requirement for immutable build objects, well…. you get a heaping mess.

    Infrastructure As Code

    Some of my recent work to help one of our teams has been creating a Terraform project and Azure DevOps pipeline based on a colleague’s work in standardizing Kubernetes cluster deployments in AKS. This work will eventually get its own post, but suffice to say that I have a repository which is a mix of Terraform files, Kubernetes manifests, Helm value files, and Azure DevOps pipeline definitions to execute them all.

    As I started looking into this, it occurred to me that there really is no clear way to manage this project. For example, do I use a single pipeline definition with stages for each environment (dev/stage/production)? This would mean the Git repo would have one and only one branch (main), and each stage would need some checks (manual or automatic) to ensure rollout is controlled.
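
    A rough sketch of that single-pipeline option might look like the following; the stage names, environments, and steps are all hypothetical, and the manual approvals would be configured as checks on the Azure DevOps environments:

    trigger:
      branches:
        include:
          - main

    pool:
      vmImage: ubuntu-latest

    stages:
      - stage: Dev
        jobs:
          - deployment: apply_dev
            environment: infra-dev # approvals/checks live on the environment
            strategy:
              runOnce:
                deploy:
                  steps:
                    - checkout: self
                    - script: terraform init && terraform apply -auto-approve -var-file="dev.tfvars"

      - stage: Production
        dependsOn: Dev
        jobs:
          - deployment: apply_prod
            environment: infra-prod
            strategy:
              runOnce:
                deploy:
                  steps:
                    - checkout: self
                    - script: terraform init && terraform apply -auto-approve -var-file="prod.tfvars"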

    Alternatively, I can have an Azure Pipeline for each environment. This would mean that each pipeline could trigger on its own branch. However, it also means that no standard Git Flow seems to work well with this.

    Code Flows

    Quite separately, as the team dove into creating new repositories, the question again came up around branching strategy and subsequent environment strategies for CI/CD. I am a vocal proponent of immutable build objects, but how each team chooses to get there is up to them.

    In our MS Teams channel discussions, we found pros and cons to nearly all of our current methods, and the team seems stuck on the best way to build and develop.

    The Internet is not helping

    Although it is a few years old, Scott Shipp’s War of the Git Flows article highlights the ongoing “flow wars.” One of the problems with Git is the ease with which all of these variations can be implemented. I am not blaming Git, per se, but because it is easy for nearly anyone to suggest a branching strategy, things get muddled quickly.

    What to do?

    Unfortunately, as with many things in software, there is no one right answer. The branch strategy you use depends on the requirements of not just the team, but the individual repository’s purpose. Additional requirements may come from the desired testing, integration, and deployment process.

    With that in mind, I am going to make two slightly radical suggestions:

    1. Select a flow that is appropriate for the REPOSITORY, not your team.
    2. Start backwards: work on identifying the requirements for your deployment process first. Answer questions like these:
    • How will artifacts, once created, be tested?
    • Will artifacts progress through lower environments?
    • Do you have to support old releases?
    • Where (what branch) can release candidate artifacts come from? Or, what code will ultimately be applied to production?

    That last one is vital: defining on paper which branches will either be applied directly to production (in the case of Infrastructure as Code) or generate artifacts that can end up in production (in the case of application builds) will help outline the requirements for the project’s branching strategy.

    Once you have identified that, the next step is defining the team’s requirements WITHIN the above. In other words, try not to have the team hammer a square peg into a round hole. They have a branch or two that will generate release code; how they get their code into that branch should be as quick and direct as possible while still supporting the necessary collaboration.

    What’s next

    What’s that mean for me? Well, for the infrastructure project, I am leaning towards a single pipeline, but I need to talk to the Infrastructure team to make sure they agree. As to the software team, I am going to encourage them to apply the process above for each repository in the project.

  • ISY and the magic network gnomes

    For nearly 2 years, I struggled mightily with communication issues between my ISY 994i and some of my Docker images and servers. So much so, in fact, that I had a fairly long-running post in the Universal Devices forums dedicated to the topic.

    I figure it is worth a bit of a rehash here, if only to raise the issue in the hopes that some of my more network-experienced contacts can suggest a fix.

    The Beginning

    The initial post was essentially about my ASP.NET Core API (.NET Core 2.2 at the time) not being able to communicate with the ISY’s REST API. You can read through the initial post for details, but, basically, it would hit it once, then time out on subsequent requests.

    It would seem that some time between my original post and the administrator’s reply, I set the container’s networking to host and the problem went away.

    In retrospect, I had not been heavily using that API anyway, so it may have just been hidden a bit better by the host network. In any case, I ignored it for a year.

    The Return

    About twenty (that’s right, 20) months later, I started moving my stuff to Kubernetes, and the issue reared its ugly head. I spent a lot of time trying to get some debug information from the ISY, which only confused me more.

    As I dug more into when it was happening, it occurred to me that I could not reliably communicate with the ISY from any of the VMs on my HP ProLiant server. Also, and more puzzling, I could not do a port 80 retrieval from the server itself to the ISY. Oddly, though, I was able to communicate with other hardware devices on the network (such as my MiLight Gateway) from the server and its VMs. Additionally, the ISY responds to pings, so it is reachable.

    Time for a new proxy

    Now, one of the VMs on my server was an Ubuntu VM that was serving as an NGINX reverse proxy. For various reasons, I wanted to move that from a virtual machine to a physical box. This, it seemed, would be a good time to see if a new proxy would lead to different results.

    I had an old Raspberry Pi 3B+ lying around, and that seemed like the perfect candidate for a standalone proxy. So I did a quick image of an SD card with Ubuntu 20, copied my NGINX configuration files from the VM to the Pi, and re-routed my firewall traffic to the proxy.

    Not only did that work, but it solved the issue of ISY connectivity. Routing traffic through the Pi, I am able to communicate with the ISY reliably from my server, all of my VMs, and other PCs on the network.

    But, why?

    Well, that is the million dollar question, and, frankly, I have no clue. Perhaps it has to do with the NIC teaming on the server, or some oddity in the network configuration on the server. But I burned way too many hours on it to want to dig more into it.

    You may be asking, why a hardware proxy? I liked the reliability and smaller footprint of a dedicated Raspberry Pi proxy, external to the server and any VMs. It made the networking diagram much simpler, as traffic now flows neatly from my gateway to the proxy and then to the target machine. It also allows me to control traffic to the server in a more granular fashion, rather than having ALL traffic pointed to a VM on the server, and then routed via proxy from there.

  • Making use of my office television with Magic Mirror

    I have a wall-mounted television in my office that, 99% of the time, sits idle. Sadly, the fully loaded RetroPie attached to it doesn’t get much Super Mario Bros action during the workday. But that idle Raspberry Pi had me thinking of ways to utilize that extra screen in my office. Since, well, 4 monitors is not enough.

    Magic Mirror

    At first, I contemplated writing my own application in .NET 5. But I really do not have the kind of time it would take to get something like that moving, and it seems counter-productive. I wanted something quick and easy, with no necessary input interface (it is a television, after all), and capable of displaying feed data quickly. That is when I stumbled on Magic Mirror 2.

    Magic Mirror is a Node-based app which uses Electron to display HTML on various platforms, including on the Raspberry Pi. It is popular, well-documented, and modular. It has default modules for weather and clock, as well as a third-party module for package tracking.

    Server Status… Yes please.

    There are several modules for displaying status, but I saw nothing that let me display the status from my statuspage.io page. And since statuspage.io has a public API, unique to each page, that doesn’t require an API key, it felt like a good first module to develop for myself.

    I spent a lot of time understanding the module aspect of MagicMirror, but, in the end, I have a pretty simplistic module that’ll display my statuspage.io status on the Magic Mirror.
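
    If you are wondering what a MagicMirror module actually involves, the skeleton is roughly the following. This is a simplified, hypothetical sketch rather than my actual module; the module name and config values are placeholders:

    /* MMM-ExampleStatus.js - a hypothetical, stripped-down module skeleton */
    Module.register("MMM-ExampleStatus", {
      // User-overridable defaults, merged with the settings from config.js
      defaults: {
        updateInterval: 5 * 60 * 1000, // refresh every five minutes
        pageId: "your-page-id"         // placeholder statuspage.io page id
      },

      start: function () {
        this.statusText = "Loading status...";
        // Re-render on a schedule; real data fetching would happen here or in a node_helper
        setInterval(() => this.updateDom(), this.config.updateInterval);
      },

      // MagicMirror calls getDom() whenever the module needs to render
      getDom: function () {
        const wrapper = document.createElement("div");
        wrapper.innerHTML = this.statusText;
        return wrapper;
      }
    });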

    What is next?

    Well, there are two things I would really like to try:

    1. Work on integrating a floor plan of my house, complete with status from my home automation.
    2. Re-work MagicMirror as a React app.

    For the first one, well, there are some existing third-party components that I will test. That second one, well, that seems a slightly taller task. That might have to be a winter project.

  • Teaching in a Professional Setting

    “Those who can, do; those who can’t, teach.”

    George Bernard Shaw

    As you can imagine, in a family heavy in educators, this phrase lands anywhere from irksome to derogatory. As I have worked to define my own expectations in my new role, it occurs to me that Shaw got this wrong, and we have to go back a few thousand years to correct it.

    Do we hate teaching?

    As you may have noticed via my LinkedIn, I have stepped into a new role as Chief Architect. The role has been vacant here for some time, which means I have some flexibility to refine the position and how it will assist the broader engineering staff moving forward.

    My current team is composed strictly of highly experienced software architects who have, combined, spent at least fifty years working on software. As a leader, how can I best utilize this group to move our company forward? As I pondered this question, the Shaw quote stuck in my head: why such a disdain for teaching, and does that attitude prevent us from becoming better?

    Teaching makes us better

    Our current group of software architects spends a lot of time understanding the business problems and technical limitations of our current platforms and then working out solutions that allow us to deliver customer value quickly. In this cycle, however, we leave ourselves little time to fully understand some of the work that we are doing in order to educate others.

    I can say, with no known exceptions, that my level of understanding of a topic is directly related to how many times I have had to educate someone on said topic. If I have never had to teach someone how my algorithm works, chances are I am not fully aware of what it is doing. I simply got it to work.

    For topics that I have presented to others, written blog posts about, or even just had to document in an outside medium, my level of understanding increases significantly.

    Teaching is a skill

    Strontium sums up The ‘Those who Can’t Do, Teach’ Fallacy very well. The ability to teach someone else a particular task or skill requires so much more than simple knowledge of the task. Teachers must have an understanding of the task at hand. They must be able to answer challenging questions from students with more than “that’s just how it is.” And, perhaps most importantly, a teacher must learn to educate without indoctrinating.

    As technical leaders in software engineering, architects must not only know and put into practice the patterns and practices that make software better, but they must also be able to educate others on the use of those patterns and practices. Additionally, they must have the understanding and ability to answer challenging questions.

    What is our return?

    In a professional environment, we often look at teaching as a sunk cost. When high level engineers and architects spend the additional time to understand and teach complex topics, they are not producing code that leads to better sales numbers and a healthier bottom line. At least not directly.

    Meanwhile, there is always mandatory training on HR compliance, security, and privacy. Why? If there is a breach in one of those areas, the company could be liable for millions in damages.

    How much would it cost the company if a well-designed system suffered from poor implementation due to lack of education on the design?

    Teach it!

    Part of the reason I wanted to restart my blog was to give myself an outlet to document and explain some of the work that I have been doing. This blog forces me to explain what I have done, not just “make it work.”

    As you progress through your professional journey, in software or not, I encourage you to find a way to teach what you know. In the end, you will learn much more than you might think, and you will be better for it.

    As I mentioned above, I prefer Aristotle’s views on this matter over Shaw’s:

    “Those who know do, those that understand teach.”

    Aristotle
  • Packages, pipelines, and environment variables…. oh my!

    I was clearly tempting the fates of package management when I called out NPM package management. NuGet was not to be outdone and threw us a curveball in one of our Azure DevOps builds that is too good not to share.

    The Problem

    Our Azure Pipelines build was erroring out; however, it was failing in different steps, and in steps that seemingly had no connection to the changes that were made.

    Error 1 – Unable to install GitVersion.Tool

    We use GitVersion to version our builds and packages. We utilize the GitTools Azure DevOps extension to provide us with pipeline tasks to install and execute GitVersion. Most of our builds use step templates to centralize common steps, so these steps are executed every time, usually without error.

    However, in this particular branch, we were getting the following error:

    --------------------------
    Installing GitVersion.Tool version 5.x
    --------------------------
    Command: dotnet tool install GitVersion.Tool --tool-path /agent/_work/_temp --version 5.7.0
    /usr/bin/dotnet tool install GitVersion.Tool --tool-path /agent/_work/_temp --version 5.7.0
    The tool package could not be restored.
    /usr/share/dotnet/sdk/5.0.402/NuGet.targets(131,5): error : 'feature' is not a valid version string. (Parameter 'value') [/tmp/t31raumn.tj5/restore.csproj]

    Now, I would have concentrated more on the “‘feature’ is not a valid version string” error initially; however, I was distracted because, sometimes, we got past this error only to land in another one.

    Error 2 – Dotnet restore failed

    So, sometimes (not always), the pipeline would get past the installation of GitVersion and make it about three steps forward, into the dotnet restore step. Buried in that restore log was a similar error to the first one:

    error : 'feature' is not a valid version string. (Parameter 'value')

    Same error, different steps?

    So, we were seeing the same error, but in two distinct places: one in a step which installs the tool, and presumably has nothing to do with the code in the repository, and the other squarely within the dotnet restore command.

    A quick Google search yielded an issue in the dotnet core GitHub repository. At the tail end of that thread is this little tidbit from user fabioduque:

    The same thing happened to me, and it was due to me setting up the Environment Variable “VERSION”.
    Solved with with:
    set VERSION=

    Given that, I put a quick step in to print out the environment variables and determined the problem.

    Better the devil you know…

    As the saying goes: Better the devil you know than the devil you don’t. In this case, the devil we didn’t know about was an environment variable named VERSIONPREFIX. Now, we were not expressly setting that environment variable in a script, but one of our pipeline variables was named versionPrefix. Azure DevOps makes pipeline variables available as environment variables, and it standardizes them into all caps, so we ended up with VERSIONPREFIX. NuGet, in versions after 3.4, provides for applying settings at runtime via environment variables. And since VERSIONPREFIX is applicable in dotnet / NuGet, our variable was causing these random errors. I renamed the variable and, voilà, no more random errors.
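
    As a purely hypothetical illustration (not our actual pipeline), a variables block like this is all it takes, because the pipeline variable surfaces to every subsequent task as the VERSIONPREFIX environment variable; the env dump step mirrors the quick check I added to spot it:

    variables:
      versionPrefix: '1.2' # hypothetical value; exposed to tasks as VERSIONPREFIX

    steps:
      # Dump the agent's environment (Linux agent assumed) to see what is actually exported
      - script: env | sort
        displayName: Print environment variables
      - script: dotnet restore
        displayName: Restore (NuGet now picks up VERSIONPREFIX as a setting)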

    The Short, Short Version

    Be careful when naming variables in Azure Pipelines. They are made available as environment variables (uppercased), and if some of the steps or commands that you use accept those environment variables as settings, you may be inadvertently affecting your builds.