Category: Software

  • React in a weekend…

    Last week was a bit of a ride. My wife was thrust into her real estate career by a somewhat abrupt (but not totally unexpected) reduction in force at her company. We spent the middle of the week shopping to replace her company vehicle, which I do not recommend in the current market. I also offered to spend some time on standing up a small lead generation site for her so that she can establish a presence on the web for her real estate business.

    I spent maybe 10-12 hours getting something running. Could it have been done more quickly? Sure. But I am never one to resist the chance to set up deployment pipelines and API specifications. I figured the project would run longer than 5 days.

    Why only 5 days? Well, Coldwell Banker (her real estate broker) provides a LOT of tools for her, including a branded website with tie-ins to the MLS system. So I forwarded www.agentandalyn.com to her website, and my site will be digitally trashed.

    Frameworks

    For familiarity, I chose the .Net 5 ASP.NET / React template as a starter. I have a number of API projects running, so I was able to reuse some boilerplate code from those projects to configure Serilog shipping to my internal ELK stack and basic authentication against my Identity Server. The above tutorial is a good start to getting things moving forward with the site.
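
    To give a sense of that boilerplate, here is a minimal sketch of wiring Serilog to an Elasticsearch (ELK) endpoint with the Serilog.Sinks.Elasticsearch package; the endpoint URL and index format are placeholders, not my actual configuration.

    using System;
    using Serilog;
    using Serilog.Sinks.Elasticsearch;

    Log.Logger = new LoggerConfiguration()
        .Enrich.FromLogContext()
        .WriteTo.Console()
        // Ship structured logs to the internal ELK stack
        .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("https://elk.internal.local:9200"))
        {
            AutoRegisterTemplate = true,
            IndexFormat = "leadgen-site-{0:yyyy.MM}"
        })
        .CreateLogger();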

    On the React side, I immediately updated all the components to their latest versions. This included moving Bootstrap to version 5. Reactstrap is installed by default, but does not have support for Bootstrap 5. I could have dropped Reactstrap in favor of the RC version of React-Bootstrap, but I’m comfortable enough in my HTML styling, so I just used the base DOM elements and styled them with the Bootstrap classes.
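
    In practice, that just means plain JSX elements carrying Bootstrap 5 class names; this hypothetical component is only an illustration of the pattern, not code from the site.

    export function ContactCard() {
      return (
        <div className="card shadow-sm">
          <div className="card-body">
            <h5 className="card-title">Get in touch</h5>
            {/* Plain button styled with Bootstrap classes, no Reactstrap */}
            <button type="button" className="btn btn-primary">Contact me</button>
          </div>
        </div>
      );
    }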

    It probably took me an hour or so to take the template code and turn it into the base for the home page. And then I built a deployment pipeline…

    A Pipeline, you say..

    Yes. For what amounts to a small personal project, I built an Azure DevOps pipeline that builds the project and its associated Helm chart, publishes the resulting docker image and Helm chart to feeds in ProGet, and initiates a release in Octopus Deploy.

    While this may seem like overkill, I actually have most of this down to a pretty simple process using some standard tools and templates.

    Helm Charts made easy

    For the Helm charts, I utilized the common chart from k8s-at-home’s library-charts repository. This limits my Helm chart to some custom values in a values.yaml file to define my image, services, ingress, and any other customizations I may want.
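
    The values.yaml ends up being a short file along these lines; this is a rough sketch only, the exact key names vary by chart version, and the image repository and host are placeholders.

    image:
      repository: registry.example.com/leadgen-site
      tag: "1.0.0"

    service:
      main:
        ports:
          http:
            port: 80

    ingress:
      main:
        enabled: true
        hosts:
          - host: www.example.com
            paths:
              - path: /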

    I typically use custom liveness and readiness probes that hit a custom health endpoint served up using ASP.NET Core’s Custom Health Checks. This gives me some control to add more than just a ping check for the service.
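
    Registering that endpoint is only a few lines in Startup; the sketch below is a minimal example, and the “self” check is a stand-in for whatever deeper checks a service needs.

    // In Startup.cs (using Microsoft.AspNetCore.Builder,
    // Microsoft.Extensions.DependencyInjection,
    // Microsoft.Extensions.Diagnostics.HealthChecks)
    public void ConfigureServices(IServiceCollection services)
    {
        // Add richer checks here (database, downstream APIs, etc.)
        services.AddHealthChecks()
            .AddCheck("self", () => HealthCheckResult.Healthy());
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            // The liveness/readiness probes point here instead of a bare ping
            endpoints.MapHealthChecks("/healthz");
        });
    }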

    Azure DevOps Pipelines

    As mentioned in more than one previous post, I am thoroughly impressed with Azure DevOps Build Pipelines thus far. One of the nicer features is the ability to save common steps/jobs/stages to a separate repository, and then re-use those build templates in other pipelines.

    Using my own templates, I was able to construct a build pipeline that compiles the code, creates and publishes a docker image, creates and publishes a Helm chart, and creates and deploys an Octopus Release, all in a 110-line YAML file.
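
    The consuming pipeline ends up as little more than template references; the sketch below shows the general shape, with hypothetical repository and template names.

    resources:
      repositories:
        - repository: templates
          type: git
          name: DevOps/pipeline-templates

    stages:
      - stage: Build
        jobs:
          # Shared job templates pulled from the templates repository
          - template: jobs/docker-build.yml@templates
            parameters:
              imageName: leadgen-site
          - template: jobs/helm-package.yml@templates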

    Octopus Project / Release Pipeline

    I have been experimenting with different ways to deploy and manage deployments to Kubernetes. While not the fanciest, Octopus Deploy does the job. I am able to execute a single step to deploy the Helm chart to the cluster, and can override various values with Octopus variables, meaning I can use the same process to deploy to test, stage, and production.

    Wasted effort?

    So I spent a few days standing up a website that I am in the middle of deleting. Was it worth it? Actually, yes. I learned more about my processes and potential ways to make them more efficient. It also piqued my interest in putting some UIs on top of some of my API wrappers.

  • Who says the command line can’t be pretty?

    The computer, in many ways, is my digital office space. And just like the fern in your physical office, your digital space needs tending. What better way to water your digital fern than to revamp the look and feel of your command line?

    I extolled the virtues of the command line in my Windows Terminal post, and today, as I was catching up on my “Hanselman reading,” I came across an update to his “My Ultimate PowerShell prompt with Oh My Posh and the Windows Terminal” post that included new updates to make my command line shine.

    What’s New?

    Oh-My-Posh v3

    What started as a prompt theme engine for PowerShell has grown into a theme engine for multiple shells, including Zsh and Bash. The v3 documentation was all I needed to upgrade from v2 and modify the powerline segments to personalize my prompt.
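
    For PowerShell, the v3 upgrade boils down to a couple of commands; this is a minimal sketch, and the theme name is just an example.

    # One-time install of the v3 module from the PowerShell Gallery
    Install-Module oh-my-posh -Scope CurrentUser

    # In $PROFILE: pick a built-in theme, or point at a customized theme file
    Set-PoshPrompt -Theme jandedobbeleer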

    Nerd Fonts

    That’s right, Nerd Fonts. Nerd Fonts are “iconic fonts” which build hundreds of popular icons into the font for use in the command line. As I was using Cascadia Code PL (Cascadia Code with Powerline glyphs), it felt only right to upgrade to the CaskaydiaCove Nerd Font (CaskaydiaCove NF).

    It should be noted that the Oh-My-Posh prompts are configured as part of your PowerShell profile, meaning they show up in any window running PowerShell. For me, this is three applications: Microsoft Windows Terminal, Visual Studio Code, and the PowerShell Core command line application. It is important to set the font family correctly in all of these places.

    Microsoft Windows Terminal

    Follow Oh-My-Posh’s instructions for setting the default font face in Windows Terminal.

    Visual Studio Code

    For Visual Studio Code, you need to change the fontFamily property of the integrated terminal. The easiest way to do this is to open the settings JSON (Ctrl+Shift+P and search for Open Settings (JSON)) and make sure you have the following line:

    {
      "terminal.integrated.fontFamily": "CaskaydiaCove NF"
    }

    When I was editing my Oh-My-Posh profile, I realized that it might be helpful to be able to see the icons I was using in the prompt, so I also changed my editor font.

    {
      "editor.fontFamily": "'CaskaydiaCove NF', Consolas, 'Courier New', monospace"
    }

    You can use the Nerd Font cheat sheet to search for icons to use and copy/paste the icon value into your profile.

    PowerShell Application

    With Windows Terminal, I rarely use the Windows PowerShell application, but it soothes my digital OCD to have it working. To change that window’s font, right-click in the window’s title bar and select Properties. Go to the Font tab, and choose CaskaydiaCove NF (or your installed Nerd Font) from the list. This will only change the properties for the current window. If you want to change the font for any new windows, right-click in the window’s title bar and select Defaults, then follow the same steps to set the default font.

    Terminal Icons

    This one is fun. In the screenshot above, notice the icons next to different file types. This is accomplished with the Terminal-Icons PowerShell module. First, install the module using the following PowerShell command:

    Install-Module -Name Terminal-Icons -Repository PSGallery

    Then, add the Import-Module command to your PowerShell profile:

    Import-Module -Name Terminal-Icons

    Too Much?

    It could be said that spending about an hour installing and configuring my machine prompts is, well, a bit much. However, as I mentioned above, sometimes you need to refresh your digital work space.

  • Inducing Panic in a Software Architect

    There are many, many ways to induce panic in people, and, by no means will I be cluing you in to all the different ways you can succeed in sending me into a tailspin. However, if there is one thing that anyone can do that immediately has me at a loss for words and looking for the exit, it is to ask this one question: “What do you do for a living?”

    It seems a fairly straightforward question, and one that should have a relatively straightforward answer. If I said “I am a pilot,” then it can be assumed that I fly airplanes in one form or another. It might lead to a conversation about the different types of planes I fly, whether it’s cargo or people, etc. However, answering “I am a software architect” usually leads to one of two responses:

    1. “Oh..”, followed by a blank stare from the asker and an immediate topic change.
    2. “What’s that?”, followed by a blank stare from me as I try to encapsulate my various functions in a way that does not involve twelve PowerPoint slides and a scheduled break.

    In social situations, or, at the very least, the social situations in which I am involved, being asked what I do is usually an immediate buzz kill. While I am sure people are interested in what I do, there is no generic answer. And the specific answers are typically so dull and boring that people lose interest quickly.

    Every so often, though, I run across someone in the field, and the conversation turns more technical. Questions around favorite technology stacks, innovative work in CI/CD pipelines, or some sexy new machine learning algorithms are discussed. But for most, describing the types of IT Architects out there is confusing because, well, even we have trouble with it.

    IT Architect Types

    Red Hat has a great post on the different types of IT architects. They outline different endpoints of the software spectrum and how different roles can be assigned based on these endpoints. From those endpoints they illustrate the different roles an architect can play, color-coordinated to the different orientations along the software spectrum.

    However, only the largest of companies can afford to confine their architects to a single circle in this diagram, and many of us wear one or more “role hats” as we progress through our daily work.

    My architecture work to this point has been primarily developer-oriented. While I have experimented in some of the operations-oriented areas, my knowledge and understanding lie primarily in the application realm. Prior to my transfer to an architecture role, I was an engineering manager. That previous role exposed me to much more of the business side of things, and my typical frustrations today are more about what we as architects can and should be doing to support the business side of things.

    So what do I do?

    In all honesty, I used to just say “software developer” or “software engineer.” Those titles are usually more generally understood, and I can be very generic about it. But as I work to progress in my own career, the need for me to be able to articulate my current position (and desired future positions) becomes critical.

    Today, I try to phrase my answers around being a leader in delivering software that helps customers do their job better. It is usually not as technical, and therefore not as boring, but does drive home the responsibilities of my position as a leader.

    How that answer translates to a cookout, well, that always remains to be seen.

  • Simple Site Monitoring with Raspberry PI and Python

    My off-hours time this week has been consumed by writing some Python scripts to help monitor uptime for some of my sites.

    Build or Buy?

    At this point in my career, “build or buy” is a question I ask more often than not. As a software engineer, there is no shortage of open source and commercial solutions for almost any imaginable task. Website monitoring is no different. Tools such as StatusCake, Pingdom, and LogicMonitor offer hosted platforms, while tools like Nagios and PRTG offer on-premise installations. There is so much to choose from that it’s hard to decide.

    I had a few simple requirements:

    • Simple at first, but expandable as needed.
    • Runs on my own network so that I can monitor sites that are not available outside of my firewall.
    • Runs separately from my lab server. Most of my servers are virtual machines consolidated on that one machine, so it makes little sense to put monitoring there. I needed something I could run easily with little to no power.

    Build it is!

    I own a few Raspberry Pis, but the Model 3B and 4B are currently in use. The lone unused Pi is an old Model B (i.e., model 1B), so installing something like Nagios would have been, well, unusable when all was said and done. Given that the Raspberry Pi is at home with Python, I thought I would dust off my “language learning skills” and figure out how to make something useful.

    As I started, though, I remembered my free version of Atlassian’s Status Page. Although the free version limits the number of subscribers and does not include text subscriptions, it’s perfect for my usage. And, near and dear to my developer heart, it has a very well defined set of APIs for managing statuses and incidents.

    So, with Python and some additional modules, I created a project that lets me make a quick request to a desired website. If the website is down, the Status Page status for the associated component is changed and an incident is created. If/when the site comes back up, any open incidents associated with that component are closed.
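
    A trimmed-down sketch of the check-and-report logic looks something like the following; the page ID, component ID, API key, and URL are placeholders, and the real project layers incident handling on top.

    import requests

    API = "https://api.statuspage.io/v1"
    PAGE_ID = "your-page-id"
    HEADERS = {"Authorization": "OAuth your-api-key"}

    def check_site(url, component_id):
        # Treat connection errors and server errors as "down"
        try:
            up = requests.get(url, timeout=10).status_code < 500
        except requests.RequestException:
            up = False

        # Flip the Statuspage component to match what we observed
        status = "operational" if up else "major_outage"
        requests.patch(
            f"{API}/pages/{PAGE_ID}/components/{component_id}",
            headers=HEADERS,
            json={"component": {"status": status}},
            timeout=10,
        )

    check_site("https://www.example.com", "your-component-id")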

    Voilà!

    After a few evening hours of tinkering, I have the scripts doing some basic work. For now, a cron job executes the script every 5 minutes, and if a site goes down, it is reported to my Statuspage site.
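
    The crontab entry is the usual five-minute schedule; the paths below are hypothetical.

    */5 * * * * /usr/bin/python3 /home/pi/site-monitor/check.py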

    Long term, I plan on adding support for more in-depth checks of my own projects, which utilize .Net’s HealthChecks namespace to report service health automatically. I may also look into setting up the scripts as a service running on the Pi.

    If you are interested, the code is shared on Github.

  • First Impressions: Windows Terminal

    From my early days on the Commodore 64 to my current work with Linux (/bin/bash) and Windows (PowerShell, mostly), I have spent a tremendous amount of time in command lines over the course of my life. So, when I stumbled across Windows Terminal, it seemed like a good opportunity to evaluate a new container for my favorite command lines.

    Microsoft Windows Terminal

    Windows Terminal is an open source application from Microsoft that touts itself as “… a modern terminal application for users of command-line tools and shells …”. It promotes features such as multiple tabs, panes, Unicode and UTF-8 character support, a GPU accelerated text rendering engine, and the ability to create your own themes and customize text, colors, backgrounds, and shortcuts. [1]

    In my experience, it lives up to that billing. The application is easy to install (in particular with Chocolatey), quick to configure, and provides a wide range of features to make managing my command line windows much easier.

    Install and Initial Configuration

    I use Chocolatey pretty heavily to manage my installs, and thankfully, there is a package for Windows Terminal:

    choco install microsoft-windows-terminal -y

    It is worth noting that the recommended installation method is actually through the Microsoft Windows Store, not Chocolatey. It is also worth noting that uninstalling Windows Terminal from Chocolatey deletes your settings file, so if you want to switch, be sure to back up that settings file before uninstalling.

    When I first installed Windows Terminal, there was no User Interface for settings, which meant opening the Settings file and editing its JSON. The settings file is fairly intuitive and available settings are well documented, which made my initial setup pretty easy. Additionally, as all settings are stored in the JSON file, migrating settings from one machine to another is as simple as copying the file between machines. Starting with version 1.8, a Settings UI was added to help ease some of the setup.
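
    As an abbreviated sketch, a profile entry in the settings JSON looks something like this; the values shown are examples, not my actual configuration.

    {
      "profiles": {
        "list": [
          {
            "name": "PowerShell",
            "commandline": "pwsh.exe",
            "colorScheme": "One Half Dark",
            "fontFace": "Cascadia Code PL"
          }
        ]
      }
    }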

    Additional Tools Setup

    As I perused the documentation, I came across the setup for Powerline, which provides custom command line prompts for Git repositories. I was immediately intrigued: I have been using posh-git for years, and Powerline extends posh-git and oh-my-posh and adds some custom glyphs for graphical interfaces. The installation tutorial is well done and complete, which is no surprise considering the source material comes from Mr. Hanselman.

    My home lab work has brought me squarely back into the realm of Linux and SSH, which was yet another reason I was looking for an updated terminal tool. While there is no explicit profile help for SSH, there is a good tutorial on configuring SSH profiles.
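
    The gist of that tutorial is that an SSH session is just another profile whose command line invokes ssh; the host and user here are placeholders.

    {
      "name": "lab-server",
      "commandline": "ssh user@lab-server.local"
    }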

    Summary

    I have been using Windows Terminal now for around 4 months, and in that time, I have become more comfortable with it. I am still a novice when it comes to the various actions and shortcuts that it supports, which is why they are notably absent from this write-up. The general functionality and, in particular, the support for profiles and console coloring, allows for much better organization of what used to be 4-8 PowerShell console windows open at any one time on my PC. If you are a command line user, Windows Terminal is worth a look.

  • To define or document? Methods for generating API Specifications

    There is an inherent “chicken and egg” problem in API development. Do we define a specification before creating an API implementation (specification-first), or do we implement an API and generate a specification from that implementation (code-first)?

    Determining how to define and develop your APIs can have impacts on future consumption and alternative implementations, so it is important to evaluate the purpose of your API and identify the most effective method for defining your API.

    API (or Code) First

    In API-first (or code-first) development, developers set up a code project and begin coding endpoints and models immediately. Typically, developers treat the specification as generated documentation of what they have already done. This method means the API definition is fluid as implementation occurs: oh, you forgot a property on an object or a whole path? No problem, just add the necessary code. Automated generation will take care of updating your API specification. Also, when you know that the API you are working on is going to be the only implementation of that interface, this solution makes the most sense, as it requires less upfront work from the development team and it is easier to change the API specification.

    Specification First

    On the other hand, specification-first development entails defining the API specification first. Developers define the paths, actions, responses, and object models before writing any code at all. From there, both client and server code can be generated. This requires more effort: developers must define all the necessary changes prior to the actual implementation on either the client or server side. This upfront effort generates a truly reusable specification, since the API specification is not generated from a single implementation of the API. This method is most useful when developing specifications for APIs that will have multiple implementations.
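
    To make that concrete, a specification-first artifact is just a document like the minimal OpenAPI 3 sketch below; the path and schema are purely illustrative.

    openapi: 3.0.3
    info:
      title: Listings API
      version: "1.0"
    paths:
      /listings/{id}:
        get:
          parameters:
            - name: id
              in: path
              required: true
              schema:
                type: string
          responses:
            "200":
              description: A single listing
              content:
                application/json:
                  schema:
                    $ref: "#/components/schemas/Listing"
    components:
      schemas:
        Listing:
          type: object
          properties:
            id:
              type: string
            address:
              type: string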

    What should you use?

    Whichever you want. I’m really not here to tout the benefits of either one. In my experience, the choice depends primarily on answering the following question: Will there be only one implementation of this API? If the answer is yes, then code-first would be my choice, simply because it does not require a definition process up front. If, however, you anticipate more than one implementation of a given API, it is wise to start with the specification first. Changes to the specification should be more deliberate, as they will affect a number of implementations.

    Tools to help

    No matter your selection, there are tools to aid you in both cases.

    For specification-first development, the OpenAPI Generator is a great tool for generating consumer libraries and implementations. Once you create the API specification document, the OpenAPI Generator can generate a wide array of clients, servers, and other schemas. I have used the generator to create Axios-based TypeScript clients for user interfaces as well as model classes for server side development. I have only ever used the OpenAPI Generator in a manual generation process: when the developer changes the specification, they must also regenerate the client and server code. This, however, is not a bad thing: specification changes typically must be very deliberate and take into account all existing implementations, so keeping the process manual forces that.
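
    For reference, generating such a client is a single CLI invocation along these lines; the file and output paths are hypothetical.

    # Generate a TypeScript/Axios client from a spec document
    openapi-generator-cli generate -i listings-api.yaml -g typescript-axios -o src/generated/api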

    In my API-first projects, I typically use the NSwag toolchain both to generate the specification from the API code and to generate any clients that I may need. NSwag’s toolchain integrates nicely with .Net 5 projects and can be configured to generate specifications and additional libraries at build time, making it easy to deploy these changes automatically.
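
    In case it helps, that build-time generation is typically driven by an nswag.json configuration document and a command like the one below; the runtime flag shown assumes a .Net 5 project.

    # Run the NSwag configuration document (spec + client generation)
    nswag run nswag.json /runtime:Net50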

    It is worth noting that both NSwag and the OpenAPI Generator can be configured to perform both of these methods; my examples above come simply from my own experience with each.

  • Designing for the public cloud without breaking the bank

    The ubiquity and convenience of cloud computing resources are a boon to companies who want to deliver digital services quickly. Companies can focus on delivering content to their consumers rather than investing in core competencies such as infrastructure provisioning, application hosting, and service maintenance. However, for those companies who have already made significant investment into these core competencies, is the cloud really better?

    The Push to (Public) Cloud

    The maturation of the Internet and the introduction of public cloud services have reduced the barrier to entry for companies who want to deliver in the digital space. What used to take a dedicated team of systems engineers and a full server/farm buildout can now be accomplished by a few well-trained folks and a cloud account. This allows companies, particularly those in their early growth phases, to focus more on delivering content rather than growing core technological competencies.

    Many companies with heavy investment in core competencies and private clouds are making decisions to move their products to the cloud. This move can offer established companies a chance to “act as a startup” by refining their focus and delivering content faster. However, when those companies look at their cloud bill, there can be an element of sticker shock. [1]

    Why Move

    If a company has a heavy investment in its own private cloud infrastructure, why would it move?

    Inadequate Skillsets for Innovative Technology

    While companies may have significant investments in IT resources, they may not have the capability or desire to train these resources to maintain and manage the latest technologies. For example, managing a production-level Kubernetes cluster requires skills that may not be present in the company today.

    Managed Services

    This is related to the first point, but, in many cases, it is much easier to let the company that created the service host it themselves, rather than trying to host it on your own. Things like Elastic or Kafka can be hosted internally, but letting the experts run your instances as a managed service can save you money in the long run.

    Experimentation

    Cloud accounts can provide excellent sandbox environments for senior engineers to prove their work quickly, focusing more on functionality than provisioning. Be careful, though: this can lead to skipping the steps between Proof of Concept and Production.

    Building for the future

    As architects/senior engineers, how can we reconcile this? While we want to iterate quickly and focus on delivering customer value, we must also be mindful of our systems’ design.

    Portability

    The folks at Andreessen Horowitz used the term “repatriation” to describe a move back to private clouds from public clouds. As you can imagine, this may be very easy or very difficult, depending on your architecture: repatriating a virtual machine is much easier than, say, repatriating an API Management service from the public cloud to a private offering.

    As engineers design solutions, it is important to think about the portability of the solution you are creating. Can the solution you are building be hosted anywhere (public or private cloud), or is it tailor-made to a specific public cloud? The latter may allow for quick ramp up, but will lock you into a single cloud with little you can do to improve your spend.

    Cost

    When designing a system for the cloud, you must think a lot harder about the pricing of the individual components. Every time you add another service, storage account, VM, or cluster to your design diagram, you are adding to the cost of your system.

    It is important to remember that some of these costs are consumption based, which means you will need to estimate your usage to get a true idea of the cost. I can think of nothing worse than a huge cloud bill on a new system because I forgot to estimate consumption charges.

    SOLID Principles

    SOLID design principles are typically applied to object-oriented programming; however, some of those same principles should be applied to larger cloud architectures. In particular, Interface Segregation and Liskov Substitution (or, more generally, design by contract) facilitate simpler changes to services by abstracting the implementation from the service definition. Components that are easier to replace are easier to repatriate into private clouds when profit margins start to shrink.
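
    As a hypothetical illustration in C#: if application code depends only on a contract like the one below, the backing implementation can be swapped between cloud providers (or a private cluster) without touching consumers.

    using System.IO;
    using System.Threading.Tasks;

    // A deliberately small storage contract; consumers never see the SDK
    public interface IDocumentStore
    {
        Task SaveAsync(string key, Stream content);
        Task<Stream> LoadAsync(string key);
    }

    // An Azure Blob, S3, or on-premise MinIO implementation can each
    // satisfy this contract, so moving between clouds becomes a
    // dependency-injection change rather than a rewrite.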

    What is the Future?

    Do we need the public cloud? Most likely, yes. Initially, it will shorten your time to market by removing some of the blockers and allowing your engineers to focus on content rather than core competencies. However, architects and systems engineers need to be aware of the trade-offs in going to the cloud. While private clouds represent a large initial investment and monthly maintenance, public cloud costs, particularly those on consumption based pricing, can quickly spiral out of control if not carefully watched.

    If your company already has a good IT infrastructure and the knowledge in place to run its own private cloud, or if your cloud costs are slowly eating into your profit margins, it is probably a good time to consider repatriation of services into private infrastructure.

    As for the future: a mix of public cloud services and private infrastructure services (hybrid cloud) is most likely in my future; however, I believe the trend will differ depending on the maturity and technical competency of your staff.

    References

    1. “Cloud ‘sticker shock’ explored: We’re spending too much”
    2. “The Cost of Cloud, a Trillion Dollar Paradox”

  • Immutable Build Objects

    Before I can make a case for Immutable Deployable Artifacts (I’m going to use IDA for short), it is probably a good idea to define what I mean by that term.

    Regardless of your technology stack, most systems follow a similar process for building deployment artifacts: get code, fetch dependencies, build code, and package the resulting deployable artifact, whether it is .Net DLLs, .jar files, or docker images. These outputs are referred to as “deployable artifacts.”

    To say that these are immutable, by definition, means that they cannot be changed. However, in this context, it means a little more than that: having immutable deployable artifacts means that these artifacts are built once, but deployed many times to progressive environments. In this manner, a single build artifact is deployed to QA, internally verified, promoted to staging, externally verified, and then promoted to production.
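
    A docker-based sketch of the idea, with a hypothetical registry and version, since the post does not prescribe a specific stack:

    # Build and publish exactly once
    docker build -t registry.example.com/myapp:1.4.0 .
    docker push registry.example.com/myapp:1.4.0

    # QA, staging, and production all deploy this same tag/digest;
    # only environment-specific configuration changes per deployment.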

    Separation of Concerns – Build vs. Deploy

    First and foremost, we need to remember that build and deploy are distinctly separate concerns: build is responsible for creating deployable artifacts, while deployment is responsible for the distribution and configuration of those artifacts in various environments.

    Tools like Jenkins muddy those waters, because they allow us to integrate build and deploy into single actions. This is not a bad thing, but we need to make sure we maintain logical separation between the actions, even if their physical separation is muddied.

    Gitflow

    I won’t go into the merits and pain points of GitFlow here. Suffice it to say that the majority of our teams utilized GitFlow as a branching strategy.

    Gitflow and IDAs

    The typical CI/CD implementations of GitFlow, particularly at my current company, map branches to environments (develop to QA, release to staging, master to production). To effect an update to an environment, a pull request is opened from one branch to another; once merged, a build is generated from the resulting commit and deployed. This effectively means that the “build” you tested against is not the same as the build you are pushing to production. This gives development managers (current and former, such as myself) cold sweats. Why? In order to have accurate builds between branches, we rely very heavily on the following factors not changing:

    • No changes were introduced to the “source branch” (master, in the case of production builds) outside of the merge request.
    • No downstream dependencies changed. Some builds will pull the latest package from a particular repository (nuget, npm, etc), so we are relying on the fact that no new versions of a third-party dependency have been published OR that any new version is compatible.
    • Nothing has changed in the build process itself (including any changes to the build agent configuration).

    Even at my best as a development manager, there is no way I could guarantee all three of those things with every build. These variables open up our products to unnecessary risks in deployment, as we are effectively deploying an untested build artifact directly into production.

    Potential solutions

    There is no universal best practice, but we can draw some conclusions based on others’ experiences.

    Separate Branch Strategy from Deployment strategy

    The first step, regardless of your branching strategy, is to separate the hard link between branch names and deployment environments. While it is certainly helpful to tag build artifacts with the branch from which they came, there is no hard and fast rule that “builds from develop always and only go to QA, never to stage or production.” If the build artifact(s) pass the necessary tests, they should be available to higher environments through promotion.

    I’m not suggesting we shouldn’t limit the branches which produce release-candidate builds, but the notion that “production code MUST be built directly from master” is, quite frankly, dangerous.

    If you are using GitFlow, one example of this would be to generate your build artifacts from ONLY your release and hotfix branches. There are some consequences that you need to be aware of with this, and they are outlined far better than I can by Ben Teese.
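
    In Azure DevOps terms (to pick one CI system), limiting release-candidate builds to those branches is a trigger filter along these lines:

    trigger:
      branches:
        include:
          - release/*
          - hotfix/*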

    Simplify your branching strategy

    Alternatively, if your project is smaller or your development cycles are short enough that you support fix-forward only, you may consider moving to the simpler feature branch workflow. In this branching mechanism, build artifacts come from your master branch, and feature branches are used only for feature development.

    Change your deployment and promotion process

    Once you have build artifacts being generated, it’s up to your deployment process to deploy and configure those artifacts. This can be as automated or as manual as your team chooses. In an ideal scenario, if all of your testing is 100% automated, then your deployment process could be as simple as deploying the generated artifacts, running all of your tests, and then promoting those artifacts to production if all your tests pass. Worst case, if your testing is 100% manual, you will need to define a process for testing and promotion of build artifacts. Octopus Deploy can be helpful in this area by creating deployment pipelines and allowing for promotion through user interaction.

    What about Continuous Integration?

    There is some, well, confusion around what continuous integration is. Definitions may differ, but the basics of CI are that we should automate the build and test process so that we can verify our code state, ideally after every change. What this means to you and your team, though, may differ. Does your team have the desire, capacity, and ability to run build and test on every feature branch? Or is your team more concerned with build and test on the develop branch as an indicator of code health?

    Generally speaking, CI is something that happens daily, as an automated process kicked off by developer commits. It happens BEFORE Continuous Deployment/Continuous Delivery, and is usually a way to short-circuit the CD process to prevent us from deploying bad code.

    The changes above should have no bearing on your CI process: whatever build/test processes you run now should continue.

    A note on Infrastructure as Code

    It’s worth mentioning that IaC projects such as Terraform deployment projects should have a direct correlation between branch and environment. Since the code is meant to define an environment and produces no build artifacts, that link is logical and necessary.