Category: Architecture

  • Centralized Authentication: My Hotel California

    When I left my previous role, I figured I would have some time before the idea of a centralized identity server popped back up. As the song goes, “You can checkout any time you like, but you can never leave…”

    The Short Short Version

    This is going to sound like the start of a very bad “birds and bees” conversation…

    When software companies merge, the primary driver tends to be expanding the market through additional functionality. In other words, Company A buys Company B because Company A wants to offer functionality to its customers that Company B already provides. Rather than writing that functionality, you just buy the company.

    Usually, that also works in the inverse: customers of Company B might want some of the functionality from the products in Company A. And with that, the magical “cross sell” opportunity is born.

    Unfortunately, much like with human babies, the magic of this birth is tempered pretty quickly by what comes next. Mismatched technical stacks, inconsistent data models, poorly modularized software… the list goes on. Customers don’t want to have to input the same data twice (or three, four, even five times), nor do they want to have to configure different systems. The magic of “cross sell” is that, when it’s sold, it “just works.” But that’s nearly never the case.

    Universal Authentication

    That said, there is one important question that all systems ask, and it becomes the first hurdle (and probably one of the largest): WHO ARE YOU?

    When you start to talk about integrating different systems and services, the ability to determine universal authentication (who is trying to access your service) becomes the linchpin around which everything else can be built. But what’s “universal authentication”?

    Yeah, I made that up. As I have looked at these systems, the definition is pretty simple: Universal Authentication means “everyone looks to a system which provides the same user ID for the same user.”

    Now… that seems a bit easy. But there is an important point here: I’m ONLY talking about authentication, not authorization. Authentication (who are you) is different from authorization (what are you allowed to do). Aligning on authentication should be simpler (should), and it provides for a long-term transition to alignment on authorization.

    Why just Authentication?

    If there is a central authentication service, then all applications can look to that service to authenticate users. They can send their users (or services) to this central service and trust that it will authenticate the user and provide a token with which that user can operate the system.

    If other systems use the same service, they too can look to the central service. In most cases, if you as a user are already logged in to this service, it will just redirect you back, with a new token in hand for the new application. This leads to a streamlined user experience.
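
    To make that concrete, here is a minimal ASP.Net Core sketch of what “point every application at one identity service” looks like in OpenID Connect terms (the authority URL and client ID are placeholders, and client secret/PKCE details are omitted):

    using Microsoft.AspNetCore.Authentication.Cookies;
    using Microsoft.AspNetCore.Authentication.OpenIdConnect;

    var builder = WebApplication.CreateBuilder(args);

    builder.Services
        .AddAuthentication(options =>
        {
            // Sign the user in locally with a cookie...
            options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
            // ...but challenge against the central identity service
            options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
        })
        .AddCookie()
        .AddOpenIdConnect(options =>
        {
            options.Authority = "https://auth.example.com"; // the central authentication service
            options.ClientId = "my-application";
            options.ResponseType = "code";
            // Every application sees the same subject ("sub") claim for the same user
        });

    var app = builder.Build();
    app.UseAuthentication();
    app.UseAuthorization();
    app.Run();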

    You make it sound easy…

    It’s not. There is a reason why Authentication as a Service (AaaS) platforms are so popular and so expensive.

    • They are the most attacked services out there. Get into one of these, and you have carte blanche over the system.
    • They are the most important in terms of uptime and disaster recovery. If AaaS is down, everyone is down.
    • Any non-trivial system will throw requirements in for interfacing with external IDPs, managing tenants (customers) as groups, and a host of other functional needs.

    And yet, here I am, having some of these same discussions again. Unfortunately, there is no one magic bullet, so if you came here looking for me to enlighten you… I apologize.

    What I will tell you is that the discussions I have been a part of generally have the same basic themes:

    • Build/Buy: The age-old question. Generally, authentication is not something I would suggest you build yourself, unless that is your core competency and your business model. If you build, you will end up spending a lot of time and effort “keeping up with the Joneses”: adding new features based on customer requests.
    • Self-Host/AaaS: Remember what I said earlier: attack vectors and SLAs are difficult, as this is the most-attacked and most-used service you will own. There is also a question of liability. If you host, you are liable, but liability for attacks on an AaaS product varies.
    • Functionality: Tenants, SCIM, External IDPs, social logins… all discussions that could consume an entire post. Evaluate what you would like and how you can get there without a diamond-encrusted implementation.

    My Advice

    Tread carefully: wading into the waters of central authentication can be rewarding, but fraught with all the dangers of any sizable body of water.

  • Using Git Hooks on heterogeneous repositories

    I have had great luck with using git hooks to perform tool executions before commits or pushes. Running a linter on staged changes before the code is committed and verifying that tests run before the code is pushed makes it easier for developers to write clean code.

    Doing this with heterogeneous repositories, or repos which contain projects of different tech stacks, can be a bit daunting. The tools you want for one part of the repository aren’t the tools you want for another.

    How to “Hook”?

    Hooks can be created directly in your repository following Git’s instructions. However, these scripts are seldom cross-OS compatible, so running your script will need some “help” in terms of compatibility. Additionally, the scripts themselves can be harder to find depending on your environment. VS Code, for example, hides the .git folder by default.
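
    For reference, a “raw” hook is nothing more than an executable script dropped into .git/hooks. A minimal sketch (the command is just an example, not a recommendation):

    #!/bin/sh
    # .git/hooks/pre-commit: runs before every commit; a non-zero exit aborts the commit
    dotnet format --verify-no-changes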

    Having used NPM in the past, Husky has always been at the forefront of my mind when it comes to tooling around Git hooks. It helps by providing some cross-platform compatibility and easier visibility, as all scripts are in the .husky folder in your repository. However, it requires some things that a pure .Net developer may not have (like NPM or some other package manager).

    In my current position, though, our front ends rely on either Angular or React Native, so the chance that our developers have NPM installed is 100%. With that in mind, I put some automated linting and building into our projects.

    Linting Different Projects

    For this article, assume I have a repository with the following outline:

    • docs/
      • General Markdown documentation
    • source/
      • frontend/ – .Net API project which hosts my SPA
      • ui/ – The SPA project (in my case, Angular)

    I like lint-staged as a tool to execute linting on staged files. Why only staged files? Generally, large projects are going to have a legacy of files with formatting issues. Going all in and formatting everything all at once may not be possible. But if you format as you make changes, eventually most everything should be formatted well.

    With the outline above, I want different tools to run based on which files need linting. For source/frontend, I want to use dotnet format, but for source/ui, I want to use ESLint and prettier.

    With lint-staged, you can configure individual folders using a configuration file. I was able to add a .lintstagedrc file in each folder and specify the appropriate linter for that folder. For the .Net project:

    {
        "*.cs": "dotnet format --include"
    }

    And for the Angular project:

    {
        "*": ["prettier", "eslint --fix"]
    }

    Also, since I do have some documentation files, I added a .lintstagedrc file to the repository to run prettier on all my Markdown files.

    {
        "*.md": "prettier"
    }

    A Note on Settings

    Each linter has its own settings, so follow the instructions for whatever linter you may be running. Yes, I know, for the .Net project, I’m only running it on *.cs files. This may change in the future, but as of right now, I’m just getting to know what dotnet format does and how much I want to use it.

    Setting Up the Hooks

    The hooks are, in fact, very easy to configure: follow the instructions on getting started from Husky. The configured hooks for pre-commit and pre-push are below, respectively:

    npx lint-staged --relative
    dotnet build source/mySolution.sln

    The pre-commit hook utilizes lint-staged to execute the appropriate linter. The pre-push hook simply runs a build of the solution which, because of Microsoft’s .esproj project type, means I get an NPM build and a .Net build in the same step.

    What’s next?

    I will be updating the pre-push hook to include testing for both the Angular app and the .Net API. The goal is to provide our teams with a template to write their own tests, and have those be executed before they push their code. This level of automation will help our engineers produce cleaner code from the start, alleviating the need for massive cleanup efforts down the line.
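
    The updated .husky/pre-push will probably end up looking something like this (a sketch; the exact test commands and flags are still to be decided):

    dotnet build source/mySolution.sln
    dotnet test source/mySolution.sln --no-build
    npm --prefix source/ui run test -- --watch=false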

  • Building a Radar, Part 2 – Another Proxy?

    A continuation of my series on building a non-trivial reference application, this post dives into some of the details around the backend for frontend pattern. See Part 1 for a quick recap.

    Extending the BFF

    In my first post, I outlined the basics of setting up a backend for frontend API in ASP.Net Core. The basic project hosts the SPA as static files, and provides a target for all calls coming from the SPA. This alleviates much of the configuration of the frontend and allows for additional security through server-rendered cookies.

    If we stopped there, then the BFF API would contain endpoints for every call our SPA makes, even if they just relay the call to a backend service. That would be terribly inefficient and a lot of boilerplate coding. Now, having used Duende’s Identity Server for a while, I knew that they have coded a BFF library that takes care of proxying calls to backend services, even attaching the access tokens along with the call.

    I was looking for a way to accomplish this without the Duende library, and that is when I came across Kalle Marjokorpi’s post which describes using YARP as an alternative to the Duende libraries. The basics were pretty easy: install YARP, configure it using the appsettings.json file, and wire up the proxy. I went so far as to create an extension method to encapsulate the YARP configuration in one place. Locally, this all worked quite well… locally.
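
    For context, the extension method ends up being a thin wrapper around YARP’s configuration loader plus a request transform that forwards the user’s access token. A rough sketch (names are illustrative; this is not the exact code from my repo or Kalle’s post):

    using System.Net.Http.Headers;
    using Microsoft.AspNetCore.Authentication;
    using Yarp.ReverseProxy.Transforms;

    public static class BffProxyExtensions
    {
        public static IServiceCollection AddBffProxy(this IServiceCollection services, IConfiguration configuration)
        {
            // Routes and clusters live in the "ReverseProxy" section of appsettings.json
            services.AddReverseProxy()
                .LoadFromConfig(configuration.GetSection("ReverseProxy"))
                .AddTransforms(builderContext =>
                {
                    // Attach the caller's access token to every proxied request
                    builderContext.AddRequestTransform(async transformContext =>
                    {
                        var token = await transformContext.HttpContext.GetTokenAsync("access_token");
                        if (!string.IsNullOrEmpty(token))
                        {
                            transformContext.ProxyRequest.Headers.Authorization =
                                new AuthenticationHeaderValue("Bearer", token);
                        }
                    });
                });

            return services;
        }
    }

    // Program.cs: builder.Services.AddBffProxy(builder.Configuration); ... app.MapReverseProxy();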

    What’s going on in production?

    The image built and deployed quite well. I was able to log in and navigate the application, getting data from the backend service.

    However, at some point, the access token that was encoded into the cookie expired. And this caused all hell to break loose. The cookie was still good, so the backend for frontend assumed that the user was authenticated. But the access token had expired, so proxied calls failed. I have not put a refresh token in place, so I’m a bit stuck at the moment.

    On my todo list is to add a refresh token to the cookie. This should allow the backend to refresh the access token before proxying a call to the backend service.
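
    The change itself should be small. Assuming the standard ASP.Net Core OpenID Connect handler, something like this (a sketch, not my final code) gets a refresh token issued and stored in the cookie, which the BFF can then use to obtain a fresh access token before proxying:

    // Inside the authentication setup in Program.cs (sketch)
    .AddOpenIdConnect(options =>
    {
        options.Scope.Add("offline_access"); // ask the identity provider for a refresh token
        options.SaveTokens = true;           // persist access/refresh tokens in the auth cookie
    });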

    What to do now?

    As I mentioned, this work is primarily to use as a reference application for future work. Right now, the application is somewhat trivial. The goal is to build out true microservices for some of the functionality in the application.

    My first target is the change tracking. Right now, the application is detecting changes and storing those changes in the application database. I would like to migrate the storage of that data to a service, and utilize MassTransit and/or NServiceBus to facilitate sending change data to that service. This will help me to define some standards for messaging in the reference architecture.

  • Distributing C# Code Standards with a Nuget Package

    My deep dive into Nuget packages made me ask a new question: Can I use Nuget packages to deploy code standards for C#?

    It’s Good To Have Standards

    Generally, the goal of coding standards is to produce a uniform code base. My go-to explanation of what coding standards are is this: “If you look at the code, you should not be able to tell who wrote it.”

    Linting is, to some degree, an extension of formatting. Linting applies a level of analysis to your code, identifying potential runtime problems. Linting rules typically derive from various best practices which can (and do) change over time, especially for languages in active development. Think of it this way: C++ has not changed much in the last 20 years, but C#, JavaScript, and, by association, TypeScript, have all seen active changes in recent history.

    There are many tools for each language that offer linting and formatting. I have developed some favorites:

    • Formatting
      • dotnet format – C#
      • prettier – Javascript/TypeScript
      • black – Python
    • Linting
      • Sonar – support for various languages
      • eslint – Javascript/TypeScript
      • flake8 – Python

    Regardless of your tool of choice, being able to deploy standards across multiple projects has been difficult. If your projects live in the same repository, you can store settings files in a common location, but for projects across multiple repositories, the task becomes harder.

    Nuget to the Rescue!

    I spent a good amount of time in the last week relearning Nuget. In this journey, it occurred to me that I could create a Nuget package that only contained references and settings for our C# standards. So I figured I’d give it a shot. Turns out, it was easier than I anticipated.

    There are 3 files in the project:

    • The csproj file – MyStandards.CSharp.csproj
    • main.editorconfig
    • MyStandards.CSharp.props

    I put the content of the CSProj and Props files below. main.editorconfig will get copied into the project where this package is referenced as .editorconfig. You can read all about customizing .editorconfig here.

    MyStandards.CSharp.csproj

    <Project Sdk="Microsoft.NET.Sdk">
    
    	<PropertyGroup>
    		<TargetFramework>netstandard2.1</TargetFramework>
    		<Nullable>enable</Nullable>
    		<IncludeBuildOutput>false</IncludeBuildOutput>
    		<DevelopmentDependency>true</DevelopmentDependency>
    	</PropertyGroup>
    
    	<ItemGroup>
    		<Content Include="main.editorconfig">
    			<CopyToOutputDirectory>Always</CopyToOutputDirectory>
    			<PackagePath>build</PackagePath>
    		</Content>
    		<Content Include="MyStandards.CSharp.props">
    			<CopyToOutputDirectory>Always</CopyToOutputDirectory>
    			<PackagePath>build</PackagePath>
    		</Content>
    	</ItemGroup>
    
    	<ItemGroup>
    		<PackageReference Include="SonarAnalyzer.CSharp" Version="9.15.0.81779" PrivateAssets="None" />
    	</ItemGroup>
    
    </Project>
    

    MyStandards.CSharp.props

    <Project>
    	<Target Name="CopyFiles" BeforeTargets="Build">
    		<ItemGroup>
    			<SourceFile Include="$(MSBuildThisFileDirectory)main.editorconfig"></SourceFile>
    			<DestFile Include="$(ProjectDir)\.editorconfig"></DestFile>
    		</ItemGroup>
    		<Copy SourceFiles="@(SourceFile)" DestinationFiles="@(DestFile)"></Copy>
    	</Target>
    </Project>
    

    What does it do?

    When this project is built as a Nuget package, it contains the .props and main.editorconfig files in the build folder of the package. Additionally, it has a dependency on SonarAnalyzer.CSharp.

    Make it a Development Dependency

    Note that IncludeBuildOutput is set to false, and DevelopmentDependency is set to true. This tells Nuget not to include the empty MyStandards.CSharp.dll file that gets built, and to package this Nuget file as a development-only dependency.

    When referenced by another project, the reference will be added with the following defaults:

    <PackageReference Include="MyStandards.CSharp" Version="1.0.0">
      <PrivateAssets>all</PrivateAssets>
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
    </PackageReference>

    So, the consuming project won’t expose a dependency on my standards package, meaning no “chain” of references.

    Set the Settings

    The .props file does the yeoman’s work: it copies main.editorconfig into the target project’s folder as .editorconfig. Notice that this happens before every build, which prevents individual projects from overriding the .editorconfig file.

    Referencing Additional Development Tools

    As I mentioned, I’m a fan of the Sonar toolset for linting. As such, I like to add their built-in analyzers wherever I can. This includes adding a reference to SonarAnalyzer.CSharp to get some built-in analysis.

    Since SonarAnalyzer.CSharp is a development dependency, it would normally not flow through to projects which reference my package. It would be nice if my standards package carried it over to projects which use it. To do this, I set PrivateAssets="None" in the PackageReference and removed the default settings.

    The Road Ahead

    With my C# standards wrapped nicely in a Nuget package, my next steps will be to automate the build pipeline for the packages so that changes can be tracked.

    For my next trick, I would like to dig into eslint and prettier, to see if there is a way to create my own extensions for those tools to distribute our standards for JavaScript/TypeScript. However, that task may have to wait a bit, as I have some build pipelines to create.

  • Using Architectural Decision Records

    Recently, I was exposed to Architectural Decision Records (ADRs) as a way to document software architecture decisions quickly and effectively. The more I’ve learned, the more I like.

    Building Evolutionary Architectures

    Architecture, software or otherwise, is typically a tedious and time-consuming process. We must design to meet existing requirements, but have to anticipate potential future requirements without creating an overly complex (i.e. expensive) system. This is typically accomplished through a variety of patterns which aim to decouple components and make them easily replaceable.

    Replace it?!? Everything ages, even software. If I have learned one thing, it is that the code you write today “should” not exist in its same form in the future. All code needs to change and evolve as the platforms and frameworks we use change and evolve. Building Evolutionary Architectures is a great read for any software engineer, but I would suggest it to be required reading for any software architect.

    How architecture is documented and communicated has evolved in the last 30 years. The IEEE published an excellent white paper outlining how early architecture practices have evolved into these ADRs.

    But what IS it?

    Architectural Decision Records (ADRs) are, quite simply, records of decisions made that affect the architecture of the system. ADRs are simple text documents with a concise format that anyone (architect, engineer, or otherwise) can consume quickly. ADRs are stored next to the code (in the same repository), so they are subject to the same peer review process. Additionally, ADRs add the ability to track decisions and changes over time.

    Let’s consider a simple example, taken straight from the adr-manager tool used to create ADRs in a GitHub repository. The context/problem is pretty simple:

    We want to record architectural decisions made in this project. Which format and structure should these records follow?

    The document then outlines some potential options for tracking architectural decisions. In the end, the document states that MADR 2.1.2 will be used, and outlines the rationale behind the decision.

    It may seem trivial, but putting this document in the repository, accessible to all, gives great visibility to the decision. Changes, if any, are subject to peer review.

    Now, in this case, say 6 months down the road the team decides that they hate MADR 2.1.2 and want to use Y-Statements instead. That’s easy: create a new ADR that supersedes the old one. In the new ADR, the same content should exist: what’s the problem, what are our options, and define the final decision and rationale. Link the two so that it’s easy to see related ADRs, and you are ready to go.
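
    To make that concrete, a bare-bones MADR-style record might look like the sketch below (the content is invented for illustration):

    # Use MADR for Architectural Decision Records

    ## Context and Problem Statement

    We want to record architectural decisions made in this project.
    Which format and structure should these records follow?

    ## Considered Options

    * MADR 2.1.2
    * Y-Statements
    * Formless (no conventions)

    ## Decision Outcome

    Chosen option: "MADR 2.1.2", because it is lightweight, has tooling support,
    and keeps the context, options, and rationale in one reviewable file.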

    Tools of the Trade

    There is an ADR GitHub organization that is focused on standardizing some of the nomenclature around ADRs. The page includes links to several articles and blog posts dedicated to describing ADRs and how to implement and use them within your organization. Additionally, the organization has started to collect and improve upon some of the tooling for supporting an ADR process. One that I found beneficial is ADR-Manager.

    ADR-Manager is a simple website that lets you interact with your GitHub repositories (using your own GitHub credentials) to create and edit ADRs. Through your browser, you connect to your repositories and view/edit ADR documents. It generates MADR-styled files within your repository which can be committed to branches with appropriate comments.

    Make it so…

    As I work to get my feet under me in my new role, the idea of starting to use ADRs has gained some traction. As we continue to scale, having these decision records readily available for new teams will be important to maintain consistency across the platform.

    As I continue to work through my home projects, I will use ADRs to document decisions I make in those projects. While no one may read them, it’s a good habit to build.

  • Pack, pack, pack, they call him the Packer….

    Through sheer happenstance I came across a posting for The Jaggerz playing near me and was taken back to my first time hearing “The Rapper.” I happened to go to school with one of the members’ kids, which made it all the more fun to reminisce.

    But I digress. I spent time a while back getting Packer running at home to take care of some of my machine provisioning. At work, I have been looking for an automated mechanism to keep some of our build agents up to date, so I revisited this and came up with a plan involving Packer and Terraform.

    The Problem

    My current problem centers around the need to update our machine images weekly while still using Terraform to manage our infrastructure. In the case of Azure DevOps, we can provision VM Scale Sets and assign those Scale Sets to an Azure DevOps agent pool. But when I want to update that image, I can do it in two different ways:

    1. Using Azure CLI, I can update the Scale Set directly.
    2. I can modify the Terraform repository to update the image and then re-run Terraform.

    Now, #1 sounds easy, right? Run a command and I’m done. But it defeats the purpose of Terraform, which is to maintain infrastructure as code. So, I started down path #2.

    Packer Revisited

    I previously used Packer to provision Hyper-V VMs, but the provisioner for azure-rm is pretty similar. I was able to configure a simple Windows-based VM and get the only application I needed installed with a Powershell script.

    One app? On a build agent? Yes, this is a very particular agent, and I didn’t want to install it everywhere, so I created a single agent image with the necessary software.

    Mind you, I have been using the runner-images Packer projects to build my Ubuntu agent at home, and we use them to build both Windows and Ubuntu images at work, so, by comparison, my project is wee tiny. But it gives me a good platform to test. So I put a small repository together with a basic template and a Powershell script to install my application, and it was time to build.
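
    To give a sense of scale, the template boils down to something like this (a trimmed-down sketch, not my actual windows2022.pkr.hcl; the resource group, SKU, and script name are placeholders, and the required_plugins block is omitted):

    variable "client_id" {
      type    = string
      default = env("ARM_CLIENT_ID") # picked up from the pipeline environment
    }

    # client_secret, subscription_id, and tenant_id variables follow the same pattern

    variable "vm_name" {
      type = string
    }

    source "azure-arm" "windows" {
      client_id                         = var.client_id
      # client_secret, subscription_id, and tenant_id omitted for brevity
      location                          = "eastus"
      managed_image_name                = var.vm_name
      managed_image_resource_group_name = "rg-agent-images"
      os_type                           = "Windows"
      image_publisher                   = "MicrosoftWindowsServer"
      image_offer                       = "WindowsServer"
      image_sku                         = "2022-datacenter-azure-edition"
      vm_size                           = "Standard_D2s_v5"
      communicator                      = "winrm"
      winrm_use_ssl                     = true
      winrm_insecure                    = true
      winrm_username                    = "packer"
    }

    build {
      sources = ["source.azure-arm.windows"]

      provisioner "powershell" {
        script = "install-my-app.ps1" # the one application this agent needs
      }
    }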

    Creating the Build Pipeline

    My build process should be, for all intents and purposes, one step that runs the packer build command, which will create the image in Azure. I found the PackerBuild@1 task, and thought my job was done. It would seem that the Azure DevOps task hasn’t kept up with the times; either that, or Packer’s CLI needs help.

    I wanted to use the PackerBuild@1 task to take advantage of the service connection. I figured, if I could run the task with a service connection, I wouldn’t have to store service principal credentials in a variable library. As it turns out… well, I would have to do that anyway.

    When I tried to run the task, I got an error that “packer fix only supports json.” My template is in HCL format, and everything I have seen suggests that Packer would rather move to HCL. Not to be beaten, I looked at the code for the task to see if I could skip the fix step.

    Not only could I not skip that step, but when I dug into the task, I noticed that I wouldn’t be able to use the service connection parameter with a custom template. So with that, my dreams of using a fancy task went out the door.

    Plan B? Use Packer’s ability to grab environment variables as default values and set the environment variables in a Powershell script before I run the Packer build. It is not super pretty, but it works.

    - pwsh: | 
        $env:ARM_CLIENT_ID = "$(azure-client-id)"
        $env:ARM_CLIENT_SECRET = "$(azure-client-secret)"
        $env:ARM_SUBSCRIPTION_ID = "$(azure-subscription-id)"
        $env:ARM_TENANT_ID = "$(azure-tenant-id)"
        Invoke-Expression "& packer build --var-file values.pkrvars.hcl -var vm_name=vm-image-$(Build.BuildNumber) windows2022.pkr.hcl"
      displayName: Build Packer

    On To Terraform!

    The next step was terraforming the VM Scale Set. If you are familiar with Terraform, the VM Scale Set resource in the AzureRM provider is pretty easy to use. I used the Windows VM Scale Set, as my agents will be Windows based. The only “trick” is finding the image you created, but, thankfully, that can be done by name using a data block.

    data "azurerm_image" "image" {
      name                = var.image_name
      resource_group_name = data.azurerm_resource_group.vmss_group.name
    }

    From there, just set source_image_id to data.azurerm_image.image.id, and you’re good. Why look this up by name? It makes automation very easy.
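
    Abbreviated, the scale set side looks something like the sketch below (most values are placeholders, and plenty of required settings are trimmed for brevity):

    resource "azurerm_windows_virtual_machine_scale_set" "agents" {
      name                 = "vmss-build-agents"
      computer_name_prefix = "agent"
      resource_group_name  = data.azurerm_resource_group.vmss_group.name
      location             = data.azurerm_resource_group.vmss_group.location
      sku                  = "Standard_D4s_v5"
      instances            = 2
      admin_username       = "agentadmin"
      admin_password       = var.admin_password

      # Point the scale set at the image Packer just built
      source_image_id = data.azurerm_image.image.id

      os_disk {
        caching              = "ReadWrite"
        storage_account_type = "Standard_LRS"
      }

      network_interface {
        name    = "vmss-nic"
        primary = true

        ip_configuration {
          name      = "internal"
          primary   = true
          subnet_id = var.subnet_id
        }
      }
    }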

    Gluing the two together

    So I have a pipeline that builds an image, and I have another pipeline that executes the Terraform plan/apply steps. The latter is triggered on a commit to main in the Terraform repository, so how can I trigger a new build?

    All I really need to do is “reach in” to the Terraform repository, update the variable file with the new image name, and commit it. This can be automated, and I spent a lot of time doing just that as part of implementing our GitOps workflow. In fact, as I type this, I realize that I probably owe a post or two on how exactly we have done that. But, using some scripted git commands, it is pretty easy.

    So, my Packer build pipeline will check out the Terraform repository, change the image name in the variable file, and commit. This is where the image name is important: Packer doesn’t spit out the Azure image ID (at least, not that I saw), so having a known name makes it easy for me to just tell Terraform to use the new image name, and it uses that to look up the image ID.
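
    The glue ends up being a handful of scripted git commands in the Packer pipeline, roughly like this (the repository URL and variable file name are placeholders, and authentication/git identity setup is omitted):

    - pwsh: |
        git clone "https://dev.azure.com/myorg/myproject/_git/terraform-infra"
        $varFile = "terraform-infra/images.auto.tfvars"
        (Get-Content $varFile) -replace 'image_name\s*=.*', 'image_name = "vm-image-$(Build.BuildNumber)"' |
          Set-Content $varFile
        git -C terraform-infra commit -am "Use agent image vm-image-$(Build.BuildNumber)"
        git -C terraform-infra push
      displayName: Update Terraform image name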

    What’s next?

    This was admittedly pretty easy, but only because I have been using Packer and Terraform for some time now. The learning curve is steep, but as I look across our portfolio, I can see areas where these types of practices can help us by allowing us to build fresh machine images on a regular cadence, and stop treating our servers as pets. I hope to document some of this for our internal teams and start driving them down a path of better deployment.

  • Update: Creating an Nginx-based web server image – React Edition

    This is a short update to Creating a simple Nginx-based web server image which took me about an hour to figure out and 10 seconds to fix…..

    404, Ad-Rock’s out the door

    Yes, I know it’s “four on the floor, Ad-Rock’s at the door.” While working on hosting one of my project React apps in a Docker image, I noticed that the application loaded fine, but I was getting 404 errors after navigating to a sub-page (like /clients) and then hitting refresh.

    I checked the container logs, and lo and behold, there were 404 errors for those paths.

    Letting React Router do its thing

    As it turns out, my original Nginx configuration was missing a key line:

    server {
      listen 8080;
      server_name localhost;
      port_in_redirect off;

      location / {
        root /usr/share/nginx/html;
        index index.html index.htm;

        # The line below is required for react-router
        try_files $uri $uri/ /index.html;
      }

      error_page 500 502 503 504 /50x.html;
      location = /50x.html {
        root /usr/share/nginx/html;
      }
    }

    That little try_files line made sure to push unknown paths back to index.html, where react-router would handle them.

    And with that line, the 404s disappeared and the React app was running as expected.

  • Configuring React SPAs at Runtime

    Configuring a SPA is a tricky affair. I found some tools to make it a little bit easier, but the approach should still be used with a fair amount of caution.

    The App

    I built a small React UI to view some additional information that I am storing in my Unifi Controller for network devices. Using the notes field on the Unifi device, I store some additional fields in JSON format for other applications to use. It is nothing wild, but it allows me to have some additional detail on my network devices.

    In true API-first fashion, any user-friendly interface is an afterthought… Since most of my interaction with the service is through Powershell scripts, I did not bother to create the UI.

    However, I got a little tired of firing up Postman to edit a few things, so I spun up a React SPA for the task.

    Hosting the SPA

    I opted to host the SPA in its own container running Nginx to serve the files. Sure, I could have thrown the SPA inside the API and hosted it using static files, which is a perfectly reasonable and efficient method. My long-term plan is to create a new “backend for frontend” API project that hosts this SPA and provides appropriate proxying to various backend services, including my Unifi API. But I want to get this out, so a quick Nginx container it is.

    I previously posted about creating a simple web server image using Nginx. Those instructions (and an important update for React) served me well to get my SPA running, but how can I configure the application at runtime? I want to build the image once and deploy it any number of times, so having to rebuild just to change a URL seems crazy.

    Enter react-runtime-config

    Through some searching, I found the react-runtime-config library. This library lets me set configuration values either in local storage, in a configuration file, or in the application as a default value. The library’s documentation is solid and enough to get you started.

    But, wait, how do I use this to inject my settings??? ConfigMaps! Justin Polidori describes how to use Kubernetes ConfigMaps and volume mounts to replace the config.js file in the container with one from the Kubernetes ConfigMap.

    It took a little finagling since I am using a library chart for my Helm templating, but the steps were something like this (a rough sketch of the Kubernetes side follows the list):

    1. Configure the React app using react-runtime-config. I added a config.js file to the public folder, and made sure my app was picking settings from that file.
    2. Create a ConfigMap with my window.* settings.
    3. Mount that ConfigMap in my container as /path/to/public/config.js
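
    The Kubernetes side looks something like this (names, values, and the mount path are placeholders; my actual setup goes through the Helm library chart):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-react-app-config
    data:
      config.js: |
        window.MYAPP_API_URL = "https://api.example.local";
        window.MYAPP_CLIENT_ID = "my-public-client-id";
    ---
    # In the Deployment's pod spec, the ConfigMap replaces the baked-in config.js:
    #   volumes:
    #     - name: runtime-config
    #       configMap:
    #         name: my-react-app-config
    #   containers:
    #     - name: ui
    #       volumeMounts:
    #         - name: runtime-config
    #           mountPath: /usr/share/nginx/html/config.js
    #           subPath: config.js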

    Voilà! I can now control some of the settings of my React app dynamically.

    Caveat Emptor!

    I cannot stress this enough: THIS METHOD SHOULD NOT BE USED FOR SECRET OR SENSITIVE INFORMATION. Full stop.

    Generally, the problem with SPAs, whether they are React, Angular, or pick your favorite framework, is that they live on the client in plain text. Hit F12 in your favorite browser, and you see the application code.

    Hosting settings like this means the settings for my application are available just by navigating to /config.js. Therefore, it is vital that these settings are not in any way sensitive values. In my case, I am only storing a few public URLs and a Client ID, none of which are sensitive values.

    The Backend for Frontend pattern allows for more security and control in general. I plan on moving to this when I create a BFF API project for my template.

  • Building Software Longevity

    The “Ship of Theseus” thought experiment is an interesting way to start fights with historians, but in software, replacing old parts with new parts is required for longevity. Designing software in such a way that every piece can be replaced is vital to building software for the future.

    The Question

    The Wikipedia article presents the experiment as follows:

    According to legend, Theseus, the mythical Greek founder-king of Athens, rescued the children of Athens from King Minos after slaying the minotaur and then escaped onto a ship going to Delos. Each year, the Athenians commemorated this legend by taking the ship on a pilgrimage to Delos to honor Apollo. A question was raised by ancient philosophers: After several centuries of maintenance, if each individual part of the Ship of Theseus was replaced, one at a time, was it still the same ship?

    Ship of Theseus – Wikipedia

    The Software Equivalent

    Consider Microsoft Word. Released in 1983, Word is approaching its 40th anniversary. And, while I do not have access to its internal workings, I am willing to bet that most, if not all, of the 1983 code has since been replaced by updated modules. So, while it is still called Word, its parts are much newer than the original 1983 iteration. I am sure if I sat here long enough, I could identify other applications with similar histories. The branding does not change, the core functionality does not change, only the code changes.

    Like the wood on the Ship of Theseus, software rots. And it rots fast. Frameworks and languages evolve quickly to take advantage of hardware updates, and the software that uses those must do the same.

    Design for Replacement

    We use the term technical debt to categorize the conscious choices we make to prioritize time to market over perfect code. It is worthwhile to consider that software has a “half life” or “depreciation factor” as well: while your code may work today, chances are, without appropriate maintenance, it will rot into something that is no longer able to serve the needs of your customers.

    If I had a “one size fits all” solution to software rot, I would probably be a millionaire. The truth is, product managers, engineers, and architects must be aware of software rot. Not only must we invest the time into fixing the rot, but we must design our products to allow every piece of the application to be replaced, even as the software still stands.

    This is especially critical in Software as a Service (SaaS) offerings, where we do not have the luxury of long downtime windows for upgrades or replacements. To our customers, we should operate continuously, with upgrades happening almost invisibly. This requires the foresight to build replaceable components and the commitment to fixing and replacing components regularly. If you cannot replace components of your software, there will come a day when your software no longer functions for your customers.

  • Tech Tips – Adding Linting to C# Projects

    Among the JavaScript/TypeScript community, ESLint and Prettier are very popular ways to enforce some standards and formatting within your code. In trying to find similar functionality for C#, I did not find anything as ubiquitous as ESLint/Prettier, but there are some front runners.

    Roslyn Analyzers and Dotnet Format

    John Reilly has a great post on enabling Roslyn Analyzers in your .Net applications. He also posted some instructions on using the dotnet format tool as a “Prettier for C#” tool.

    I will not bore you by re-hashing his posts, but following them allowed me to apply some basic formatting and linting rules to my projects. Additionally, the Roslyn Analyzers can be made to generate build warnings and errors, so any build worth its salt (one that fails on warnings) will be free of undesirable code.
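
    In practice, turning the analyzers up is mostly a few csproj properties. A sketch of the kind of settings involved (how strict you go is up to you):

    <PropertyGroup>
      <!-- Turn on the built-in .Net analyzers and make style issues fail the build -->
      <EnableNETAnalyzers>true</EnableNETAnalyzers>
      <AnalysisMode>AllEnabledByDefault</AnalysisMode>
      <EnforceCodeStyleInBuild>true</EnforceCodeStyleInBuild>
      <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
    </PropertyGroup>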

    SonarLint

    I was not really content to stop there, and a quick Google search led me to an interesting article around linting options for C#. One of those was SonarLint. While SonarLint bills itself as an IDE plugin, it has a Roslyn Analyzer package (SonarAnalyzer.CSharp) that can be added and configured in a similar fashion to the built-in Roslyn Analyzers.

    Following the instructions in the article, I installed SonarAnalyzer and configured it alongside the base Roslyn Analyzers. It produced a few more warnings, particularly around some best practices from Sonar that go beyond what the Microsoft analyzers cover.

    SonarQube, my old friend

    Getting into SonarLint brought me back to SonarQube. What seems like forever ago, but really was only a few years ago, SonarQube was something of a go-to tool in my position. We had hoped to gather a portfolio-wide view of our bugs, vulnerabilities, and code smells. For one reason or another, we abandoned that particular tool set.

    After putting SonarLint in place, I was interested in jumping back in, at least in my home lab, to see what kind of information I could get out of Sonar. I found the Kubernetes instructions and got to work setting up a quick instance in my production environment, alongside my Proget instance.

    Once installed, I have to say, the application has done well to improve the user experience. Tying into my Azure DevOps instance was quick and easy, with very good in-application tutorials for that configuration. I set up a project based on the pipeline for my test application, made my pipeline changes, and waited for results…

    Failed! I kept getting errors about not being allowed to set the branch name in the Community edition. That is fair, and for my projects, I only really need analysis on the main branch, so I set up analysis to only happen on builds of main. Failed again!

    There seems to be a known issue around this, but thanks to the SonarSource community, I found a workaround for my pipeline. With that in place, I had my code analysis running, but, well, what do I do with it? I can add quality gates to fail builds based on missing code coverage, tweak my rule sets, and have a “portfolio wide” view of my private projects.

    Setting the Standard

    For any open source C# projects, simply building the linting/formatting into the build/commit process might be enough. If project maintainers are so inclined, they can add their projects to SonarCloud and get the benefits of SonarQube (including adding quality gates).

    For enterprise customers, the move to a paid tier depends on how much visibility you want in your code base. Sonar can be an expensive endeavor, but provides a lot of quality and tech debt tracking that you may find useful. My suggestion? Start with a trial or the community version, and see if you like it before you start requesting budget.

    Either way, setting standards for formatting and analysis on your C# projects makes contributions across teams much easier and safer. I suggest you try it!