• Using Git Hooks on heterogeneous repositories

    I have had great luck using Git hooks to run tools before commits or pushes. Running a linter on staged changes before code is committed, and verifying that tests pass before code is pushed, makes it easier for developers to write clean code.

    Doing this with heterogeneous repositories, or repos that contain projects of different tech stacks, can be a bit daunting. The tools you want for one project aren’t the tools you want for another.

    How to “Hook”?

    Hooks can be created directly in your repository following Git’s instructions. However, these scripts are seldom cross-OS compatible, so they often need some “help” in terms of compatibility. Additionally, the scripts themselves can be harder to find depending on your environment. VS Code, for example, hides the .git folder by default.

    Having used NPM in the past, Husky has always been at the forefront of my mind when it comes to tooling around Git hooks. It helps by providing some cross-platform compatibility and easier visibility, as all scripts are in the .husky folder in your repository. However, it requires some things that a pure .Net developer may not have (like NPM or some other package manager).

    In my current position, though, our front ends rely on either Angular or React Native, so the chance that our developers have NPM installed is 100%. With that in mind, I put some automated linting and building into our projects.

    Linting Different Projects

    For this article, assume I have a repository with the following outline:

    • docs/
      • General Markdown documentation
    • /source
      • frontend/ – .Net API project which hosts my SPA
      • ui/ – The SPA project (in my case, Angular)

    I like lint-staged as a tool to execute linting on staged files. Why only staged files? Generally, large projects are going to have a legacy of files with formatting issues. Going all in and formatting everything all at once may not be possible. But if you format as you make changes, eventually most everything should be formatted well.

    With the outline above, I want different tools to run based on which files need linting. For source/frontend, I want to use dotnet format, but for source/ui, I want to use ESLint and Prettier.

    With lint-staged, you can configure individual folders using a configuration file. I added a .lintstagedrc file to each folder and specified the appropriate linter. For the .Net project:

    {
        "*.cs": "dotnet format --include"
    }

    And for the Angular project:

    {
        "*": ["prettier", "eslint --fix"]
    }

    Also, since I do have some documentation files, I added a .lintstagedrc file to the repository root to run Prettier on all my Markdown files.

    {
        "*.md": "prettier"
    }
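    Taken together, the repository ends up with three lint-staged configs, each scoped to its own folder (lint-staged picks the config closest to each staged file):

```text
.lintstagedrc                  # repo root: Prettier for Markdown docs
source/frontend/.lintstagedrc  # dotnet format for the .Net API
source/ui/.lintstagedrc        # Prettier + ESLint for the Angular app
```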

    A Note on Settings

    Each linter has its own settings, so follow the instructions for whatever linter you may be running. Yes, I know, for the .Net project, I’m only running it on *.cs files. This may change in the future, but as of right now, I’m just getting to know what dotnet format does and how much I want to use it.

    Setting Up the Hooks

    The hooks are, in fact, very easy to configure: follow the instructions on getting started from Husky. The configured hooks for pre-commit and pre-push are below, respectively:

    npx lint-staged --relative
    dotnet build source/mySolution.sln

    The pre-commit hook utilizes lint-staged to execute the appropriate linter. The pre-push hook simply runs a build of the solution which, because of Microsoft’s .esproj project type, means I get an NPM build and a .Net build in the same step.
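    For reference, each hook ends up as a one-line shell script in the .husky folder. A minimal sketch of creating them by hand (Husky’s CLI can scaffold these for you; the contents match the two hooks above):

```shell
# Each Husky hook is a plain script named after the Git hook it handles.
mkdir -p .husky
printf 'npx lint-staged --relative\n' > .husky/pre-commit
printf 'dotnet build source/mySolution.sln\n' > .husky/pre-push
chmod +x .husky/pre-commit .husky/pre-push
```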

    What’s next?

    I will be updating the pre-push hook to include testing for both the Angular app and the .Net API. The goal is to provide our teams with a template to write their own tests, and have those be executed before they push their code. This level of automation will help our engineers produce cleaner code from the start, alleviating the need for massive cleanup efforts down the line.

  • Spoolman for Filament Management

    “You don’t know what you got ’til it’s gone” is a great song line, but a terrible inventory management approach. As I started to stock up on filament for the 3D printer, it occurred to me that I needed a way to track my inventory.

    The Community Comes Through

    I searched around for different filament management solutions and landed on Spoolman. It seemed like a pretty solid fit for what I needed. The owner also set up builds for container images, so it was fairly easy to configure a custom chart to run an instance on my internal tools cluster.

    The client UI is pretty easy to use, and the ability to add extra fields to the different modules makes the solution very extensible. I was immediately impressed and started entering information about vendors, filaments, and spools.

    Enhancing the Solution

    Since I am using a Bambu Labs printer and Bambu Studio, I do not have the ability to integrate Bambu into Spoolman to report filament usage. I searched around, but it does not seem that the Bambu printer reports such usage.

    My current plan for managing filament is to weigh the spool when I open it, and then weigh it again after each use. The difference is the amount of filament I have used. But to calculate the amount remaining, I need to know the weight of an empty spool. Assuming most manufacturers use the same spools, that should not be too hard to figure out long term.
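    The arithmetic itself is trivial; a sketch with made-up weights (the empty-spool weight is the number I still need to pin down per manufacturer):

```shell
# Made-up example weights, in grams
initial=1238    # spool as weighed when first opened
current=1105    # spool after some printing
empty=238       # assumed empty-spool weight for this manufacturer

used=$((initial - current))       # filament consumed so far
remaining=$((current - empty))    # filament still on the spool
echo "used: ${used}g, remaining: ${remaining}g"
```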

    Spoolman is not quite set up for that type of usage. Weight and spool weight are set at the filament level and cannot be overridden at the spool level. Most spools will not contain exactly 1000g of filament, so the ability to track initial weight at the spool level is critical. Additionally, I want to support partial spools, including re-spooling.

    So, using all the Python I have learned recently, I took a crack at updating the API and UI to support this very scenario. In a “do no harm” spirit, I made sure that all the integration tests were running correctly, then went about adding the new fields and some of the new default functionality. After the updated functionality was in place, I added a few new integration tests to verify my work.

    Oddly enough, as I started working on it, I found four existing feature requests related to the changes I was suggesting. It took me a few nights, but I generated a pull request for the changes.

    And Now, We Wait…

    With my PR in place, I wait. The beauty of open source is that anyone can contribute, but the owners have the final say. This also means the owners need to respond, and most owners aren’t doing this as a full time job. So sometimes, there isn’t anything to do but wait.

    I’m hopeful that my changes will be accepted, but for now, I’m using Spoolman as-is, and just doing some of the “math” myself. It is definitely helping me keep track of my filament, and I’m keeping an eye on possible integrations with the Bambu ecosystem.

  • A new To(y)ol

    I have wanted to pick up a Bambu Labs P1S for a while now. I saved up enough to finally pull the trigger, and after a few days of use, I could not be more pleased with my decision.

    Why Bambu?

    There are literally dozens of 3D printers out there, and choosing can be difficult. I certainly spent a great deal of time mulling over the various options. But, as with most things, the best advice was from people who use them. Luckily, I have not one, but two resources at my disposal.

    An old colleague of mine, Justin, is usually the first to try such things. I typically joke that I do the same things Justin does, just lagging behind in both time and scale. That is, Justin does it bigger and better. He and I chat frequently, and his input was invaluable. With regard to 3D printers, the one comment he made still resonates:

    I want to design stuff and print it, not tinker endlessly with the printer itself.

    Justin had an Ender (I do not recall the model) for a bit, but never really messed with it too much. After he picked up a P1P, the design and printing started to flow. He had nothing but good things to say about most things Bambu, save, perhaps, the community… We’ll get to that in a minute.

    Additionally, I discussed different printers with the proprietor of Pittsburgh3DPrints.com. He has a number of different brands, and services them all, and he recommended the Bambu as a great first printer, for many of the same reasons. What reasons?

    1. Ease of Use – From “cut box” to print was honestly 15 minutes. The P1S is extremely user-friendly in terms of getting to printing.
    2. Feature Packed – Sure, the price tag for the full P1S combo is a little higher than most printers. But you get, out of the box, the ability to print most filaments, including PLA/PETG/ABS/ASA, as well as four-color multi-color prints.
    3. Expandable – Additional AMS units get you up to 16 color prints.
    4. Ecosystem – Bambu has been really trying to get makerworld.com going, and they have had some success. The makerworld tie-in to Bambu Studio makes printing others’ designs quick and easy.

    First Impressions

    As I mentioned above, unboxing was easy with the included Quick Setup guide and their unboxing video. The first thing I printed was the included model for the scraper handle and holder. I do not recall the exact print time, but it was under an hour, and I had my first print!

    The next two prints were some fidget toys for my kids. I can tell you these took 48 minutes each, as both kids anxiously counted down every single one of those minutes. One feature the P1S has is the ability to capture time lapse videos of your prints. Here is the one for one of the fidget rings.

    Now, I do laugh, because the running joke with those who own a 3D printer is that you spend most of your time printing stuff for the printer, which is highly meta and also seems like a gimmick to buy more filament. The P1S ejects waste out of the back into, well, thin air. Many folks have designed various waste collection solutions, most affectionately known as “poop chutes.” I found one that I liked and set about slicing it for printing.

    Oops…

    This is where I ran into a little issue. I tried to start one of the prints (for the bucket) from the Bambu app. Instead of slicing on my PC and sending it to the printer, I used the published profile. That, well, failed. For whatever reason, the bed temperature was set to 35°C, instead of the 55°C that the Bambu Studio slicer sets.

    I’m not sure if the profile had a cool bed setting or what, but that threw off adhesion of the print to the bed, and I ended up with a large mess. I restarted the print from Bambu Studio and had no problems. Printing the three pieces of the chute took about 9 hours, which represents my longest print to date.

    Next up?

    I have a few things on my list to print. Most center around organizing my office, and I am looking to use the Gridfinity system to keep things neat. My wife asked for a few bases for some signage that she uses for work. That requires some design on my part, so it is a nice challenge for me.

    Both my son and one of our neighbors have expressed some interest in design and printing, so I look forward to passing on some of what I’ve learned to new designers looking to bring their designs to reality.

  • Foyer Upgrade

    Not everything I do is “nerdy.” I enjoy making things with my hands, and my wife has an eye for design and a sadistic love of painting. Combined, we like to spend some time redesigning rooms around the house. We save a ton doing the work ourselves, and for me it is a great break from the keyboard.

    The Idea

    My wife has been eyeing up our foyer for quite some time. Our foyer is a two story entry featuring stairs leading to our second floor, doors to each of our office spaces, and a hallway back to our kitchen and living room area. Off of the foyer is a half bath.

    Foyer View from Front Door

    As part of our kitchen renovation a few years ago, we replaced the vinyl flooring with LVT, and had no plans to change that. However, we wanted to open up the stairway with a different railing and add some applied moldings to the walls.

    The Plan

    Timing on this was a little odd. I was well aware of the work involved, and we did not want a construction zone around Christmas, so we split the work into two phases.

    Phase 1 was removing the half-wall on the steps and replacing it with a newel post and railing. It also included removing the existing carpet, painting the stairs, and installing carpet treads.

    Phase 2 would be the installation of the applied moldings, some new light fixtures, and a lot of painting.

    Phase 1

    Demo took about a day. We ripped out the old carpet, cut the half wall down to the existing riser, and removed the quarter round and baseboards. We also removed the old air return grates, as we ordered some new ones and would not be re-using the old ones.

    The installation of the newel and banister was not incredibly difficult, but took a while to ensure everything was accurately cut and secured. I built out a good deal of blocking at the end of the stairs to ensure the newel was solidly anchored.

    After everything was installed, there were a few days of drywall patching to get everything back to an acceptable state. The steps took a few coats of floor paint and a new set of stair treads. The treads are secured with adhesive, but easily replaceable, which I am sure will be helpful in the future.

    All in all, Phase 1 took about 8 weeks. Mind you, this is 8 weeks of work some evenings and weekends, working around sports and social calendars for the kids and ourselves. If you were working on this full time, you could probably get it done in 5-8 working days. We finished Phase 1 just in time for the new banister to get some holiday decorations!

    Phase 1 Complete

    Phase 2

    Phase 2 was, for me, the more daunting of the two. The plan was to take applied moldings all the way to the top of the half wall in the loft, and create a framed look with the moldings. This required 1″x4″ boards (we used pre-primed MDF) and 11/16″ concave molding, and a lot of both.

    Before we installed the molding, we hired a painter to come in and put a fresh coat on the ceiling and the upper walls of the foyer. While my wife loves to paint, getting up on a ladder to hit the 18′ ceilings and cut in along the walls did not appeal to her, and I agreed.

    The process for the 1″x4″ boards was pretty simple: frame each wall, running horizontal boards the length of the wall, then vertical boards on each end. In between, run boards horizontally to divide the wall into three, then add vertical boards to complete the rectangles. Around the door frames, we used 1″x4″ boards to build out the existing molding, giving the doors a larger look.

    After the 1″x4″ installation, I went back through and installed the 11/16″ concave molding inside of each square, creating a bit of a picture frame look. This was, well, a lot of cutting and placing. In particular, the triangles on the walls along the steps were challenging because of the angles that needed to be cut. I have done more geometry in the last few months than I had in a long while.

    After everything was buttoned up, my wife took on the task of painting all the installed trim. All it took was time, and a lot of it: I think that task alone took her about 4 days. With the painting complete, I installed the last of the quarter round and she added some design touches.

    The last task was renting a 16′ step ladder to change out the chandelier. I had done this once before, and was not looking forward to doing it again. The combination of wrestling the ladder into the foyer, setting it up, and then climbing up and down a few dozen times makes the process somewhat cumbersome.

    Phase 2 took about 9-10 weeks, with the same caveat that it was not full time work. Full time, you are probably looking at a good 8-10 business days. But the end result was well worth it.

    After Pictures

    Overall, we are happy with the results. The style matches my wife’s office nicely, and transitions well into the kitchen and living area.

  • Cleaning out the cupboard

    I have been spending a little time in my server cabinet downstairs, trying to organize some things. I took what I thought would be a quick step in consolidation. It was not as quick as I had hoped.

    PoE Troubles

    When I got into the cabinet, I realized I had three PoE injectors in there, powering my three UniFi access points. Two of them are UAP-AC-LRs, and the third is a UAP-AC-M. My desire was simple: replace the three PoE injectors with a five-port PoE switch.

    So, I did what I thought would be a pretty simple process:

    1. Order the switch
    2. Using the MAC, assign it a static IP in my current UniFi Gateway DHCP.
    3. Plug in the switch.
    4. Take the cable coming out of the PoE injector and plug it into the switch.

    And that SHOULD be it: devices boot up, and I remove the PoE injectors. And, for two of the three devices, it worked fine.

    There’s always one

    One of the UAP-AC-LR endpoints simply would not turn on. I thought maybe it was the cable, so I checked the different cables, but still nothing. I swapped out the cables and nothing changed: one UAP-AC-LR and the UAP-AC-M worked, but the other UAP-AC-LR did not.

    I consulted the Oracle and came to realize that I had an old UAP-AC-LR, which only supports 24v passive PoE, not the 48v standard that my switch supplies. Evidently, the newer UAP-AC-LR and the UAP-AC-M support 802.3at (or at least a legacy protocol for 48v), but my oldest UAP-AC-LR simply does not turn on.

    The Choice

    There are two solutions, one more expensive than the other:

    1. Find an indoor PoE Converter (INS-3AF-I-G) that can convert the 48V coming from my new switch to the 24v that the device needs.
    2. Upgrade! Buy a U6 Pro to replace my old long range access point.

    I like the latter, as it would give me WiFi 6 support and start my upgrade in that area. However, I’m not ready for the price tag at the moment. I was able to find the converter for about $25, and that includes shipping and tax. So I opted for the more economical route in order to get rid of that last PoE injector.

  • Building a new home for my proxy server

    With my BananaPi up and running again, it’s time to put it back in the server cabinet. But it’s a little bit of a mess down there, and I decided my new 3D modeling skills could help me build a new home for the proxy.

    Find the Model

    When creating a case for something, having a 3D model of that thing is crucial. Sometimes, you have to model it yourself, but I have found that grabcad.com has a plethora of models available.

    A quick search yielded a great model of the Banana Pi. This one is so detailed that it has all of the individual components modeled. All I really needed were the mounting hole locations and the external ports, but the model is useful for much more. In fact, it was so detailed that I may have added a little extra detail to my case just because I could.

    General Design

    This case is extremely simple. The Banana Pi M5 (BPi from here on out) serves as my reverse proxy server, so all it really needs is power and a network cable. However, to make the case more useful, I added openings for most of the components. I say most because I fully enclosed the side with the GPIO header. I never use the GPIO pins on this board, so there was no need to open those up.

    For this particular case, the BPi will be mounted on the left rack, so I oriented the tabs and the board in such a way that the power/HDMI ports were facing inside the rack, not outside. This also means that the network and USB ports are in the back, which works for my use case.

    A right-mount case with power to the left would put the USB ports at the front of the rack. However, I only have one BPi, and it is going on the left, so I will not be putting that one together.

    Two Tops

    With the basic design in place, I exported the simple top, and got a little creative.

    Cool It Down…

    My BPi kit came with a few heatsinks and a 24mm fan. Considering the proxy is a 24×7 machine handling a good bit of traffic, I figured it best to keep that fan in place. So I threw in a cut-out for the fan and its mounting screws.

    Light it up!

    On the side where the SD card goes, I closed off everything except the SD card slot itself. This includes the LEDs. As I was going through the design, I thought that it might be nice to be able to peek into the server rack and see the power/activity LEDs. And, I mean, that rack already looks like a weird Christmas tree, so what are a few more lights?

    I had to do a bit of research to find the actual name for the little plastic pieces that carry LED light a distance. They are called “light pipes.” I found some 3mm light pipes on Amazon and thought they would be a good addition to the build.

    The detail of the BPi model I found made this task REALLY easy: I was able to locate the center of the LED and project it onto the case top. A few 3mm holes later, and the top is ready to accept light pipes.

    Put it all together

    I sent my design over to Pittsburgh3DPrints.com, which happens to be about two miles from my house. A couple days later, I had a PLA print of the model. As this is pretty much sitting in my server cabinet all day, PLA is perfect for this print.

    Oddly enough, the trick to this one was being able to turn off the BPi to install it. I had previously set up a temporary reverse proxy while I was messing with the BPi, so I routed all the traffic to the temporary proxy and then shut down the BPi.

    Some Trimming Required

    As I was designing this case, I went with a best-guess for tolerances. I was a little off. The USB and audio jack cutouts needed to be taller to allow the BPi to be installed in the case. Additionally, the standoffs were too thick and the fan screw holes too small. I fixed these in the design files; for the printed model, however, I just enlarged them a little with an X-Acto blade.

    I heat-set a female M3 insert into the case body, removed the fan from the old case top, and attached it to my new case top. After putting the BPi into place in the case bottom, I attached the fan wires to the GPIO pins for power. I put the case top on, placing the tabs near the USB ports first, and screwed in an M3 bolt. Finally, I dropped three light pipes into the case top; they protruded a little, so I cut them to sit flush while still transmitting light.

    Finished Product

    BPi in assembled case
    Case Components

    Overall I am happy with the print. From a design perspective, having a printer here would have alleviated some of the trimming, as I could have test printed some smaller parts before committing.

    I posted this print to Makerworld and Printables.com. Check out the full build there!

  • Updated Site Monitoring

    What seemed like forever ago, I put together a small project for simple site monitoring. My md-to-conf work enhanced my Python skills, and I thought it would be a good time to update the monitoring project.

    Housekeeping!

    First things first: I transferred the repository from my personal GitHub account to the spydersoft-consulting organization. Why? Separation of concerns, mostly. Since I fork open source repositories into my personal account, I do not want the open source projects I am publishing to be mixed in with those forks.

    After that, I went through the process of converting my source to a package with GitHub Actions to build and publish to PyPI.org. I also added testing, formatting, and linting, copying settings and actions from the md_to_conf project.

    Oh, SonarQube

    Adding the linting with SonarQube added a LOT of new warnings and errors. Everything from long lines to bad variable names. Since my build process does not succeed if those types of things are found, I went through the process of fixing all those warnings.

    The variable naming ones were a little difficult, as some of my classes mapped to the configuration file serialization. That meant that I had to change my configuration files as well as the code. I went through a few iterations, as I missed some.

    I also had to add a few tests, just so that the tests and coverage scripts get run. Could I have omitted the tests entirely? Sure. But a few tests to read some sample configuration files never hurt anyone.

    Complete!

    I got everything renamed and building pretty quickly, and added my PyPI.org API token to the repository for the actions. I provisioned a new analysis project in SonarCloud and merged everything into main. Creating a new GitHub release then triggered a publish to PyPI.org.

    Setting up the Raspberry Pi

    The last step was to get rid of the code on the Raspberry Pi, and use pip to install the package. This was relatively easy, with a few caveats.

    1. Use pip3 install instead of pip – I forgot the old Pi has both Python 2 and 3 installed.
    2. Fix the config files – I had to change my configuration file to reflect the variable name changes.
    3. Change the cron job – This one needs a little more explanation.

    For the last one, when changing the cron job, I had to point specifically to /usr/local/bin/pi-monitor, since that’s where pip installed it. My new cron job looks like this:

    SHELL=/bin/bash
    
    */5 * * * * pi cd /home/pi && /usr/local/bin/pi-monitor -c monitor.config.json 2>&1 | /usr/bin/logger -t PIMONITOR

    That runs the application and logs everything to syslog with the PIMONITOR tag.

    Did this take longer than I expected? Yeah, a little. Is it nice to have another open source project in my portfolio? Absolutely. Check it out if you are interested!

  • An epic journey…

    I got all the things I needed to diagnose my BananaPi M5 issues. And I took a very long, winding road to a very simple solution. But I learned an awful lot in the process.

    Reconstructing the BananaPi M5

    I got tired of poking around the BananaPi M5 and decided I wanted to start from scratch. The boot order of the BananaPi means that, in order to format the eMMC and reinstall, I needed some hardware.

    I ordered a USB-to-serial debug cable so that I could connect to the BananaPi (BPi from here on out), interrupt the boot sequence, and use U-Boot to wipe the disk (or at least the MBR). That would force the BPi to use the SD card as the boot drive. From there, I would follow the same steps I did when provisioning the BPi the first time around.

    For reference, with the cable I bought, I was able to connect to the debug console using PuTTY with the following settings:

    Your COM port will probably be different: open the Device Manager to find yours.

    I also had to be a little careful about wiring. When I first hooked it up, I connected the transmit cable (white) to the Tx pin and the receive cable (green) to the Rx pin. That gave me nothing. Then I realized the pins needed to be crossed: the transmit cable (white) goes to the Rx pin, and the receive cable (green) goes to the Tx pin. Once swapped, the terminal lit up.

    I hit the reset button on the BPi and, as soon as I could, hit Ctrl-C. This took me into the U-Boot console. I then followed these steps to erase the first 1000 blocks. From there, I had a “cleanish” BPi. To fully wipe the eMMC, I booted an SD card that had the BPI Ubuntu image and wiped the entire disk:

    dd if=/dev/zero of=/dev/mmcblk0 bs=1M

    Here /dev/mmcblk0 is the device node for the eMMC drive. This writes zeros across the entire eMMC and cleaned it up nicely.
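    To illustrate the invocation safely, here is the same wipe pointed at a loopback file instead of the real device. A count is added because /dev/zero never ends when the output is a regular file; on the actual eMMC, dd simply stops at the end of the device:

```shell
# disk.img stands in for /dev/mmcblk0; 4 MiB instead of the full eMMC
truncate -s 4M disk.img
# same invocation as above, with count added so it stops at 4 MiB
dd if=/dev/zero of=disk.img bs=1M count=4 conv=notrunc 2>/dev/null
```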

    New install, same problem

    After following the steps to install Ubuntu 20.04 on the eMMC, I did an apt upgrade and a do-release-upgrade to get up to 22.04.3. And the SAME network issue reared its ugly head. Back at it with fresh eyes, I determined that something had changed in the network configuration, and the cloud-init setup that had worked for this particular BPi image was no longer valid.

    What were the symptoms? I combed through logs, but the easiest identifier was that, when running networkctl, eth0 was reported as unmanaged.

    So, I did two things. First, I disabled the network configuration in cloud-init by changing /etc/cloud/cloud.cfg.d/99-fake_cloud.cfg to the following:

    datasource_list: [ NoCloud, None ]
    datasource:
      NoCloud:
        fs_label: BPI-BOOT
    network: { config : disable }

    Second, I configured netplan by editing /etc/netplan/50-cloud-init.yaml:

    network:
        ethernets:
            eth0:
                dhcp4: true
                dhcp-identifier: mac
        version: 2

    After that, I ran netplan generate and netplan apply, and the interface showed as managed when executing networkctl. More importantly, after a reboot, the BPi initialized the network and everything was up and running.

    Backup and Scripting

    This will be the second proxy I’ve configured in under 2 months, so, well, now is the time to write the steps down and automate if possible.

    Before I did anything, I created a bash script to copy important files off of the proxy and onto my NAS. This includes:

    • Nginx configuration files
    • Custom rsyslog file for sending logs to loki
    • Grafana Agent configuration file
    • Files for certbot/cloudflare certificate generation
    • The backup script itself.
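    The script itself is little more than a handful of copy commands. A minimal, runnable sketch of the idea (the real paths, like /etc/nginx and the NAS mount point, are swapped for local stand-ins here):

```shell
# Stand-ins: SRC mimics /etc/nginx, DEST mimics the mounted NAS share
SRC=./demo-etc/nginx
DEST=./demo-nas/proxy-backup
mkdir -p "$SRC" "$DEST"
echo 'server { listen 443; }' > "$SRC/proxy.conf"   # placeholder config file

# The backup itself: copy the config tree over to the NAS
cp -r "$SRC" "$DEST/nginx"
```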

    With those files on the NAS, I scripted out restoration of the proxy to the fresh BPi. I will plan a little downtime to make the switch: while the switchover won’t be noticeable to the outside world, some of the internal networking takes a few minutes to swap over, and I would hate to have a streaming show go down in the middle of viewing… I would certainly take flak for that.

  • Terraform Azure DevOps

    As a continuation of my efforts to use Terraform to manage my Azure Active Directory instance, I moved my Azure DevOps instance to a Terraform project, and cleaned a lot up in the process.

    New Project, same pattern

    As I mentioned in my last post, I set up my repository to support multiple Terraform projects. So starting an Azure DevOps Terraform project was as simple as creating a new folder under terraform and setting up the basics.

    As with my Azure AD project, I’m using the S3 backend. For providers, this project only needs the Azure DevOps and Hashicorp Vault providers.
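    For reference, the backend block looks something like this (bucket, key, and region are placeholders, not my actual values):

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state"              # placeholder bucket name
    key    = "azure-devops/terraform.tfstate"  # one state file per project
    region = "us-east-1"                       # placeholder region
  }
}
```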

    The process was very similar to Azure AD: create resources in the project, and use terraform import to bring existing resources under the project’s management. In this case, I tried to be as methodical as possible, following this pattern:

    1. Import a project.
    2. Import the project’s service connections.
    3. Import the project’s variable libraries.
    4. Import the project’s build pipelines.

    This order ensured that I brought objects into the project in a sequence where child objects could reference the ones already imported.

    Handling Secrets

    When I got to service connections and libraries, it occurred to me that I needed to pull secrets out of my HashiCorp Vault instance to make this work smoothly. This is where the Vault provider came in handy: using a data source in Terraform, I could pull secrets out of Vault and have them available to my project.

    Not only does this keep secrets out of the files (which is why I can share them all on GitHub), but it also means that cycling these secrets is as simple as changing the secret in Vault and re-running terraform apply. While I am not yet using this to its fullest extent, I have ambitions to cycle these secrets automatically on a weekly basis.
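    As a sketch of the pattern (the secret path and key names here are hypothetical, and the variable group is an illustration rather than one of my actual libraries):

```hcl
# Pull a secret from Vault; path and keys are made up for illustration
data "vault_generic_secret" "ado_github" {
  path = "secret/azure-devops/github"
}

# Feed it into a variable group without the value ever touching the repo
resource "azuredevops_variable_group" "example" {
  project_id   = azuredevops_project.example.id
  name         = "example-secrets"
  allow_access = true

  variable {
    name         = "github_pat"
    secret_value = data.vault_generic_secret.ado_github.data["pat"]
    is_secret    = true
  }
}
```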

    Github Authentication

    One thing I ran into was the authentication between Azure DevOps and Github. The ADO UI prefers the built-in “Github app” authentication: when you click the Edit button on a pipeline, or manually create a new pipeline in the User Interface, ADO asks Github for “app” permissions and automatically creates a service connection in the project.

    You cannot create this service connection from Terraform, but you can bring it under Terraform’s management as a resource. To do that:

    1. Find the created service connection in your Azure DevOps project.
    2. Create a new azuredevops_serviceendpoint_github resource in your Terraform Project with no authentication block. Here is mine for reference.
    3. Import the service connection to the newly created Terraform Resource.
    4. Make sure description is explicitly set to a blank string: ""

    That last step got me: if you don’t explicitly set that value to blank, the provider tries to set the description to “Managed by Terraform”. In doing so, it attempts to validate the change, and since there is no authentication block, the validation fails.
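    The four steps above boil down to a placeholder resource like this (the endpoint name is whatever the ADO UI shows, and the import ID format should be verified against the provider docs):

    ```hcl
    # Placeholder for the UI-created "Github app" connection. There is no
    # auth_* block, and description must be explicitly empty, or the
    # provider will try (and fail) to update it to "Managed by Terraform".
    resource "azuredevops_serviceendpoint_github" "app" {
      project_id            = azuredevops_project.example.id  # assumed resource
      service_endpoint_name = "my-github-org"                 # name from the ADO UI
      description           = ""
    }
    # terraform import azuredevops_serviceendpoint_github.app <project-id>/<endpoint-id>
    ```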

    What are those?!?

    An interesting side effect to this effort is seeing all the junk that exists in my Azure DevOps projects. I say “junk,” but I mean unused variable libraries and service connections. This triggered my need for digital tidiness, so rather than importing, I deleted.

    I even went so far as to review some of the areas where service connections were passed into a pipeline, but never actually used. I ended up modifying a number of my Azure DevOps pipeline templates (and documenting them) to stop requiring connections that they ultimately were not using.

    It’s not done until it is automated!

    This is all great, but the point of Terraform is to keep my infrastructure in the state I intend. That means automating the application of this project, so I created a template pipeline in my repository that I can easily extend for new projects.

    I have a task on my to-do list to automate the execution of the Terraform plan on a daily basis and notify me if there are unexpected changes. This will serve as an alert that my infrastructure has changed, potentially unintentionally. For now, though, I will execute the Terraform plan/apply manually on a weekly basis.

  • Building a Radar, Part 2 – Another Proxy?

    A continuation of my series on building a non-trivial reference application, this post dives into some of the details around the backend for frontend pattern. See Part 1 for a quick recap.

    Extending the BFF

    In my first post, I outlined the basics of setting up a backend for frontend API in ASP.Net Core. The basic project hosts the SPA as static files, and provides a target for all calls coming from the SPA. This alleviates much of the configuration of the frontend and allows for additional security through server-rendered cookies.

    If we stopped there, the BFF API would contain endpoints for every call our SPA makes, even when it just forwards the call to a backend service. That would be terribly inefficient and a lot of boilerplate coding. Having used Duende’s Identity Server for a while, I knew they offer a BFF library that takes care of proxying calls to backend services, even attaching the access tokens along with the call.

    I was looking for a way to accomplish this without the Duende library, and that is when I came across Kalle Marjokorpi’s post, which describes using YARP as an alternative to the Duende libraries. The basics were pretty easy: install YARP, configure it using the appsettings.json file, and configure the proxy. I went so far as to create an extension method to encapsulate the YARP configuration in one place. Locally, this all worked quite well… locally.
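    A sketch of what such an extension method might look like. The method names and the "ReverseProxy" section name are my assumptions, not the actual code; the underlying calls are YARP's standard registration APIs:

    ```csharp
    // Hypothetical extension methods wrapping YARP registration in one place.
    public static class ProxyExtensions
    {
        public static IServiceCollection AddBffProxy(
            this IServiceCollection services, IConfiguration configuration)
        {
            // Routes and clusters live under a "ReverseProxy" section
            // in appsettings.json.
            services.AddReverseProxy()
                    .LoadFromConfig(configuration.GetSection("ReverseProxy"));
            return services;
        }

        public static WebApplication UseBffProxy(this WebApplication app)
        {
            // Require an authenticated user (via the auth cookie) before
            // proxying any call to a backend service.
            app.MapReverseProxy().RequireAuthorization();
            return app;
        }
    }
    ```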

    What’s going on in production?

    The image built and deployed quite well. I was able to log in and navigate the application, getting data from the backend service.

    However, at some point, the access token encoded into the cookie expired, and all hell broke loose. The cookie was still valid, so the backend for frontend assumed the user was authenticated, but the access token had expired, so proxied calls failed. I had not put a refresh token in place, so I was a bit stuck.

    On my to-do list is adding a refresh token to the cookie. This should allow the backend to refresh the access token before proxying a call to the backend service.
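    With ASP.NET Core's OpenID Connect handler, that roughly amounts to requesting the offline_access scope and saving the tokens into the auth cookie. A sketch under those assumptions (the authority and client ID are placeholders):

    ```csharp
    builder.Services.AddAuthentication(options =>
        {
            options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
            options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
        })
        .AddCookie()
        .AddOpenIdConnect(options =>
        {
            options.Authority = "https://identity.example.com"; // placeholder
            options.ClientId = "bff";                           // placeholder
            options.ResponseType = "code";

            // Ask the identity provider for a refresh token...
            options.Scope.Add("offline_access");

            // ...and persist the access and refresh tokens in the auth
            // cookie, so the BFF can refresh the access token before
            // proxying a call to a backend service.
            options.SaveTokens = true;
        });
    ```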

    What to do now?

    As I mentioned, this work is primarily to use as a reference application for future work. Right now, the application is somewhat trivial. The goal is to build out true microservices for some of the functionality in the application.

    My first target is the change tracking. Right now, the application is detecting changes and storing those changes in the application database. I would like to migrate the storage of that data to a service, and utilize MassTransit and/or NServiceBus to facilitate sending change data to that service. This will help me to define some standards for messaging in the reference architecture.