• A new To(y)ol

    I have wanted to pick up a Bambu Labs P1S for a while now. I saved up enough to finally pull the trigger, and after a few days of use, I could not be more pleased with my decision.

    Why Bambu?

    There are literally dozens of 3D printers out there, and choosing can be difficult. I certainly spent a great deal of time mulling over the various options. But, as with most things, the best advice was from people who use them. Luckily, I have not one, but two resources at my disposal.

    An old colleague of mine, Justin, is usually the first to try such things. I joke that I do the same things Justin does, just lagging behind in both time and scale; i.e., Justin does it bigger and better. He and I chat frequently, and his input was invaluable. With regard to 3D printers, the one comment he made still resonates:

    I want to design stuff and print it, not tinker endlessly with the printer itself.

    Justin had an Ender (I do not recall the model) for a bit, but never really messed with it too much. After he picked up a P1P, the design and printing started to flow. He had nothing but good things to say about most things Bambu, save, perhaps, the community… We’ll get to that in a minute.

    Additionally, I discussed different printers with the proprietor of Pittsburgh3DPrints.com. He has a number of different brands, and services them all, and he recommended the Bambu as a great first printer, for many of the same reasons. What reasons?

    1. Ease of Use – From “cut box” to print was honestly 15 minutes. The P1S is extremely user-friendly in terms of getting to printing.
    2. Feature Packed – Sure, the price tag for the full P1S combo is a little higher than most printers. But you get, out of the box, the ability to print most filaments, including PLA/PETG/ABS/ASA, as well as 4-color multi-color prints.
    3. Expandable – Additional AMS units get you up to 16 color prints.
    4. Ecosystem – Bambu has been really trying to get makerworld.com going, and they have had some success. The makerworld tie-in to Bambu Studio makes printing others’ designs quick and easy.

    First Impressions

    As I mentioned above, unboxing was easy with the included Quick Setup guide and their unboxing video. The first thing I printed was the included model for the scraper handle and holder. I do not recall the exact print time, but it was under an hour, and I had my first print!

    The next two prints were some fidget toys for my kids. I can tell you these took 48 minutes each, as both kids were anxiously waiting every single one of those minutes. One feature the P1S has is the ability to capture time lapse videos for your prints. Here is the one for one of the fidget rings.

    Now, I do laugh, because the running joke with those who own a 3D printer is that you spend most of your time printing stuff for the printer, which is highly meta and also seems like a gimmick to buy more filament. The P1S ejects waste out of the back into, well, thin air. Many folks have designed various waste collection solutions, most affectionately known as “poop chutes.” I found one that I liked and set about slicing it for printing.

    Oops…

    This is where I ran into a little issue. I tried to start one of the prints (for the bucket) from the Bambu app. Instead of slicing on my PC and sending it to the printer, I used the published profile. That, well, failed. For whatever reason, the bed temperature was set to 35°C, instead of the 55°C that the Bambu Studio slicer sets.

    I’m not sure if the profile had a cool bed setting or what, but that threw off adhesion of the print to the bed, and I ended up with a large mess. I re-started the print from Bambu Studio and had no problems. Printing the three pieces of the chute took about 9 hours, which represents my longest print to date.

    Next up?

    I have a few things on my list to print. Most center around organizing my office, and I am looking to use the Gridfinity system to keep things neat. My wife asked for a few bases for some signage that she uses for work. This requires some design on my part, so it is a nice challenge for me.

    Both my son and one of our neighbors have expressed some interest in design and printing, so I look forward to passing on some of what I’ve learned to new designers looking to bring their designs to reality.

  • Foyer Upgrade

    Not everything I do is “nerdy.” I enjoy making things with my hands, and my wife has an eye for design and a sadistic love of painting. Combined, we like to spend some time redesigning rooms around the house. We save a ton doing the work ourselves, and for me it is a great break from the keyboard.

    The Idea

    My wife has been eyeing up our foyer for quite some time. Our foyer is a two story entry featuring stairs leading to our second floor, doors to each of our office spaces, and a hallway back to our kitchen and living room area. Off of the foyer is a half bath.

    Foyer View from Front Door

    As part of our kitchen renovation a few years ago, we replaced the vinyl flooring with LVT, and had no plans to change that. However, we wanted to open up the stairway with a different railing and add some applied moldings to the walls.

    The Plan

    Timing on this was a little odd. I was well aware of the amount of work involved, and we did not want a construction zone around Christmas, so we split the work into two phases.

    Phase 1 was removing the half-wall on the steps and replacing it with a newel post and railing. It also included removing the existing carpet, painting the stairs, and installing carpet treads.

    Phase 2 would be the installation of the applied moldings, some new light fixtures, and a lot of painting.

    Phase 1

    Demo took about a day. We ripped out the old carpet, cut the half wall down to the existing riser, and removed the quarter round and baseboards. We also removed the old air return grates, as we ordered some new ones and would not be re-using the old ones.

    The installation of the newel and banister was not incredibly difficult, but took a while to ensure everything was accurately cut and secured. I built out a good deal of blocking at the end of the stairs to ensure the newel was solidly anchored.

    After everything was installed, there were a few days of drywall patching to get everything back to acceptable. The steps took a few coats of floor paint, and a new set of stair treads. They are secured with adhesive, but easily replaceable, which I am sure will be helpful in the future.

    All in all, Phase 1 took about 8 weeks. Mind you, this is 8 weeks of work some evenings and weekends, working around sports and social calendars for the kids and ourselves. If you were working on this full time, you could probably get it done in 5-8 working days. We finished Phase 1 just in time for the new banister to get some holiday decorations!

    Phase 1 Complete

    Phase 2

    Phase 2 was, for me, the more dreaded of the two. The plan was to take applied moldings all the way to the top of the half wall in the loft and create a framed look with the moldings. This required 1″x4″ boards (we used pre-primed MDF) and 11/16″ concave molding, and a lot of it.

    Before we installed the molding, we hired a painter to come in and put a fresh coat on the ceiling and the upper walls of the foyer. While my wife loves to paint, getting up on a ladder to hit the 18′ ceilings and cut in along the walls did not appeal to her, and I agreed.

    The process for the 1″x4″ boards was pretty simple: frame each wall, running horizontal boards the length of the wall, then vertical boards on each end. In between, run boards horizontally to divide the wall into three, then add vertical boards to complete the rectangles. Around the door frames, we used 1″x4″ boards to build out the existing molding, giving the doors a larger look.

    After the 1″x4″ installation, I went back through and installed the 11/16″ concave molding inside of each square, creating a bit of a picture frame look. This was, well, a lot of cutting and placing. In particular, the triangles on the walls along the steps were challenging because of the angles that needed to be cut. I did more geometry in the last few months than I have in a while.

    After everything was buttoned up, my wife took on the task of painting all the installed trim. All it took was time, and a lot of it: I think that task alone took her about 4 days. With the painting complete, I installed the last of the quarter round and she added some design touches.

    The last task was to rent a 16′ step ladder and change out the chandelier. I had done this once before and was not looking forward to doing it again. The combination of wrestling the ladder into the foyer, setting it up, and then climbing up and down a few dozen times makes the process somewhat cumbersome.

    Phase 2 took about 9-10 weeks, with the same caveat that it was not full time work. Full time, you are probably looking at a good 8-10 business days. But the end result was well worth it.

    After Pictures

    Overall, we are happy with the results. The style matches my wife’s office nicely, and transitions well into the kitchen and living area.

  • Cleaning out the cupboard

    I have been spending a little time in my server cabinet downstairs, trying to organize some things. I took what I thought would be a quick step in consolidation. It was not as quick as I had hoped.

    PoE Troubles

    When I got into the cabinet, I realized I had three PoE injectors in there, powering my three Unifi Access Points. Two of them are the UAP-AC-LR, and the third is a UAP-AC-M. My desire was simple: replace the three PoE injectors with a 5-port PoE switch.

    So, I did what I thought would be a pretty simple process:

    1. Order the switch
    2. Using the MAC, assign it a static IP in my current Unifi Gateway DHCP.
    3. Plug in the switch.
    4. Take the cable coming out of each PoE injector and plug it into the switch.

    And that SHOULD be it: devices boot up and I remove the PoE injectors. And, for two of the three devices, it worked fine.

    There’s always one

    One of the UAP-AC-LR endpoints simply would not turn on. I thought maybe it was the cable, so I checked and swapped cables, but nothing changed: one UAP-AC-LR and the UAP-AC-M worked, but the other UAP-AC-LR did not.

    I consulted the Oracle and came to realize that I had an old UAP-AC-LR, which only supports 24V passive PoE, not the 48V standard that my switch provides. The newer UAP-AC-LR and the UAP-AC-M support 802.3at (or at least a legacy 48V protocol), but my oldest UAP-AC-LR simply doesn’t turn on.

    The Choice

    There are two solutions, one more expensive than the other:

    1. Find an indoor PoE converter (INS-3AF-I-G) that can convert the 48V coming from my new switch to the 24V that the device needs.
    2. Upgrade! Buy a U6 Pro to replace my old long range access point.

    I like the latter, as it would give me WiFi 6 support and start my upgrade in that area. However, I’m not ready for the price tag at the moment. I was able to find the converter for about $25, and that includes shipping and tax. So I opted for the more economical route in order to get rid of that last PoE injector.

  • Building a new home for my proxy server

    With my BananaPi up and running again, it’s time to put it back in the server cabinet. But it’s a little bit of a mess down there, and I decided my new 3D modeling skills could help me build a new home for the proxy.

    Find the Model

    When creating a case for something, having a 3D model of the thing you are enclosing becomes crucial. Sometimes you have to model it yourself, but I have found that grabcad.com has a plethora of models available.

    A quick search yielded a great model of the Banana PI. This one is so detailed that it has all of the individual components modeled. All I really needed/wanted was the mounting hole locations and the external ports, but this one is useful for much more. It was so detailed, in fact, that I may have added a little extra detail just because I could.

    General Design

    This case is extremely simple. The Banana Pi M5 (BPi from here on out) serves as my reverse proxy server, so all it really needs is power and a network cable. However, to make the case more useful, I added openings for most of the components. I say most because I fully enclosed the side with the GPIO ports. I never use the GPIO pins on this board, so there was really no need to open those up.

    For this particular case, the BPi will be mounted on the left rack, so I oriented the tabs and the board in such a way that the power/HDMI ports were facing inside the rack, not outside. This also means that the network and USB ports are in the back, which works for my use case.

    A right-mount case with power to the left would put the USB ports at the front of the rack. However, I only have one BPi, and it is going on the left, so I will not be putting that one together.

    Two Tops

    With the basic design in place, I exported the simple top, and got a little creative.

    Cool It Down…

    My BPi kit came with a few heatsinks and a 24mm fan. Considering the proxy is a 24×7 machine, and it is handling a good bit of traffic, I figured it best to keep that fan in place. So I threw a cut-out in for the fan and its mounting screws.

    Light it up!

    On the side where the SD card goes, I closed off everything except the SD card slot itself. This includes the LEDs. As I was going through the design, I thought that it might be nice to be able to peek into the server rack and see the power/activity LEDs. And, I mean, that rack already looks like a weird Christmas tree; what are a few more lights?

    I had to do a bit of research to find the actual name for the little plastic pieces that carry LED light a short distance. They are called “light pipes.” I found some 3mm light pipes on Amazon and thought they would be a good addition to the build.

    The detail of the BPi model I found made this task REALLY easy: I was able to locate the center of the LED and project it onto the case top. A few 3mm holes later, and the top is ready to accept light pipes.

    Put it all together

    I sent my design over to Pittsburgh3DPrints.com, which happens to be about two miles from my house. A couple days later, I had a PLA print of the model. As this is pretty much sitting in my server cabinet all day, PLA is perfect for this print.

    Oddly enough, the trick to this one was being able to turn off the BPi in order to install it. I had previously set up a temporary reverse proxy while I was messing with the BPi, so I routed all the traffic to the temp proxy and then shut down the BPi.

    Some Trimming Required

    As I was designing this case, I went with a best guess for tolerances. I was a little off. The USB and audio jack cutouts needed to be taller to allow the BPi to be installed in the case. Additionally, the stands were too thick and the fan screw holes too small. I corrected these in the drawings; for the printed model, however, I just made them a little larger with an X-Acto blade.

    I heat-set a female M3 insert into the case body. I removed the fan from the old case top and attached it to my new case top. After putting the BPi into place in the case bottom, I attached the fan wires to the GPIO ports to get power. I put the case top on, placing the tabs near the USB ports first. I screwed in an M3 bolt and dropped three light pipes into the case top. They protruded a little, so I cut them to sit flush while still transmitting light.

    Finished Product

    BPi in assembled case
    Case Components

    Overall I am happy with the print. From a design perspective, having a printer here would have alleviated some of the trimming, as I could have test printed some smaller parts before committing.

    I posted this print to Makerworld and Printables.com. Check out the full build there!

  • Updated Site Monitoring

    What seemed like forever ago, I put together a small project for simple site monitoring. My md-to-conf work enhanced my Python skills, and I thought it would be a good time to update the monitoring project.

    Housekeeping!

    First things first: I transferred the repository from my personal GitHub account to the spydersoft-consulting organization. Why? Separation of concerns, mostly. Since I fork open source repositories into my personal account, I do not want the open source projects I am publishing to be mixed in with those forks.

    After that, I went through the process of converting my source to a package with GitHub Actions to build and publish to PyPi.org. I also added testing, formatting, and linting, copying settings and actions from the md_to_conf project.

    Oh, SonarQube

    Adding the linting with SonarQube added a LOT of new warnings and errors. Everything from long lines to bad variable names. Since my build process does not succeed if those types of things are found, I went through the process of fixing all those warnings.

    The variable naming ones were a little difficult, as some of my classes mapped to the configuration file serialization. That meant that I had to change my configuration files as well as the code. I went through a few iterations, as I missed some.

    I also had to add a few tests, just so that the tests and coverage scripts get run. Could I have omitted the tests entirely? Sure. But a few tests to read some sample configuration files never hurt anyone.

    Complete!

    I got everything renamed and building pretty quickly, and added my PyPi.org API token to the repository for the actions. I quickly provisioned a new analysis project in SonarCloud and merged everything into main. I then created a new GitHub release, which triggered a publish to PyPi.org.

    Setting up the Raspberry Pi

    The last step was to get rid of the code on the Raspberry Pi, and use pip to install the package. This was relatively easy, with a few caveats.

    1. Use pip3 install instead of pip – I forgot the old Pi has both Python 2 and 3 installed (a quick sketch follows this list).
    2. Fix the config files – I had to change my configuration file to reflect the variable name changes.
    3. Change the cron job – This one needs a little more explanation.
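
    For the first item, the install itself boils down to a couple of commands. This is only a sketch, and it assumes the package is published under the same name as its executable, pi-monitor:

    sudo pip3 install pi-monitor   # pip3, not pip, so the package lands under Python 3
    which pi-monitor               # confirm where pip placed the entry point (here, /usr/local/bin)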

    For the last one, when changing the cron job, I had to point specifically to /usr/local/bin/pi-monitor, since that’s where pip installed it. My new cron job looks like this:

    SHELL=/bin/bash
    
    */5 * * * * pi cd /home/pi && /usr/local/bin/pi-monitor -c monitor.config.json 2>&1 | /usr/bin/logger -t PIMONITOR

    That runs the application and logs everything to syslog with the PIMONITOR tag.

    Did this take longer than I expected? Yeah, a little. Is it nice to have another open source project in my portfolio? Absolutely. Check it out if you are interested!

  • An epic journey…

    I got all the things I needed to diagnose my BananaPi M5 issues, and I took a very long, winding road to a very simple solution. But I learned an awful lot in the process.

    Reconstructing the BananaPi M5

    I got tired of poking around the BananaPi M5 and decided I wanted to start from scratch. The boot order of the BananaPi means that, in order to format the eMMC and start over, I needed some hardware.

    I ordered a USB-to-serial debug cable so that I could connect to the BananaPi (BPi from here on out), interrupt the boot sequence, and use U-Boot to wipe the disk (or at least the MBR). That would force the BPi to use the SD card as its boot drive. From there, I would follow the same steps I did when provisioning the BPi the first time around.

    For reference, with the cable I bought, I was able to connect to the debug console using PuTTY.
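
    I used the GUI, but the command-line equivalent would look something like the line below. The 115200 baud, 8-N-1 settings are an assumption on my part (the common defaults for these serial consoles), so double-check them against the board’s documentation:

    putty.exe -serial COM3 -sercfg 115200,8,n,1,N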

    Your COM port will probably be different: open the Device Manager to find yours.

    I also had to be a little careful about wiring: When I first hooked it up, I connected the transmit cable (white) to the Tx pin, and the receive cable (green) to the Rx pin. That gave me nothing. Then I realized that I had to swap the pins: The transmit cable (white) goes to the Rx pin, and the receive cable (green) goes to the Tx pin. Once swapped, the terminal lit up.

    I hit the reset button on the BPi and, as soon as I could, hit Ctrl-C. This took me into the U-Boot console. I then followed these steps to erase the first 1000 blocks. From there, I had a “cleanish” BPi. To fully wipe the eMMC, I booted an SD card that had the BPi Ubuntu image, and wiped the entire disk:

    dd if=/dev/zero of=/dev/mmcblk0 bs=1M

    Here, /dev/mmcblk0 is the device node for the eMMC drive. This wrote all zeros to the eMMC and cleaned it up nicely.

    New install, same problem

    After following the steps to install Ubuntu 20.04 to the eMMC, I did an apt upgrade and a do-release-upgrade to get up to 22.04.3. And the SAME network issue reared its ugly head. Back at it with fresh eyes, I determined that something had changed in the network configuration, and the cloud-init setup that had worked for this particular BPi image was no longer valid.

    What were the symptoms? I combed through logs, but the easiest identifier was, when running networkctl, eth0 was reporting as unmanaged.

    So, I did two things. First, I disabled the network configuration in cloud-init by changing /etc/cloud/cloud.cfg.d/99-fake_cloud.cfg to the following:

    datasource_list: [ NoCloud, None ]
    datasource:
      NoCloud:
        fs_label: BPI-BOOT
    network: {config: disabled}

    Second, I configured netplan by editing /etc/netplan/50-cloud-init.yaml:

    network:
        ethernets:
            eth0:
                dhcp4: true
                dhcp-identifier: mac
        version: 2

    After that, I ran netplan generate and netplan apply, and the interface now showed as managed when executing networkctl. More importantly, after a reboot, the BPi initialized the network and everything is up and running.
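
    For completeness, those commands (run with sudo) are:

    sudo netplan generate   # generate backend configuration from the YAML
    sudo netplan apply      # apply it to the running system
    networkctl              # eth0 should now report as managed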

    Backup and Scripting

    This will be the second proxy I’ve configured in under 2 months, so, well, now is the time to write the steps down and automate if possible.

    Before I did anything, I created a bash script to copy important files off of the proxy and onto my NAS. This includes:

    • Nginx configuration files
    • Custom rsyslog file for sending logs to Loki
    • Grafana Agent configuration file
    • Files for certbot/cloudflare certificate generation
    • The backup script itself.
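
    A minimal sketch of that backup script is below. The exact paths are assumptions on my part (adjust them to wherever these files actually live), and the NAS mount point is a placeholder:

    #!/bin/bash
    # Sketch only: copy the proxy's important config to a dated folder on the NAS.
    BACKUP_DIR=/mnt/nas/proxy-backup/$(date +%Y-%m-%d)
    mkdir -p "$BACKUP_DIR"

    cp -r /etc/nginx "$BACKUP_DIR/nginx"                # Nginx configuration files
    cp /etc/rsyslog.d/*loki*.conf "$BACKUP_DIR/"        # custom rsyslog file for Loki
    cp /etc/grafana-agent.yaml "$BACKUP_DIR/"           # Grafana Agent configuration
    cp -r /etc/letsencrypt "$BACKUP_DIR/letsencrypt"    # certbot/cloudflare certificate files
    cp "$0" "$BACKUP_DIR/"                              # the backup script itself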

    With those files on the NAS, I scripted out restoration of the proxy to the fresh BPi. I will plan a little downtime to make the switch: while the switchover won’t be noticeable to the outside world, some of the internal networking takes a few minutes to swap over, and I would hate to have a streaming show go down in the middle of viewing…. I would certainly take flak for that.

  • Terraform Azure DevOps

    As a continuation of my efforts to use Terraform to manage my Azure Active Directory instance, I moved my Azure DevOps instance to a Terraform project, and cleaned a lot up in the process.

    New Project, same pattern

    As I mentioned in my last post, I set up my repository to support multiple Terraform projects. So starting an Azure DevOps Terraform project was as simple as creating a new folder in the terraform folder and setting up the basics.

    As with my Azure AD project, I’m using the S3 backend. For providers, this project only needs the Azure DevOps and Hashicorp Vault providers.

    The process was very similar to Azure AD: create resources in the project, and use terraform import to bring existing resources under the project’s management. In this case, I tried to be as methodical as possible, following this pattern:

    1. Import a project.
    2. Import the project’s service connections.
    3. Import the project’s variable libraries.
    4. Import the project’s build pipelines.

    This order ensured that I was bringing objects into the project in a sequence where I could then reference them when importing their child resources.
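
    For reference, the imports themselves are plain terraform import calls. The resource names below are hypothetical, and the import ID formats are from memory (each resource’s documentation in the azuredevops provider lists the exact format), so treat this as a sketch:

    terraform import azuredevops_project.home_lab <project-id>
    terraform import azuredevops_serviceendpoint_azurerm.azure <project-id>/<service-endpoint-id>
    terraform import azuredevops_variable_group.common <project-id>/<variable-group-id>
    terraform import azuredevops_build_definition.site_build <project-id>/<build-definition-id>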

    Handling Secrets

    When I got to service connections and libraries, it occurred to me that I needed to pull secrets out of my Hashicorp Vault instance to make this work smoothly. This is where the Vault provider came in handy: using the data resource type in Terraform, I could pull secrets out of my key vault and have them available for my project.

    Not only does this keep secrets out of the files (which is why I can share them all on GitHub), but it also means that cycling these secrets is as simple as changing the secret in Vault and re-running the Terraform apply. While I am not yet using this to its fullest extent, I have some ambitions to cycle these secrets automatically on a weekly basis.

    GitHub Authentication

    One thing I ran into was the authentication between Azure DevOps and GitHub. The ADO UI likes to use the built-in “GitHub App” authentication. Meaning, when you click on the Edit button in a pipeline, ADO defaults to asking GitHub for “app” permissions. This also happens if you manually create a new pipeline in the user interface. This automatically creates a service connection in the project.

    You cannot create this service connection in a Terraform project, but you can let Terraform see it as a managed resource. To do that:

    1. Find the created service connection in your Azure DevOps project.
    2. Create a new azuredevops_serviceendpoint_github resource in your Terraform Project with no authentication block. Here is mine for reference.
    3. Import the service connection to the newly created Terraform Resource.
    4. Make sure description is explicitly set to a blank string: ""

    That last step got me: if you don’t explicitly set that value to blank, the provider tries to set the description to “Managed by Terraform”. When doing that, it attempts to validate the change, and since we have no authentication block, it fails.

    What are those?!?

    An interesting side effect to this effort is seeing all the junk that exists in my Azure DevOps projects. I say “junk,” but I mean unused variable libraries and service connections. This triggered my need for digital tidiness, so rather than importing, I deleted.

    I even went so far as to review some of the areas where service connections were passed into a pipeline, but never actually used. I ended up modifying a number of my Azure DevOps pipeline templates (and documenting them) to stop requiring connections that they ultimately were not using.

    It’s not done until it is automated!

    This is all great, but the point of Terraform is to keep my infrastructure in the state I intend it to be. This means automating the application of this project. I created a template pipeline in my repository that I could easily extend for new projects.

    I have a task on my to-do list to automate the execution of the Terraform plan on a daily basis and notify me if there are unexpected changes. This will serve as an alert that my infrastructure has changed, potentially unintentionally. For now, though, I will execute the Terraform plan/apply manually on a weekly basis.

  • Building a Radar, Part 2 – Another Proxy?

    A continuation of my series on building a non-trivial reference application, this post dives into some of the details around the backend for frontend pattern. See Part 1 for a quick recap.

    Extending the BFF

    In my first post, I outlined the basics of setting up a backend for frontend API in ASP.NET Core. The basic project hosts the SPA as static files and provides a target for all calls coming from the SPA. This alleviates much of the configuration of the frontend and allows for additional security through server-rendered cookies.

    If we stopped there, then the BFF API would need an endpoint for every call our SPA makes, even if it just passed the call along to a backend service. That would be terribly inefficient and a lot of boilerplate coding. Having used Duende’s Identity Server for a while, I knew they have a BFF library that takes care of proxying calls to backend services, even attaching the access tokens along the way.

    I was looking for a way to accomplish this without the Duende library, and that is when I came across Kalle Marjokorpi’s post, which describes using YARP as an alternative to the Duende libraries. The basics were pretty easy: install YARP, configure it using the appsettings.json file, and configure the proxy. I went so far as to create an extension method to encapsulate the YARP configuration in one place. Locally, this all worked quite well… locally.

    What’s going on in production?

    The image built and deployed quite well. I was able to log in and navigate the application, getting data from the backend service.

    However, at some point, the access token that was encoded into the cookie expired, and that caused all hell to break loose. The cookie was still good, so the backend for frontend assumed the user was authenticated. But the access token was expired, so proxied calls failed. I have not put a refresh token in place, so I’m a bit stuck at the moment.

    On my to-do list is adding a refresh token to the cookie. This should give the backend the ability to refresh the access token before proxying a call to the backend service.

    What to do now?

    As I mentioned, this work is primarily to use as a reference application for future work. Right now, the application is somewhat trivial. The goal is to build out true microservices for some of the functionality in the application.

    My first target is the change tracking. Right now, the application is detecting changes and storing those changes in the application database. I would like to migrate the storage of that data to a service, and utilize MassTransit and/or NServiceBus to facilitate sending change data to that service. This will help me to define some standards for messaging in the reference architecture.

  • Terraform Azure AD

    Over the last week or so, I realized that while I bang the drum of infrastructure as code very loudly, I have not been practicing it at home. I took some steps to remedy that over the weekend.

    The Goal

    I have a fairly meager home presence in Azure. Primarily, I use a free version of Azure Active Directory (now Entra ID) to allow for some single sign-on capabilities in external applications like Grafana, MinIO, and ArgoCD. The setup for this differs greatly among the applications, but common to all of these is the need to create applications in Azure AD.

    My goal is simple: automate provisioning of this Azure AD account so that I can manage these applications in code. My stretch goal was to get any secrets created as part of this process into my Hashicorp Vault instance.

    Getting Started

    The plan, in one word, is Terraform. Terraform has a number of providers, including both the azuread and vault providers. Additionally, since I have some experience in Terraform, I figured it would be a quick trip.

    I started by installing all the necessary tools (specifically, the Vault CLI, the Azure CLI, and the Terraform CLI) in my WSL instance of Ubuntu. Why there instead of PowerShell? Most of the tutorials and such lean towards Bash syntax, so it was a bit easier to roll through them without having to convert Bash into PowerShell.

    I used my ops-automation repository as the source for this, and started by creating a new folder structure to hold my projects. As I anticipated more Terraform projects to come up, I created a base terraform directory, and then an azuread directory under that.

    Picking a Backend

    Terraform relies on state storage. They use the term backend to describe this storage. By default, Terraform uses a local file backend provider. This is great for development, but knowing that I wanted to get things running in Azure DevOps immediately, I decided that I should configure a backend that I can use from my machine as well as from my pipelines.

    As I have been using MinIO pretty heavily for storage, it made the most sense to configure MinIO as the backend, using the S3 backend to do this. It was “fairly” straightforward, as soon as I turned off all the nonsense:

    terraform {
      backend "s3" {
        skip_requesting_account_id  = true
        skip_credentials_validation = true
        skip_metadata_api_check     = true
        skip_region_validation      = true
        use_path_style              = true
        bucket                      = "terraform"
        key                         = "azuread/terraform.tfstate"
        region                      = "us-east-1"
      }
    }

    There are some obvious things missing: I am setting environment variables for values I would like to treat as secret, or, at least not public.

    • MinIO Endpoint -> AWS_ENDPOINT_URL_S3 environment variable instead of endpoints.s3
    • Access Key -> AWS_ACCESS_KEY_ID environment variable instead of access_key
    • Secret Key -> AWS_SECRET_ACCESS_KEY environment variable instead of secret_key
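
    Concretely, before running Terraform locally I export something like the following (the values shown are placeholders; the real ones stay out of the repository):

    export AWS_ENDPOINT_URL_S3="https://minio.example.com"   # MinIO endpoint
    export AWS_ACCESS_KEY_ID="<minio-access-key>"
    export AWS_SECRET_ACCESS_KEY="<minio-secret-key>"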

    These settings allow me to use the same storage for both my local machine and the Azure Pipeline.

    Configuring Azure AD

    Likewise, I needed to configure the azuread provider. I followed the steps in the documentation, choosing the environment variable route again. I configured a service principal in Azure and gave it the necessary access to manage my directory.
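
    For the curious, the azuread provider can pick up the service principal credentials from the standard ARM_* environment variables, so locally this amounts to a few exports (placeholder values shown):

    export ARM_TENANT_ID="<tenant-id>"
    export ARM_CLIENT_ID="<service-principal-client-id>"
    export ARM_CLIENT_SECRET="<service-principal-secret>"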

    Using environment variables allows me to set these from variables in Azure DevOps, meaning my secrets are stored in ADO (or Vault, or both…. more on that in another post).

    Importing Existing Resources

    I have a few resources that already exist in my Azure AD instance, enough that I didn’t want to re-create them and then re-configure everything that uses them. Luckily, most Terraform providers allow for importing existing resources, and most of the resources I have support this feature.

    Importing is fairly simple: you create the simplest definition of a resource that you can, and then run a terraform import variant to import that resource into your project’s state. Importing an Azure AD Application, for example, looks like this:

    terraform import azuread_application.myapp /applications/<object-id>

    It is worth noting that the provider is looking for the object-id, not the client ID. The provider documentation has information as to which ID each resource uses for import.

    More importantly, Applications and Service Principals are different resources in Azure AD, even though they map pretty much one-to-one. To import a Service Principal, you run a similar command:

    terraform import azuread_service_principal.myprincipal <sp-id>

    But where is the service principal’s ID? I had to go to the Azure CLI to get that info:

    az ad sp list --display-name myappname

    From this JSON, I grabbed the id value and used that to import.
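
    If you want to skip the copy-and-paste, something like this also works; the --query expression is just my assumption about where the id sits in the output:

    SP_ID=$(az ad sp list --display-name myappname --query "[0].id" --output tsv)
    terraform import azuread_service_principal.myprincipal "$SP_ID"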

    From here, I ran a terraform plan to see what was going to be changed. I took a look at the differences, and even added some properties to the terraform files to maintain consistency between the app and the existing state. I ended up with a solid project full of Terraform files that reflected my current state.

    Automating with Azure DevOps

    There are a few extensions available to add Terraform tasks to Azure DevOps. Sadly, most rely on “standard” configurations for authentication against the backends. Since I’m using an S3 compatible backend, but not S3, I had difficulty getting those extensions to function correctly.

    As the Terraform CLI is installed on my build agent, though, I only needed to run my commands from a script. I created an ADO template pipeline (planning for expansion) and extended it to create the pipeline.
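
    Stripped of the pipeline plumbing, the script is essentially the standard three commands. This is a rough sketch rather than the actual script; the backend and provider credentials all arrive via the environment variables the pipeline sets:

    terraform init -input=false
    terraform plan -input=false -out=tfplan
    terraform apply -input=false tfplan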

    All of the environment variables in the template are reflected in the variable groups defined in the extension. If a variable is not defined, it’s simply blank. That’s why you will see the AZDO_ environment variables in the template, but not in the variable groups for the Azure AD provisioning.

    Stretch: Adding Hashicorp Vault

    Adding HC Vault support was somewhat trivial, but another exercise in authentication. I wanted to use AppRole authentication for this, so I followed the vault provider’s instructions and added additional configuration to my provider. Note that this setup requires additional variables that now need to be set whenever I do a plan or import.

    Once that was done, I had access to read and write values in Vault. I started by storing my application passwords in a new key vault. This allows me to have application passwords that rotate weekly, which is a nice security feature. Unfortunately, the rest of my infrastructure isn’t quite set up to handle such change. At least, not yet.

  • Automating Grafana Backups

    After a few data loss events, I took the time to automate my Grafana backups.

    A bit of instability

    It has been almost a year since I moved to a MySQL backend for Grafana. In that year, I’ve gotten a corrupted MySQL database twice now, forcing me to restore from a backup. I’m not sure if it is due to my setup or bad luck, but twice in less than a year is too much.

    In my previous post, I mentioned the Grafana backup utility as a way to preserve this data. My short-sightedness prevented me from automating those backups, however, so I suffered some data loss. After the most recent event, I revisited the backup tool.

    Keep your friends close…

    My first thought was to simply write a quick Azure DevOps pipeline to pull the tool down, run a backup, and copy it to my SAN. I would also have had to include some scripting to clean up old backups.

    As I read through the grafana-backup-tool documentation, though, I came across examples of running the tool as a Kubernetes Job via a CronJob. This presented a unique opportunity: configure the backup job as part of the Helm chart.

    What would that look like? Well, I do not install any external charts directly. They are configured as dependencies for charts of my own. Now, usually, that just means a simple values file that sets the properties on the dependency. In the case of Grafana, though, I’ve already used this functionality to add two dependent charts (Grafana and MySQL) to create one larger application.

    This setup also allows me to add additional templates to the Helm chart to create my own resources. I added two new resources to this chart:

    1. grafana-backup-cron – A definition for the cronjob, using the ysde/grafana-backup-tool image.
    2. grafana-backup-secret-es – An ExternalSecret definition to pull secrets from Hashicorp Vault and create a Secret for the job.

    Since this is all built as part of the Grafana application, the secrets for Grafana were already available. I went so far as to add a section in the values file for the backup. This allowed me to enable/disable the backup and update the image tag easily.

    Where to store it?

    The other enhancement I noticed in the backup tool was the ability to store files in S3 compatible storage. In fact, their example showed how to connect to a MinIO instance. As fate would have it, I have a MinIO instance running on my SAN already.

    So I configured a new bucket in my MinIO instance, added a new access key, and configured those secrets in Vault. After committing those changes and synchronizing in ArgoCD, the new resources were there and ready.

    Can I test it?

    Yes I can. Google, once again, pointed me to a way to create a Job from a CronJob:

    kubectl create job --from=cronjob/<cronjob-name> <job-name> -n <namespace-name>

    I ran the above command to create a test job. And, voilà, I have backup files in MinIO!
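
    Checking on the job is just as quick with the usual kubectl commands:

    kubectl get jobs -n <namespace-name>              # confirm the test job completed
    kubectl logs job/<job-name> -n <namespace-name>   # review the backup tool's output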

    Cleaning up

    Unfortunately, there doesn’t seem to be a retention setting in the backup tool. It looks like I’m going to have to write some code to clean up my Grafana backups bucket, especially since I have daily backups scheduled. Either that, or look at this issue and see if I can add it to the tool. Maybe I’ll dust off my Python skills…