Category: Open Source

  • Tech Tip – You should probably lock that up…

    I have been running into some odd issues with ArgoCD not updating some of my charts, despite the Git repository having an updated chart version. As it turns out, my configuration, specifically my lack of Chart.lock files, seems to have been contributing to this inconsistency.

    My GitOps Setup

    I have a few repositories that I use as source repositories for Argo. They contain a mix of my own resource definitions, which are raw manifest files, and external Helm charts. The external Helm charts are wrapped in an umbrella chart so that I can add supporting resources (like secrets). My Grafana chart is a great example of this.

    Prior to this, I was not including the Chart.lock file in the repository. This made it easier to update the version in the Chart.yaml file without having to run helm dependency update to refresh the lock file. I have been running this setup for at least a year, and I never really noticed much of a problem until recently. There were a few times where things would not update, but nothing systemic.

    And then it got worse

    More recently, however, I noticed that the updates weren’t taking. I saw the issue with both the Loki and Grafana charts: The version was updated, but Argo was looking at the old version.

    I tried hard refreshes on the Applications in Argo, but nothing seemed to clear that cache. I poked around in the logs and noticed that Argo runs helm dependency build, not helm dependency update. That got me thinking “What’s the difference?”

    As it turns out, build operates using the Chart.lock file if it exists; otherwise, it acts like update. update uses the version constraints in Chart.yaml to pull the latest matching charts.

    Since I was not committing my Chart.lock file, it stands to reason that somewhere in Argo there is a cached copy of a Chart.lock file that was generated by helm dependency build. Even though my Chart.yaml was updated, Argo was using the old lock file.
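
    To make the distinction concrete, here is a minimal sketch of one of my umbrella charts with the two commands side by side. The chart version and repository here are illustrative, not my exact values:

    # Chart.yaml for the umbrella chart (version/repository are illustrative)
    apiVersion: v2
    name: grafana
    version: 1.0.0
    dependencies:
      - name: grafana
        version: 6.58.8
        repository: https://grafana.github.io/helm-charts

    # Resolves the ranges in Chart.yaml and (re)writes Chart.lock:
    helm dependency update

    # Downloads exactly what Chart.lock pins; only without a lock file
    # does it fall back to update-like behavior:
    helm dependency build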

    Testing my hypothesis

    I committed a lock file 😂! Seriously, I ran helm dependency update locally to generate a new lock file for my Loki installation and committed it to the repository. And, even though that’s the only file that changed, like magic, Loki determined it needed an update.

    So I need to lock it up. But why? Well, the lock file exists to ensure that subsequent builds use the exact versions you resolved, the same role that package-lock.json and yarn.lock play for npm and Yarn. And just like npm and Yarn, Helm requires an explicit command to update libraries or dependencies.

    By not committing my lock file, I left open the possibility of getting a different version than I intended or, even worse, a spoofed version of a package. The lock file maintains a level of supply chain security.

    Now what?

    Step 1 is to commit the missing lock files.

    At both work and home, I have PowerShell scripts and pipelines that look for potential updates to external packages and create pull requests to get those updates applied. So step 2 is to alter those scripts to run helm dependency update whenever Chart.yaml changes, which will refresh Chart.lock and alleviate the issue.
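
    My actual scripts are PowerShell, but the added step boils down to something like this shell sketch (the chart path is hypothetical):

    # If the branch touched Chart.yaml, refresh the lock file too
    if git diff --name-only origin/main... | grep -q 'Chart.yaml'; then
        helm dependency update charts/loki
        git add charts/loki/Chart.lock
    fi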

    I am also going to dig into ArgoCD a little bit to see where these generated Chart.lock values could be cached. In testing, the only way around it was to delete the entire ApplicationSet, so I’m thinking that the ApplicationSet controller may be hiding some data.

  • Rollback saved my blog!

    As I was upgrading WordPress from 6.2.2 to 6.3.0, I ran into a spot of trouble. Thankfully, ArgoCD rollback was there to save me.

    It’s a minor upgrade…

    I use the Bitnami WordPress chart as the template source for Argo to deploy my blog to one of my Kubernetes clusters. Usually, an upgrade is literally 1, 2, 3:

    1. Get the latest chart version for the Bitnami WordPress chart. I have a PowerShell script for that.
    2. Commit the change to my ops repo.
    3. Go into ArgoCD and hit Sync.

    That last one caused some problems. Everything seemed to synchronize, but the WordPress pod stalled at the “connect to database” step. I tried restarting the pod, but nothing.

    Now, the old pod was still running. So, rather than mess with it, I used Argo’s rollback functionality to roll the WordPress application back to its previous commit.

    What happened?

    I’m not sure. You can upgrade WordPress from the admin panel, but that comes at a potential cost: if you upgrade the database as part of the WordPress upgrade and then “lose” the pod, you lose the application upgrade but not the database upgrade, and you are left in a weird, mismatched state.

    So, first, I took a backup. Then, I started poking around in trying to run an upgrade. That’s when I ran into this error:

    Unknown command "FLUSHDB"

    I use the WordPress Redis Object Cache to get that little “spring” in my step. It seemed to be failing on the FLUSHDB command. At that point, I was stuck in a state where the application code was upgraded but the database was not. So I restarted the deployment and got back to 6.2.2 for both application code and database.

    Disabling the Redis Cache

    I tried to disable the Redis plugin, and got the same FLUSHDB error. As it turns out, the default Bitnami Redis chart disables these commands, but it would seem that the WordPress plugin still wants them.

    So, I enabled the commands in my Redis instance (a quick change in the values files) and then disabled the Redis Object Cache plugin. After that, I was able to upgrade to WordPress 6.3 through the UI.
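
    If memory serves, the Bitnami Redis chart exposes this through its disableCommands values, which disable FLUSHDB and FLUSHALL by default. My change looked roughly like this (check your chart version for the exact keys):

    # values.yaml for the Bitnami Redis chart
    # An empty list re-enables all commands
    master:
      disableCommands: []
    replica:
      disableCommands: []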

    From THERE, I clicked Sync in ArgoCD, which brought my application pods up to 6.3 to match my database. Then I re-enabled the Redis Plugin.

    Some research ahead

    I am going to check with the maintainers of the Redis Object Cache plugin. If the plugin relies on commands that are disabled by default, that is most likely what caused the issues in my WordPress upgrade.

    For now, however, I can sleep under the warm blanket of Argo roll backs!

  • Publishing Markdown to Confluence

    For what seems like a long time now, I have been using RittmanMead’s md_to_conf project to automatically publish documentation from Markdown files into Confluence Cloud. I am on something of a documentation kick, and what started as a quick fix ultimately turned into a new project.

    It all started with the word “legacy”…

    In publishing our Markdown files, I noticed that the pages all had the legacy editor notice in Confluence Cloud. I wanted to move our pages to the updated editor, and my gut reaction was “well, maybe it is because the md_to_conf project was using the v1 APIs.”

    The RittmanMead project is great, but it has not changed in about a year. Now, granted, once things work, I wouldn’t expect much change.

    So I forked the project and started changing some API calls. The issue is, well, I just did not know when to stop. My object-oriented tendencies took over, and I ended up way past the point of no return:

    • Split converter and client code into separate Python classes to simplify the main module.
    • Converted the entire project to a Python module and built a wheel for simplified installation and execution.
    • Added Flake8 and Black for linting and formatting.
    • Added a GitHub workflow for building and publishing.
    • Added analysis steps to analyze the code in SonarCloud.io.

    It is worth noting that, at the end of the day, the editor value was already supported in RittmanMead/md_to_conf: You just have to set the version argument to 2 when running. I found that out about an hour or so into my journey, but, by that time, I was committed.

    Making a break for it

    At this point, a few things had happened:

    1. My code had diverged greatly from the RittmanMead project.
    2. The functionality had most likely drifted from the purpose for which the original project was written.
    3. I broke support for Confluence Server.
    4. I have some plans for additional features for the module, including the ability to pull labels from the Markdown.

    With that in mind, I had GitHub “break” the fork connection between my repository and the RittmanMead repository.

    Let me be VERY clear: the README will ALWAYS reflect the source of this repository. This project would not be possible without the contributors to the RittmanMead script, and whatever happens in my repository is building on their fine work. But I have designs on a more formal package and process, as well as my own functional roadmap, so a split makes the most sense.

    Introducing md-to-conf

    With that in mind, I give you md-to-conf (PyPI / GitHub)! I will be adding some issues for feature enhancements and work on them as I can, although my first order of business will most likely be some basic testing to make sure I don’t break anything as I work.
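
    Installation should be the usual pip affair, since the package is on PyPI:

    pip install md-to-conf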

    If you have a desire to contribute, please see the contribution guideline and have at it!

  • The Battle of the Package Managers

    I dove back into React over the past few weeks, and was trying to figure out whether to use NPM or Yarn for package management. NPM has always seemed slow, and in the few times I tried Yarn, it seemed much faster. So I thought I would put them through their paces.

    The Projects

    I was able to test on a few different projects, some at home and some at work. All were React 18 with some standard functionality (testing, linting, etc), although I did vary between applications using Vite and component libraries that used webpack. While most of our work projects use NPM, I did want to try with Yarn in that environment, and I ended up moving my home environment to Yarn for the test.

    The TL;DR version of this is: Yarn is great and fast, but I had so much trouble authorizing scoped feeds with a Proget NPM feed that I ditched Yarn at work in favor of our NPM standard. At home, where I utilize public packages, it’s not an issue, so I’ll continue using Yarn there.

    Migrating to Yarn

    NPM to Yarn 1.x is easy: the commands are pretty much fully compatible, node_modules is still used, and the authentication is pretty much the same. Migrating from Yarn 1 to “modern Yarn” is a little more involved.

    However, the migration, overall, was easy, at least at home, where I was not dealing with custom registries. At work, I had to use a .yarnrc.yml file to set up some configuration for NPM registries.
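
    For reference, a scoped-registry configuration in .yarnrc.yml looks roughly like this; the scope, URL, and environment variable here are placeholders, not our actual Proget settings:

    # .yarnrc.yml (scope and registry URL are placeholders)
    npmScopes:
      mycompany:
        npmRegistryServer: "https://proget.example.com/npm/private-npm"
        npmAlwaysAuth: true
        npmAuthToken: "${NPM_AUTH_TOKEN}"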

    Notable Differences

    Modern Yarn has some different syntax but, overall, is pretty close to its predecessor. It’s notably faster, and if you convert to Plug’n’Play (PnP) package management, your node_modules folder goes away.

    The package managers are still “somewhat” interchangeable, save for any npm commands you may have in custom scripts in your package.json file. That said, I would NEVER advise using different package managers on the same project.

    Yarn is much faster than NPM at pretty much every task. Also, the interactive upgrade plugin makes updating packages a breeze. But, I ran into an authentication problem I could not get past.

    The Auth Problem

    We use Proget for our various feeds. It provides a single repository for packages and container images. For our NPM packages, we have scoped them to our company name.

    In configuring Yarn for these scoped repositories, I was never able to get authentication working so that I could add a package from our private feeds. The error message was something to the effect of Invalid authentication (as an anonymous user). All my searching yielded no good solutions, even with a valid auth token hard-coded in the .yarnrc.yml file.

    Now, I have been having some “weirder” issues with NPM authentication as well, so I am wondering if it is machine specific. I have NOT yet tested at home, which I will get to. However, my work projects have other deadlines, and I wasn’t about to burn cycles on getting auth to work. So, at work, I backed out of Yarn for the time being.

    What to do??

    As I mentioned above, some more research is required. I’d like to set up a private feed at home, just to prove that there is either something wrong with my work machine OR something wrong with Yarn connecting to Proget. I’m thinking it’s the former but, until I can get some time to test, I’ll go with what I know.

    That said, if it IS just a local issue, I will make an effort to move to Yarn. I believe the speed improvements alone are worth it, and there are additional benefits that make it a good choice for package management.

  • A small open source contribution never hurt anyone

    Over the weekend, I started using the react-runtime-config library to configure one of my React apps. While it works great, the last activity on the library was over two years ago, so I decided to fork the repository and publish the library under my own scope. This led me down a very eccentric path and opened a number of new doors.

    Step One: Automate

    I wanted to create an automated build and release process for the library. The original repository uses TravisCI for builds, but I much prefer having the builds within GitHub for everyone to see.

    The build and publish processes are pretty straightforward: I implemented a build pipeline that is triggered on any commit and runs a build and test. The publish pipeline is triggered on creating a release in GitHub and runs the same build/test, but it updates the package version to the release version and then publishes the package to npmjs.org under the @spydersoft scope.

    Sure, I could have stopped there… but, well, there was a ball of yarn in the corner I had to play with.

    Step 2: Move to Yarn

    NPM is a beast. I have worked on projects which take 5-10 minutes to run an install. Even my little test UI project took about three minutes to run an npm install and npm build.

    The “yarn v. npm” war is not something I’d like to delve into in this blog. If you want more detail, Ashutosh Krishna recently posted a pretty objective review of both over on Knowledge Hut. Before going all in on Yarn with my new library, I tested Yarn on my small UI project.

    I started by deleting my package-lock.json and node_modules folder. Then, I ran yarn install to get things running. By default, Yarn was using Yarn 1.x, so I still got a node_modules folder, but a yarn.lock file instead of package-lock.json. I modified my CI pipelines, and I was up and running.

    On my build machine, the yarn install command ran in 31 seconds, compared to 38 seconds for npm install on the same project. yarn build took 34 seconds, compared to 2 minutes and 20 seconds for npm build.

    Upgrading to modern Yarn

    In my research, I noted that there are two distinct flavors of Yarn: v1.x, and what the project terms “modern” Yarn, which is v2.x and above. As I mentioned, the yarn command on my machine defaults to the 1.x version. Not wanting to be left behind, I decided to migrate to the new version.

    The documentation was pretty straightforward, and I was able to get everything running. I am NOT yet using the Plug’n’Play installation strategy, but I wanted to take advantage of what the latest versions have to offer.
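
    From memory, the upgrade boiled down to a couple of commands; consult the Yarn migration docs for your setup:

    # Pin the project to modern Yarn
    yarn set version berry

    # Keep node_modules instead of Plug'n'Play (optional)
    yarn config set nodeLinker node-modules

    yarn install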

    First Impressions

    I am not an “everyday npm user,” so I cannot speak to all the edge cases which might occur. Generally, I am limited to the standard install/build/test commands used as part of web development. While I needed to learn some new commands, like yarn add instead of npm install, the transition was not overly difficult.

    In a team environment, however, moving to Yarn would require some coordination to ensure everyone knows when the change is happening and to avoid situations where both package managers are used on the same project.

    With my small UI project converted, I was impressed enough to move the react-runtime-config repository and pipelines to use Yarn 3.

    Step 3: Planned improvements

    I burned my allotted hobby time on steps 1 and 2, so most of what is left is a to-do list for react-runtime-config.

    Documentation

    I am a huge fan of generated docs, so I would like to get some started for my version of the library. The current readmes are great, but I also like to have the technical documentation for those who want it.

    External Configuration Injection

    This one is still very much in the planning stage. My original thought was to allow the library to call an API to get configuration values; however, I do not want to add any unnecessary overhead to the library. It may be best to allow for a hook.

    I would also want to be able to store those values in local storage, but still have them be updatable. This type of configuration will support applications hosted within a “backend for frontend” API, and allow that API to pass configuration values as needed.

    Wrapping up

    I felt like I made some progress over the weekend, if only for my own projects. Moving react-runtime-config allowed me to make some general upgrades to the library (new dependency updates) and sets the stage for some additional work. My renewed interest in all things Node also stimulated a move from Create React App to Vite, which I will document further in an upcoming post.

  • Replacing ISY with Home Assistant – Part 3 – Movin’ In!

    This is the continuation of a short series on transitioning away from the ISY using Home Assistant.

    Having successfully gotten my new Home Assistant instance running, move in day was upon me. I did not have a set plan, but things were pretty simple.

    But first, HACS

    The Home Assistant Community Store (HACS) is a custom component for Home Assistant that enables UI management of other custom components. I have a few integrations that utilize custom components, namely Orbit B-Hyve and GE Home (SmartHQ).

    In my old HA instance, I had simply copied those folders into the custom_components folder under my config directory, but HACS gives me the ability to manage these components from the UI instead of via SSH. I followed the setup and configuration instructions to the letter, and was able to install the above custom components with ease.

    The Easy Stuff

    With HACS installed, I could tackle all the “non-major” integrations. I am classifying “major” as my Insteon and Z-Wave devices, since those require some heavier lifting. There were lots of little integrations with external services that I could pretty quickly set up in the new instance and remove from the old. This included things like:

    • Orbit B-Hyve: I have an irrigation system in the backyard for some potted plants, and I put an Orbit Smart Hose timer on it. The B-Hyve app lets me set the schedule, so I don’t really need to automate that every day, but I do have it set up to enable the rain delay via NodeRED.
    • MyQ: I have a Chamberlain garage door opener that is connected to MyQ, so this gives me the status of the door and the ability to open/close it.
    • GE Home: Not sure that I need to be able to see what my oven is doing, but I can.
    • Rheem Econet: I can monitor my hot water heater and set the temperature. It is mostly interesting to watch usage, and it is currently the only thing that allows me to track its power consumption.
    • Ring: This lets me get some information from my Ring doorbell, including its battery percentage.
    • Synology: The Synology integration lets me monitor all of my drives and cameras. There is not much to control, per se, but it collects a lot of data points that I then scrape into Prometheus for alerting.
    • Unifi: I run the Unifi Controller for my home network, and this integration gives me an entity for all the devices on my network. Again, I do not use much of the control aspect, but I definitely use the data being collected.

    Were these all easy? Definitely. I was able to configure all of these integrations on the new instance and then delete them from the old without conflict.

    Now it’s time for some heavy lifting.

    Z-Wave Migration

    I only have 6 Z-Wave devices, but all were on the Z-Wave network controlled by the ISY. To my knowledge, there is no easy migration. I set up the Z-Wave JS add-on in Home Assistant, selecting my Z-Wave antenna from the USB list. Once that was done, I had to drop each device off of the ISY and then re-add it to the new Home Assistant instance.

    Those steps were basically as follows:

    1. Pick a device to remove.
    2. Select “Remove a Z-Wave Device” from the Z-Wave Menu in the ISY.
    3. While it is waiting, put the device in “enroll/un-enroll” mode. It’s different for every device. On my Mineston plugs, it was ‘click the power button three times quickly.’
    4. Wait for the ISY to detect the removal.
    5. In Home Assistant, under the Z-Wave integration, click Devices. Click the Add Device button, and it will listen for devices.
    6. Put the device in “enroll/un-enroll” mode again.
    7. If prompted, enter the device pin. Some devices require them, some do not. Of my 6 devices, three had pins, three did not.
    8. Home Assistant should detect the device and add it.
    9. Repeat steps 1 through 8 for all your Z-Wave devices.

    As I said, I only have 6 devices, so it was not nearly as painful as it could have been. If you have a lot of Z-Wave devices, this process will take you some time.

    Insteon Migration

    Truthfully, I expected this to be very painful. It wasn’t that bad. I mentioned in my transition planning post that I grabbed an XML list of all my nodes in the ISY. This is my reference for all my Insteon devices.

    I disconnected the ISY from the PLM and connected it to the Raspberry Pi. I added the Insteon integration, and entered the device address (in my case, it showed up as /dev/ttyUSB1). At that point, the Insteon integration went about finding all my devices. They showed up with their device name and address, and the exercise was to look up the address in my reference and rename the device in Home Assistant.

    Since scenes are written to the devices themselves, my scenes came over too. Once I renamed the devices, I could set the scene names to a friendly name.

    NodeRED automation

    After flipping the URL in my new Home Assistant instance to be my old URL, I went into NodeRED to see the damage. I had to make a few changes to get things working:

    1. I had to generate a new long-lived token in Home Assistant, and update NodeRED with the new token.
    2. Since devices changed, I had to touch every action and make sure I had the right devices selected. Not terrible, just a bit tedious.

    ALEXA!

    I use the ISY Portal for integration with Amazon Alexa, and, well, my family has gotten used to doing some things with Alexa. Nabu Casa provides Home Assistant Cloud to fill this gap.

    It is not worth much space here, other than to say their documentation on installation and configuration was spot on, so check it out if you need integration with Amazon Alexa or Google Assistant.

    Success!!!

    My ISY is shut down, and my Home Assistant is running the house, including the Insteon and Z-Wave devices.

    I did notice that, on reboot, the USB addresses of the Z-Wave antenna and the PLM swapped. The solution was to re-configure the Insteon and Z-Wave integrations with the new addresses. Not hard, I just hope it is not a recurring pattern.

    My NodeRED integrations are much more stable. Previously, NodeRED was calling Home Assistant, which was trying to use the ISY to control the devices. This was fraught with errors, mostly because the ISY’s APIs can be dodgy. With Home Assistant calling the shots directly, it’s much more responsive.

    I have to work on some of my scenes and automations for Insteon: while I had previously moved most of my programs out of the ISY and into NodeRED, there were a few stragglers that I need to set up in NodeRED. But that will take about 20 minutes.

    At this point, I’m going to call this venture successful. That said, I will now focus my attention on my LED strips. I have about 6 different LED strips with some form of MiLight/MiBoxer controller. I hate them. So I will be exploring alternatives. Who knows, maybe my exploration will generate another post.

  • Replacing ISY with Home Assistant – Part 2 – A New Home

    This is the continuation of a short series on transitioning away from the ISY using Home Assistant.

    Getting Started, again

    As I mentioned in my previous post, my plan is to run my new instance of Home Assistant in parallel with my old instance and transfer functionality in pieces. This should allow me to minimize downtime, and through the magic of reverse proxy, I will end up with the new instance living at the same URL as the old instance.

    Part of the challenge of getting started is simply getting the Raspberry Pi set up in my desired configuration. I bought an Argon One M.2 case and an M.2 SSD to avoid running Home Assistant on an SD card. However, that requires a bit of prework, particularly for my older Pi.

    New Use, New Case

    I ordered the Argon One M.2 case after a short search. I was looking for a solution that allowed me to mount and connect an M.2 SSD. In this sense, there were far too many options. There are a number of “bare board” solutions, including one from Geekworm and another from Startech.com. The price points were similar, hovering around $25 per board. However, the bare board required me to buy a new case, and most of the “tall” cases required for both the Pi and the bare board ran another $15-$25, so I was looking at around $35-$45 for a new board and case.

    My Amazon searches kept bringing up the Argon One case, so I looked into it. It provided both the case and the SSD support, and added some thermal management and a sleek pinout extension. And, at about $47, the price point was similar to what I was going to spend on a board and new case, so I grabbed that case. Hands on, I was not disappointed: the case is solid and had a good guide for installation packaged with it.

    Always read ahead…

    When it comes to instructions, I tend to read ahead; I wanted to make sure everything was in order before I buttoned up the case. As I read through the waveshare.com guide for getting the Argon One case running with a boot-to-SSD setup, I noticed steps 16 and 17.

    The guide walks through using the SD Card Copier to move the image from the SD card to the SSD. However, I am planning on using the Home Assistant OS image, which means I need to flash that image onto the SSD from my machine. Which means I have to get the SSD connected to my machine…

    Yet another Franken-cable

    I do not have a USB adapter for SSDs, because I do not flash them often enough to care. So how do I use the Raspberry Pi Imager to flash Home Assistant OS onto my SSD? With a Franken-cable!

    I installed the M.2 SSD in the Argon One’s base, but did not put the Pi on it. Using the bare base, I installed the “male to male” USB U adapter in the M.2 base and used a USB extension cable to attach the other end of the U adapter to my PC. It showed up as an Argon SSD, and I was able to flash it with Home Assistant OS.

    Updated Install Steps

    So, putting all this together, I did the following to get Home Assistant running on Raspberry Pi / Argon One SSD:

    1. Install the Raspberry Pi in the Argon One case, but do not attach the base with the SSD.
    2. From this guide, follow steps 1-15 as written. Then shut down the system and take out the SD card.
    3. Install the SSD in the Argon One base, and attach it to your PC using the USB male to male U adapter (included with the Argon) and a USB extension cable.
    4. Write the Home Assistant OS image for the RPi 4 to the SSD using the Raspberry Pi Imager utility.
    5. Put the Argon One case together, and use the U adapter to connect the SSD to the RPi.
    6. Power on the RPi.

    At this point, Home Assistant should boot for the first time and begin its setup process.

    Argon Add-ons

    Now, the Argon case has a built-in fan and fan controller. When using Raspbian, you can install the controller software. Home Assistant OS is different, but thankfully, Adam Outler wrote add-ons to allow Home Assistant to control the Argon fan.

    I followed the instructions, but then realized that I needed to enable I2C in order to get it to work. Adam to the rescue: Adam wrote a HassOS configurator add-on for both I2C and serial support. I installed the I2C configurator and ran it according to its instructions.

    Running on Empty

    My new Home Assistant instance is running. It is not doing anything, but it is running. Next steps will be to start migrating my various integrations from one instance to another.

  • Replacing ISY with Home Assistant – Part 1a – Transition Planning

    This is the continuation of a short series on transitioning away from the ISY using Home Assistant.

    The Experiment

    I mentioned in my previous post that I had ordered some parts to run an experiment with my PowerLinc Modem (PLM). I needed to determine whether I could use my existing PLM (an Insteon 2413S, the serial version) with Home Assistant’s Insteon integration.

    Following this highly-detailed post from P. Lutus, I ordered the following parts:

    • A USB Serial Adapter -> I used this one, but I think any USB to DB9 adapter will work.
    • A DB9 to RJ45 Modular Adapter -> The StarTech.com one seems to be popular in most posts, and it was easy to use.

    While I waited for these parts, I grabbed the Raspberry Pi 4 Model B that I have tasked for this and got to work installing a test copy of Home Assistant on it. Before you even ask, I have had this Raspberry Pi 4 for a few years now, prior to the shortages. It has served many purposes, but its most recent task was as a driver for MagicMirror on my office television. However, since I transitioned to a Banana Pi M5 for my network reverse proxy, well, I had a spare Raspberry Pi 3 Model B hanging around. So I moved MagicMirror to the RPi 3, and I have a spare RPi 4 ready to be my new Home Assistant.

    Parts Arrived!

    Once my parts arrived, I assembled the “Frankencable” necessary to connect my PLM to the Raspberry Pi. It goes PLM -> Standard Cat 5 (or Cat 6) Ethernet Cable -> RJ45 to DB9 Adapter -> Serial to USB Adapter -> USB Port on the Pi.

    With regard to the RJ45 to DB9 Adapter, you do need the pinout. Thankfully, Universal Devices provides one as part of their Serial PLM Kit. You could absolutely order their kit and it would work. But their Kit is $26: I was able to get the Serial to USB adapter for $11.99, and the DB9 to RJ45 for $3.35, and I have enough ethernet cable lying around to wire a new house, so I got away for under $20.

    Before I started, I grabbed an export of all of my Insteon devices from my ISY. Now, P. Lutus’ post indicates using the ISY’s web interface to grab those addresses, but if you are comfortable with Postman or have another favorite program for making Web API calls, you can get an XML document with everything. The curl command is:

    curl http://<isy ip address>/rest/nodes -u "<isy user>:<isy password>"

    I used Postman to make the call and stored the XML in a file for reference.

    With everything plugged in, I added the Insteon integration to my test Home Assistant installation, selected the PLM Serial option, and filled in the device address. That last one took some digging, as I had to figure out which device to use. The easiest way to do it is to plug in the cable, then use dmesg to determine where in /dev the device is mounted. This linuxhint.com post gives you a few options for finding out more about your USB devices on Linux systems.
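
    As a sketch, that search looks like this; the adapter chip and device number will vary with your hardware:

    # Plug in the cable, then look for the new serial device in the kernel log
    dmesg | grep -i tty
    # Expect a line like: "pl2303 converter now attached to ttyUSB1"
    # The integration then wants the full path, e.g. /dev/ttyUSB1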

    At that point, the integration took some time to discover my devices. As mentioned in P. Lutus’ post, it will take some time to discover everything, and the battery-operated devices will not be found automatically. However, all of my switches came in, and each device address was available.

    What about Z-Wave?

    I have a few Z-Wave devices that also use the ISY as a hub. To move off of the ISY completely, I need a Z-Wave alternative. A colleague of mine runs Z-Wave at home to control EVERYTHING, and does so with a Z-Wave antenna and Home Assistant. I put my trust in his work and did not feel the need to experiment with the Z-Wave aspect. I just ordered an antenna.

    With that out of the way, I declared my experiment a success and started working on a transition plan.

    Raspberry Pi à la Mode

    Raspberry Pi is great on its own, but everything is better with some ice cream! In this case, my “ice cream” is a new case and a new M.2 SSD. Why? Home Assistant is chatty, and will be busy running a lot of my Home Automation. And literally every channel I’ve seen on the Home Assistant Discord server says “do not run on an SD card!”

    The case above is an expandable system that not only lets me add an M.2 SATA drive to the Pi, but also adds some thermal management for the platform in general. Sure, it also adds an HDMI daughter board, but considering I’ll be running Home Assistant OS, a dual-screen display of the command line does not seem like a wise display choice.

    With those parts on order, I have started to plan my transition. It should be pretty easy, but there are a number of steps involved. I will be running two instances of Home Assistant in parallel for a time, just to make sure I can still turn the lights on and off with the Amazon Echo… If I don’t, my kids might have a fit.

    1. Get a good inventory of what I have defined in the ISY. I will want a good reference.
    2. Get a new instance of Home Assistant running from the SSD on the RPI 4. I know there will be some extra steps to get this all working, so look for that in the next post.
    3. Check out voice control with Home Assistant Cloud. Before I move everything over, I want to verify the Home Assistant Cloud functionality. I am currently paying for the ISY portal, so I’ll be switching from one to the other for the virtual assistant integration.
    4. Migrate my Z-Wave devices. Why Z-Wave first? I only have a few of those (about 6), and they are running less “vital” things, like lamps and landscape lighting. Getting the Z-Wave devices transferred will allow me to test all of my automation before moving the Insteon devices.
    5. Migrate my Insteon Devices. This should be straight forward, although I’ll have to re-configure any scenes and automations in Node Red.

    Node Red??

    For most of my automation, I am using a separate installation of Node Red and the Home Assistant palette.

    Node Red provides a great drag-and-drop experience for automation, and allows for some pretty unique and complex flows. I started moving away from ISY programs over the last year. The only issue with that setup has been that the ISY’s API connectivity is spotty, meaning Home Assistant sometimes has difficulty talking to the ISY. Since Node Red goes through Home Assistant to get to the ISY, sometimes the programs look like they’ve run correctly when, in fact, they have not.

    I am hoping that removing the ISY will provide a much better experience with this automation.

    Next Steps

    When my parts arrive, I will start into my plan. Look for the next in the series in a week or so!

  • Collecting WCF Telemetry with Application Insights

    I got pulled back into diagnosing performance issues on an old friend, and it led me to some dead code with a chance of resuscitation.

    Trouble with the WCF

    One of our applications has a set of WCF services that serve the client. We had not instrumented them manually for performance metrics, although we found that System.ServiceModel is well decorated for tracing, so adding a trace listener gives us a bevy of information. However, it all lands in *.svclog files (via an XML trace listener), so there is still a lot of work to do in order to find out what’s wrong. A colleague asked if Application Insights could work, and that suggestion started me down a path.
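
    For context, wiring up that trace listener is a standard web.config exercise; a minimal version looks something like this (the output path is illustrative):

    <!-- web.config: route System.ServiceModel traces to an .svclog file -->
    <system.diagnostics>
      <sources>
        <source name="System.ServiceModel"
                switchValue="Information, ActivityTracing"
                propagateActivity="true">
          <listeners>
            <add name="xmlTrace"
                 type="System.Diagnostics.XmlWriterTraceListener"
                 initializeData="C:\logs\WcfTrace.svclog" />
          </listeners>
        </source>
      </sources>
    </system.diagnostics>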

    I found the lab

    I reached out to some contacts at Microsoft, and they pointed me to the Application Insights SDK Labs. In that lab is a library that instruments WCF services and applications. The problem: it hasn’t been updated in about 5 years.

    I figured our application is based on technology about that old, so I supposed I could try it. I followed the instructions, and I started getting telemetry in my Application Insights instance! However, I did notice a few things:

    1. The library as published relies on an old Application Insights version (2.5.0, I believe). The Application Insights libraries are up to 2.21, which means we may not be seeing everything we could.
    2. The library targets .NET Framework 4.5, not the 4.6.2 we are using.

    So I did what any sensible developer does… fork it!

    I’ll do it myself!

    I forked the repository, created a branch, and got to work upgrading. Since this is just for work (right now), I did not bother to set up CI/CD yet, and I took some liberties in assuming we would be running .NET Framework 4.6.2 for a while.

    I had to wade through the repository setup a bit: there is a lot of configuration around NuGet and versioning, and, frankly, I wasn’t looking to figure that out right now. However, I did manage to get the new library built, including updating the references to Microsoft.ApplicationInsights.

    Again, in the interest of time and testing, I manually pushed the package I built to our internal NuGet feed. I’m going to get this pushed to a development environment so that I can get better telemetry than just the few calls that get made in my local environment.

    Next Steps?

    If this were an active project, I would probably have made a little more effort to do things the way they were originally structured and “play in the sandbox.” However, with no contribution rules or guidelines and no visible build pipeline, I am on my own with this one.

    If this turns out to be reasonably useful, I will probably take a look at the rest of the projects in that repository. I may also tap my contacts at Microsoft for some potential “next steps” with this one: while I do not relish the thought of owning a public archive, perhaps I could at least get a copy of their build pipeline so that I can replicate it on my end. Maybe a GitHub Actions pipeline with a push to NuGet?? Who knows.

  • Maturing my Grafana setup

    I may have lost some dashboards and configuration recently, and it got me thinking about how to mature my Grafana setup for better persistence.

    Initial Setup

    When I first got Grafana running, it was based on the packaged Grafana Helm chart. As such, my Grafana instance was using a SQLite database file stored in the persistent volume. This limits me to a single Grafana pod, since the volume is not set up to be shared across pods. Additionally, that SQLite database file is tied to the lifecycle of the claim associated with the volume.

    And, well, at home, this is not a huge deal because of how the lab is set up for persistent volume claims. Since I use the nfs-subdir-external-provisioner, PVCs in my clusters automatically generate a subfolder in my NFS share. When the PVC is deleted, the subdir gets renamed with an archive- prefix, so I can usually dig through the folder to find the old database file.

    However, using the default Azure persistence, Azure Disks are provisioned. When a PVC gets deleted, so too does the disk, or, well, I think it does. I have not had the opportunity to dig into Azure Disk PVC provisioning to understand how that data is handled when PVCs go away. It is sufficient to say that I lost our Grafana settings because of this.

    The New Setup

    The new plan is to utilize MySQL to store my Grafana dashboards and data sources. The configuration seems simple enough: add the appropriate entries to the grafana.ini file. I already know how to expand secrets, so getting the database credentials into the configuration was easy using the grafana.ini section of the Helm chart.
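
    In the chart’s values, that section ends up looking roughly like this; the host and secret paths are illustrative, and $__file{} is Grafana’s mechanism for expanding a mounted secret into a setting:

    # grafana.ini section of the Grafana Helm chart values (host/paths illustrative)
    grafana.ini:
      database:
        type: mysql
        host: grafana-mysql:3306
        name: grafana
        user: $__file{/etc/secrets/mysql/user}
        password: $__file{/etc/secrets/mysql/password}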

    For my home setup, I felt it was OK to run MySQL as another dependent chart for Grafana. Now, from the outside, you should be saying “But Matt, that only moves your persistence issues from the Grafana chart to the MySQL chart!” That is absolutely true. But, well, I have a pretty solid backup plan for those NFS shares, so for a home lab that should be fine. Plus, I figured out how to back up and restore Grafana (see below).

    The real reason is that, for the instance I am running in Azure at work, I want to provision an Azure MySQL instance. This will allow me much better backup retention than inside the cluster, while the configuration at work still matches the configuration at home. Home lab in action!

    Want to check out my home lab configuration? Check out my ops-internal infrastructure repository.

    Backup and Restore for Grafana

    As part of this move, I did not want to lose the settings I had in Grafana. This meant finding a backup/restore procedure that worked. An internet search led me to the Grafana Backup Tool, which provides backup and restore capabilities through Grafana’s APIs.

    The tool is written in Python, so my recent foray into Python coding served me well in getting it running. Once I generated an API key, I was off and running.

    There really isn’t much to it: after configuring the URL and API token, I ran a backup to get a .tar.gz file with my Grafana contents. Did I test the backup? No. It’s the home lab; the worst that could happen is that I have to re-import some dashboards and re-create some others.
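
    The whole exercise was a handful of commands; the exact invocations here are from memory, so defer to the tool’s README:

    # Configuration via environment variables
    export GRAFANA_URL=https://grafana.example.com
    export GRAFANA_TOKEN=<api key>

    # Save everything to a timestamped .tar.gz, then restore it later
    grafana-backup save
    grafana-backup restore _OUTPUT_/<timestamp>.tar.gz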

    After that, I updated my Grafana instance to include the MySQL instance and updated Grafana’s configuration to use the new MySQL service. As expected, all my dashboards and data sources disappeared.

    I ran the restore function using my backup, refreshed Grafana in my browser, and I was back up and running! Testing, schmesting….

    What’s Next?

    I am going to take my newfound learnings and apply them at work:

    1. Get a new MySQL instance provisioned.
    2. Backup Grafana.
    3. Re-configure Grafana to use the new MySQL instance.
    4. Restore Grafana.

    Given the ease with which the home lab migration went, I cannot imagine I will run into much of an issue.