Category: Technology

  • Stacks on Stacks!

    I have Redis installed at home as a simple caching tool. Redis Stack adds on to Redis OSS with some new features that I am eager to start learning. But, well, I have to install it first.

    Charting a Course

    I have been using the Bitnami Redis chart to install Redis on my home K8s cluster. The chart itself provides the necessary configuration flexibility for replicas and security. However, Bitnami does not maintain a similar chart for redis-stack or redis-stack-server.

    There are some published Helm charts from Redis; however, they lack the built-in flexibility and configurability that the Bitnami charts provide. The Bitnami chart is so flexible that I wondered whether it was possible to use it with the redis-stack-server image. A quick search showed I was not the only person with this idea.

    New Image

    Gerk Elznik posted last year about deploying Redis Stack using Bitnami’s Redis chart. Based on this post, I attempted to customize the Bitnami chart to use the redis-stack-server image. Gerk’s post indicated that a new script was needed to successfully start the image. That seemed like an awful lot of work, and, well, I really didn’t want to do that.

    In the comments of Gerk’s post, Kamal Raj posted a link to his version of the Bitnami Redis Helm chart, modified for Redis Stack. This seemed closer to what I wanted: a few tweaks and off to the races.

    In reviewing Kamal’s changes, I noticed that everything he changed could be overridden in the values.yaml file. So I made a few changes to my values file:

    1. Added repository and tag in the redis.image section, pointing the chart to the redis-stack-server image.
    2. Updated the command for both redis.master and redis.replica to reflect Kamal’s changes (the overrides are sketched roughly after this list).
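
    For reference, those overrides looked roughly like this in my values file. The tag is illustrative, and the exact nesting depends on how the Bitnami chart is named as a dependency in your umbrella chart:

    redis:
      image:
        repository: redis/redis-stack-server
        tag: "7.2.0-v6"    # illustrative tag, not necessarily the one I used
      # master.command and replica.command also carried overrides at first,
      # but, as described below, I ended up removing them entirely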

    I ran a quick template, and everything looked to generate correctly, so I committed the changes and let ArgoCD take over.

    Nope….

    ArgoCD synchronized the StatefulSet as expected, but the pod didn’t start. The error in the K8s events was “command not found.” So I started digging into the “official” Helm chart for the redis-stack-server image.

    That chart is very simple, which made it pretty easy to see that there was no special command for startup. So I started to wonder whether I really needed to override the command at all, or could simply use the redis-stack-server image in place of the default one.

    So I commented out the custom overrides to the command settings for both master and replica, and committed those changes. Lo and behold, ArgoCD synced and the pod started up great!

    What Matters Is, Does It Work?

    Excuse me for stealing from Celebrity Jeopardy, but “Gussy it up however you want, Trebek, what matters is, does it work?” For that, I needed a Redis client.

    Up to this point, most of my interactions with Redis have simply been through the redis-cli that’s installed on the image. I use kubectl to get into the pod and run redis-cli in the pod to see what keys are in the instance.
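
    That usually looks something like the following; the namespace and pod name are from my setup and may differ, and you may need to pass -a with your Redis password:

    kubectl -n redis exec -it redis-master-0 -- redis-cli
    # then, inside the CLI, poke around with something like:
    #   SCAN 0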

    Sure, that works fine, but as I start to dig into Redis a bit more, I need a client that lets me visualize the database a little better. As I was researching Redis Stack, I came across RedisInsight, and thought it was worth a shot.

    After installing RedisInsight, I set up port forwarding from my local machine into the Kubernetes service. This lets me connect directly to the Redis instance without creating a long-term NodePort service or some other forwarding mechanism. Since I only need access to the Redis server from within the cluster, this helps keep it locked down.
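
    The forward itself is something along these lines, with the service name and namespace being my guesses at a typical Bitnami release:

    # forward local port 6379 to the in-cluster Redis service
    kubectl -n redis port-forward svc/redis-master 6379:6379
    # RedisInsight then connects to localhost:6379 while the forward is running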

    I got connected, and the instance showed up. But no modules….

    More Hacking Required

    As it turns out, the Bitnami Redis chart changes the startup command to a script within the chart. This enables some of that flexibility, but comes at the cost of bypassing the entrypoint scripts baked into the image, specifically the redis-stack-server entrypoint script, which loads the modules via the command line.

    Now what? Well, there’s more than one way to skin a cat (to use an arcane and cruel-sounding metaphor). Reading through the Redis documentation, I learned that modules can also be loaded through the configuration file. Since the Bitnami Redis chart lets you add to the configuration using the values.yaml file, that’s where I ended up. I added the following to my values.yaml file:

    master:
        configuration: | 
          loadmodule /opt/redis-stack/lib/redisearch.so MAXSEARCHRESULTS 10000 MAXAGGREGATERESULTS 10000
          loadmodule /opt/redis-stack/lib/redistimeseries.so
          loadmodule /opt/redis-stack/lib/rejson.so
          loadmodule /opt/redis-stack/lib/redisbloom.so
    

    With those changes, I now see the appropriate modules running.
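
    A quick way to verify, again with an illustrative namespace and pod name (add -a <password> if auth is enabled):

    kubectl -n redis exec -it redis-master-0 -- redis-cli MODULE LIST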

    Lots Left To Do

    As I mentioned, this seems pretty “hacky” to me. Right now, I have it running, but only in standalone mode. I haven’t had the need to run a full Redis cluster, but I’m SURE that some additional configuration will be required to apply this to running a Redis Stack cluster. Additionally, I could not get the Redis Gears module loaded, but I did get Search, JSON, Time Series, and Bloom installed.

    For now, that’s all I need. Perhaps if I find I need Gears, or I want to run a Redis cluster, I’ll have to revisit this. But, for now, it works. The full configuration can be found in my non-production infrastructure repository. I’m sure I’ll move to production, but everything that happens here happens in non-production first, so keep tabs on that if you’d like to know more.

  • A little maintenance task

    As I mentioned in my previous post, I ran into what I believe are some GPU issues with my Dell 7510. So, like any self-respecting nerd (is that an oxymoron?), I ordered a replacement part from parts-people.com and got to work.

    Prep Work

    As with most things these days, you can almost always find some instructions on the internet. I found a tutorial from UFixTek on YouTube that covers a full cleanup and re-paste. The only additional step in my repair was to replace the GPU with a new one.

    With that, I set up a workstation and got to it. Over the past few years I have acquired some tools that make this type of work much easier:

    • Precision Screwdriver Set – I have an older version of this Husky set. I cannot tell you how many times it’s saved me when doing small electronics work.
    • Pry Tool Set – I ordered this set about 4 years ago, but it hasn’t changed much. The rollup case is nice.
    • Small Slotted screwdriver – The Husky set is great, but for screws that are deep set, sometimes you need a standard screwdriver. I honestly don’t remember where I got mine, it’s the red handled one in the photos below.
    • X-Acto knife – Always handy.
    • Cutting/Work mat – I currently use a Fiskars cutting mat from Michaels. It protects the desktop and the piece, and the grid pattern is nice for parts organization.
    • Compressed Air Duster – I LOVE this thing. I use it for any number of electronics cleaning tasks, including my keyboard. It’s also powerful enough to use as an inflator tool for small inflatables.
    • Rubbing alcohol for cleaning
    • Lint-free paper towels
    • New GPU
    • Thermal paste

    Teardown

    Following the tutorial, I started disassembly, being careful to organize the screws as I went along. I used a few small Post-It tags to label the screws in case I forgot. I removed the M.2 drive, although, in retrospect, I do not think it was necessary.

    Battery, hard drive, and cover removed

    Laptops have pretty tight tolerances and a number of ribbon cables to connect everything together. The tutorial breaks down where they are, but it’s important to keep those in mind as you tear down. If you miss disconnecting one, you run the risk of tearing it.

    Keyboard and palm rest removed

    Clean up!

    Once I got down to the heatsink assembly, I removed it (and the fans) from the laptop. I gave it the same scrubbing as shown in the tutorial, except I did not have to clean my old GPU since I was installing a new one. I cannot tell you how much thermal paste was on this. It was obscene.

    I took care to really clean out the fan assemblies, including the fins. There was about 5 years of dust built up in there, and there was a noticeable reduction in airflow. I’m sure this didn’t help my thermal issues.

    Re-paste and Re-assemble!

    With everything sufficiently cleaned up and blown out, I applied an appropriate amount of thermal paste to the GPU and CPU, put the heatsink assembly back, and reversed the process to re-assemble. Again, it’s important to make note of all the connections: missing a ribbon cable or connection here will lead to unnecessary disassembly just to get it hooked back up.

    And now, we test…

    “But does it work?” There’s only one way to find out. I turned it back on, and, well, the screen came up, so that is a victory. Although, there is an integrated graphics chip… so maybe not as big a victory as I would anticipate.

    Windows 10 booted fine, and before I plugged in additional displays, I checked Device Manager. The card was detected and reported that it was functioning correctly. I plugged in my external displays (one HDMI, one mini DisplayPort -> HDMI), and they were detected normally and switched over.

    I fired up FurMark to run some GPU tests and see if the new AMD card works. Now, I had used FurMark on the old GPU and was unable to lock up my laptop the way Fusion 360 was doing. So FurMark alone is not a conclusive test, but it was worth running anyway.

    One thing I immediately noticed is that FurMark was reporting GPU temperatures, something that was not happening with my old GPU. That’s a good sign, right? After letting FurMark run the stress test for a while, I figured it was time to fire up Fusion 360 and try to hang my laptop.

    As with FurMark, Fusion 360 didn’t always hang the laptop. There was no single action that caused it, although orbiting objects quickly seemed to trigger problems more often than not. So I opened a few images and orbited them. No issues.

    Victory?

    I hesitate to declare total victory here: the GPU issue was not consistent, which means all I really know is I am back to where I started. Without some “time under tension,” I’m going to be very wary as I dig into modeling and make sure I save often. But there is promise that the change was for the better. If nothing else, the laptop got a good cleaning that it desperately needed.

  • Tech Tip – Interacting with ETCD in Rancher Kubernetes Engine 2

    Since cycling my cluster nodes is a “fire script and wait” operation, I kicked one off today. I ended up running into an issue that required me to dig a bit into ETCD in RKE2, and could not find direct help, so this is as much my own reference as it is a guide for others.

    I broke it…

    When provisioning new machines, I still have some odd behaviors when it comes to IP address assignment. I do not set the IP address manually: I use a static MAC address on the VM and then create a fixed IP for that MAC address. About 90% of the time, that works great. Every so often, though, in the provisioning process, the VM picks up an IP address from the DHCP instead of the fixed IP, and that wrecks stuff, especially around ETCD.

    This happened today: while standing up a replacement, the new machine picked up a DHCP IP. Unfortunately, I didn’t remove the old machine properly, which left my ETCD cluster still seeing the node as a member. When I deleted the node and tried to re-provision, I got ETCD errors because I was trying to add a node name that already existed.

    Getting into ETCD

    RKE2’s docs are a little quiet on actually viewing what’s in ETCD. Through some googling, I figured out that I could use etcdctl to show and manipulate members, but I couldn’t figure out how to actually run the command.

    As it turns out, the easiest way to run it is to run it on one of the ETCD pods itself. I came across this bug report in RKE2 that indirectly showed me how to run etcdctl commands from my machine through the ETCD pods. The member list command is

    kubectl -n kube-system exec <etcd_pod_name> -- sh -c "ETCDCTL_ENDPOINTS='https://127.0.0.1:2379' ETCDCTL_CACERT='/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt' ETCDCTL_CERT='/var/lib/rancher/rke2/server/tls/etcd/server-client.crt' ETCDCTL_KEY='/var/lib/rancher/rke2/server/tls/etcd/server-client.key' ETCDCTL_API=3 etcdctl member list"

    Note all the credential setting via environment variables. In theory, I could “jump in” to the etcd pod using a simple sh command and run a session, but keeping it like this forces me to be judicious in my execution of etcdctl commands.
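
    Removal follows the same pattern; the member ID comes from the first column of the member list output:

    kubectl -n kube-system exec <etcd_pod_name> -- sh -c "ETCDCTL_ENDPOINTS='https://127.0.0.1:2379' ETCDCTL_CACERT='/var/lib/rancher/rke2/server/tls/etcd/server-ca.crt' ETCDCTL_CERT='/var/lib/rancher/rke2/server/tls/etcd/server-client.crt' ETCDCTL_KEY='/var/lib/rancher/rke2/server/tls/etcd/server-client.key' ETCDCTL_API=3 etcdctl member remove <member_id>"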

    I found the offending entry and removed it from the list, and was able to run my cycle script again and complete my updates.

  • What’s in a home lab?

    A colleague asked today about my home lab configuration, and I came to the realization that I have never published a good inventory of the different software and hardware that I run as part of my home lab / home automation setup. While I have documented bits and pieces, I never pushed a full update. I will do my best to hit the highlights without boring everyone.

    Hardware

    I have a small cabinet in my basement mechanical room which contains the majority of my hardware, with some other devices sprinkled around.

    This is all a good mix of new and used stuff: eBay was a big help. Most of it was procured over several years, including a number of partial updates to the NAS disks.

    • NAS – Synology Diskstation 1517+. This is the 5-bay model. I added the M2D18 expansion card, and I currently have 5 x 4TB WD Red Drives and 2 x 1GB WD SSDs for cache. Total storage in my configuration is 14TB.
    • Server – HP ProLiant DL380p Gen8. Two Xeon E5-2660 processors, 288 GB of RAM, and two separate RAID arrays. The system array is 136GB, while the storage array is 1TB.
    • Network
      • HP ProCurve Switch 2810-24G – A 24-port gigabit switch that serves most of my switching needs.
      • Unifi Security Gateway – Handles all of my incoming/outgoing traffic through the modem and provides most of my high-level network capabilities.
      • Unifi Access Points – Three in total, 2 are the UAP-AC-LR models, the other is the UAP-AC-M outdoor antenna.
      • Motorola Modem – I did not need the features of the Comcast/Xfinity modem, nor did I want to lease it, so I bought a compatible modem.
    • Miscellaneous Items
      • BananaPi M5 – Runs Nginx as a reverse proxy into my network.
      • RaspberryPi 4B+ – Runs Home Assistant. This was a recent move, documented pretty heavily in a series of posts that starts here.
      • RaspberryPi Model B – That’s right, an O.G. Pi that runs my monitoring scripts to check for system status and reports to statuspage.io.
      • RaspberryPi 4B+ – Mounted behind the television in my office, this one runs a copy of MagicMirror to give me some important information at a glance.
      • RaspberryPi 3B+ – Currently dormant.

    Software

    This one is lengthy, so I broke it down into what I hope are logical and manageable categories.

    The server is running Windows Hyper-V Server 2019. Everything else, unless noted, is running on a VM on that server.

    Server VMs

    • Domain Controllers – Two Windows domain controllers (primary and secondary).
    • SQL Servers – Two SQL servers (non-production and production). It’s a home lab, so the express editions suffice.

    Kubernetes

    My activities around Kubernetes are probably the most well-documented of the bunch, but, to be complete: Three RKE2 Kubernetes clusters. Two three-node clusters and one four-node cluster to run internal, non-production, and production workloads. The nodes are Ubuntu 22.04 images with RKE2 installed.

    Management and Monitoring Tools

    For some management and observability into this system, I have a few different software suites running.

    • Unifi Controller – This makes management of the USG and Access points much easier. It is currently running in the production cluster using the jacobalberty image.
    • ArgoCD – Argo is my current GitOps operator and is used to make sure what I want deployed on my clusters is out there.
    • LGTM Stack – I have instances of Loki, Grafana, Tempo, and Mimir running in my internal cluster, acting as the central target for my logs, traces, and metrics.
    • Grafana Agent – For my VMs and other hardware that supports it, I installed Grafana Agent and configured them to report metrics and logs to Mimir/Loki.
    • Hashicorp Vault – I am running an instance of Hashicorp Vault in my clusters to provide secret management, using the External Secrets operator to provide cached secret management in Kubernetes.
    • Minio – In order to provide a local storage instance with S3 compatible APIs, I’m running Minio as a docker image directly on the Synology.

    Cluster Tools

    Using Application Sets and the Cluster generator, I configured a number of “cluster tools” which allow me to install different tools to clusters using labels and annotations on the Argo cluster Secret resource.
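
    As a rough sketch of the shape of one of these ApplicationSets (the repo URL, label, and paths here are placeholders, not my actual values):

    apiVersion: argoproj.io/v1alpha1
    kind: ApplicationSet
    metadata:
      name: kube-prometheus
      namespace: argocd
    spec:
      generators:
        - clusters:
            selector:
              matchLabels:
                cluster-tools/kube-prometheus: "enabled"   # hypothetical label on the Argo cluster Secret
      template:
        metadata:
          name: 'kube-prometheus-{{name}}'
        spec:
          project: default
          source:
            repoURL: https://example.com/ops-repo.git      # placeholder repository
            targetRevision: HEAD
            path: cluster-tools/kube-prometheus
          destination:
            server: '{{server}}'
            namespace: monitoring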

    This allows me to install multiple tools using the same configuration, which improves consistency. The following are configured for each cluster.

    • kube-prometheus – I use Bitnami’s kube-prometheus Helm chart to install an instance of Prometheus on each cluster. They are configured to remote-write to Mimir.
    • promtail – I use the promtail Helm chart to install an instance of Promtail on each cluster. They are configured to ship logs to Loki.
    • External Secrets – The External Secrets operator helps bootstrap connection to a variety of external vaults and creates Kubernetes Secret resources from the ExternalSecret / ExternalClusterSecret custom resources.
    • nfs-subdir-external-provisioner – For PersistentVolumes, I use the nfs-subdir-external-provisioner and configure it to point to dedicated NFS shares on the Synology NAS. Each cluster has its own folder, making it easy to back up through the various NAS tools (a rough values sketch follows this list).
    • cert-manager – While I currently have cert-manager installed as a cluster tool, if I remember correctly, this was for my testing of Linkerd, which I’ve since removed. Right now, my SSL traffic is offloaded at the reverse proxy. This has multiple benefits, not the least of which is that I was able to automate my certificate renewals in one place. Still, cert-manager is available but no certificate stores are currently configured.
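
    The nfs-subdir-external-provisioner values, for example, are roughly along these lines (server and path are placeholders standing in for my Synology shares):

    nfs:
      server: nas.example.lan          # placeholder for the Synology's address
      path: /volume1/k8s-production    # each cluster points at its own share/folder
    storageClass:
      name: nfs-client
      defaultClass: true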

    Development Tools

    It is a lab, after all.

    • ProGet – I am running the free version of ProGet for private NuGet and container image feeds. As I move to open-source my projects, I may migrate to GitHub artifact storage, but for now, everything is stored locally.
    • SonarQube Community – I am running an instance of SonarQube Community for code quality checks. However, as with ProGet, I have begun moving some of my open-source projects to SonarCloud, so this instance may fall away.

    Custom Code

    I have a few projects, mostly small APIs that allow me to automate some of my tasks. My largest “project” is my instance of Identity Server, which I use primarily to lock down my other APIs.

    And of course…

    WordPress. This site runs in my production cluster, using the Bitnami chart, which includes the database.

    And there you go…

    So that is what makes up my home lab these days. As with most good labs, things are constantly changing, but hopefully this snapshot presents a high level picture into my lab.

  • Tech Tip – You should probably lock that up…

    I have been running into some odd issues with ArgoCD not updating some of my charts, despite the Git repository having an updated chart version. As it turns out, my configuration and lack of Chart.lock files seems to have been contributing to this inconsistency.

    My GitOps Setup

    I have a few repositories that I use as source repositories for Argo. They contain a mix of my own resource definition files, which are raw manifest files, and external Helm charts. The external Helm charts are wrapped in umbrella charts, which let me add supporting resources (like secrets). My Grafana chart is a great example of it.

    Prior to this, I was not including the Chart.lock file in the repository. This made it easier to update the version in the Chart.yaml file without having to run a helm dependency update to refresh the lock file. I have been running this setup for at least a year, and I never really noticed much of a problem until recently. There were a few times when things would not update, but nothing systemic.

    And then it got worse

    More recently, however, I noticed that the updates weren’t taking. I saw the issue with both the Loki and Grafana charts: the version in Chart.yaml was updated, but Argo was still deploying the old version.

    I tried hard refreshes on the Applications in Argo, but nothing seemed to clear that cache. I poked around in the logs and noticed that Argo runs helm dependency build, not helm dependency update. That got me thinking “What’s the difference?”

    As it turns out, build operates using the Chart.lock file if it exists; otherwise it acts like update. update uses the Chart.yaml file to pull the latest matching versions.
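
    In other words (chart path is illustrative):

    # regenerates Chart.lock from the version constraints in Chart.yaml
    helm dependency update ./charts/loki
    # installs exactly what Chart.lock pins; without a lock file it behaves like update
    helm dependency build ./charts/loki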

    Since I was not committing my Chart.lock file, it stands to reason that somewhere in Argo there is a cached copy of a Chart.lock file that was generated by helm dependency build. Even though my Chart.yaml was updated, Argo was using the old lock file.

    Testing my hypothesis

    I committed a lock file 😂! Seriously, I ran helm dependency update locally to generate a new lock file for my Loki installation and committed it to the repository. And, even though that’s the only file that changed, like magic, Loki determined it needed an update.

    So I need to lock it up. But, why? Well, the lock file exists to ensure that subsequent builds use the exact version you specify, similar to npm and yarn. Just like npm and yarn, helm requires a command to be run to update libraries or dependencies.

    By not committing my lock file, the possibility exists that I could get a different version than I intended or, even worse, get a spoofed version of my package. The lock file maintains a level of supply chain security.

    Now what?

    Step 1 is to commit the missing lock files.

    At both work and home I have Powershell scripts and pipelines that look for potential updates to external packages and create pull requests to get those updates applied. So step 2 is to alter those scripts to run helm dependency update when the Chart.yaml is updated, which will update the Chart.lock and alleviate the issue.

    I am also going to dig into ArgoCD a little bit to see where these generated Chart.lock values could be cached. In testing, the only way around it was to delete the entire ApplicationSet, so I’m thinking that the ApplicationSet controller may be hiding some data.

  • Rollback saved my blog!

    As I was upgrading WordPress from 6.2.2 to 6.3.0, I ran into a spot of trouble. Thankfully, ArgoCD rollback was there to save me.

    It’s a minor upgrade…

    I use the Bitnami WordPress chart as the template source for Argo to deploy my blog to one of my Kubernetes clusters. Usually, an upgrade is literally 1, 2, 3:

    1. Get the latest chart version for the WordPress Bitnami chart. I have a Powershell script for that.
    2. Commit the change to my ops repo.
    3. Go into ArgoCD and hit Sync.

    That last one caused some problems. Everything seemed to synchronize, but the WordPress pod stalled at the “connecting to the database” step. I tried restarting the pod, but nothing changed.

    Now, the old pod was still running. So, rather than mess with it, I used Argo’s rollback functionality to roll the WordPress application back to its previous commit.

    What happened?

    I’m not sure. You can upgrade WordPress from the admin panel, but, well, that comes at a potential cost: if you upgrade the database as part of the WordPress upgrade and then “lose” the pod, you lose the application upgrade but not the database upgrade, and you are left in a weird state.

    So, first, I took a backup. Then, I started poking around in trying to run an upgrade. That’s when I ran into this error:

    Unknown command "FLUSHDB"

    I use the WordPress Redis Object Cache to get that little “spring” in my step. It seemed to be failing on the FLUSHDB command. At that point, I was stuck in a state where the application code was upgraded but the database was not. So I restarted the deployment and got back to 6.2.2 for both application code and database.

    Disabling the Redis Cache

    I tried to disable the Redis plugin, and got the same FLUSHDB error. As it turns out, the default Bitnami Redis chart disables these commands, but it would seem that the WordPress plugin still wants them.
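
    If I remember the Bitnami chart values correctly, re-enabling those commands amounts to something like this:

    master:
      disableCommands: []    # default is ["FLUSHDB", "FLUSHALL"]; empty re-enables them
    replica:
      disableCommands: []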

    So, I enabled the commands in my Redis instance (a quick change in the values files) and then disabled the Redis Object Cache plugin. After that, I was able to upgrade to WordPress 6.3 through the UI.

    From THERE, I clicked Sync in ArgoCD, which brought my application pods up to 6.3 to match my database. Then I re-enabled the Redis Plugin.

    Some research ahead

    I am going to check with the maintainers of the Redis Object Cache plugin. If the plugin relies on commands that the Bitnami chart disables by default, that most likely caused the issues I saw during my WordPress upgrade.

    For now, however, I can sleep under the warm blanket of Argo rollbacks!

  • Mi-Light… In the middle of my street?

    With a new Home Assistant instance running, I have a renewed interest in getting everything into Home Assistant that should be in Home Assistant. Now, HA supports a TON of integrations, so this could be a long post. For now, let’s focus on my LED lighting.

    Light It Up!

    In some remodeling efforts, I have made use of LED light strips for some accent lighting. I have typically purchased my strips from superbrightleds.com. The site has a variety of options for power supplies and controllers, and the pricing is on par with other options. I opted for the Mi-Light/Mi-Boxer controllers on this site for controlling these options.

    Why? Well, truthfully, I did not know any better. When I first installed LEDs, I did not have Home Assistant running, and I was less concerned with integration. I had some false hope that the Mi-Light Wi-Fi gateway would have an appropriate REST API that I could use for future integrations.

    As it turns out, it does not. To make matters worse, since I did not buy everything at the same time, I ended up getting a new version of the Wi-Fi gateway (MiBoxer), which required a different app. So, now, I have some lights on the Mi-Light app, and some lights on the Mi-Boxer app, but no lights in my Home Assistant. Clearly it is time for a change.

    A Myriad of Options

    As I started my Google research, I quickly realized there are a ton of options for controlling LED strips. They range from cloud-controlled options to, quite literally, maker-based options where I would need to solder boards and flash firmware.

    Truthfully, the research was difficult. There were so many vendors with proprietary software or cloud reliance, something I am really trying to avoid. I was hoping for something a bit more “off the shelf,” but without the cloud dependency and with built-in Home Assistant integration. Then I found Shelly.

    The Shelly Trials

    Shelly is a brand from the European company Allterco which focuses on IoT products. They have a number of controllers for smart lighting and control, and their API documentation is publicly available. This allows integrators like Home Assistant to create solid integration packages without trying to reverse engineer calls.

    I found the RGBW controller on Amazon, and decided to buy one to test it out. After all, I did not want to run into the same problem with Shelly that I did with MiLight/MiBoxer.

    Physical Features

    MiLight Controller (top) vs Shelly RGBW Controller (bottom)

    Before I even plugged it in, the size of the unit caught me by surprise. The controller is easily half the size of the MiLight unit, which makes mounting in some of the waterproof boxes I have easier.

    The wiring is pretty simple and extremely flexible. Since the unit will run on AC or DC, you simply attach it to positive and ground from your power source. The RGBW wires from the strip go into the corresponding terminals on the controller, and the strip’s power wire is jumped off of the main terminal.

    Does this mean that strip is always hot? Yes. You could throw a switch or relay on that strip power, but the strip should only draw power if the lights are on. Those lights are controlled by the RGBW wires, so if the Shelly says it is off, then it is off. It’s important to keep power to the controller, though, otherwise your integration won’t be able to communicate with it.

    Connectivity

    Shelly provides an app that lets you connect to the RGBW controller to configure its Wi-Fi settings. The app then lets you categorize the device in their cloud and assign it to a room and scene.

    However, I do not really care about that. I jumped over into Home Assistant and, lo and behold, a new detected integration popped up. When I configured it, I only needed to add the IP of the device. I statically assigned the IP for that controller using its MAC Address, so I let Home Assistant reach out to that IP for integration.

    And that was it. The device appeared in the Shelly integration with all the necessary entities and controls. I was able to control the device with Home Assistant, including changing colors, without any issues.

    Replacement Time

    At about $25 per device, the controllers are not cheap. However, the MiLight controllers, when I bought them, were about $18 each, plus I needed a Wi-Fi controller for every 4 controllers, at $40 each. So, by that math, the MiLight setup was $28 for each individually controlled LED strip with Wi-Fi connectivity. I will have to budget some extra cash to replace my existing controllers with new ones.

    Thankfully, the replacement is pretty simple: remove the MiLight controller, wire in the Shelly, and set it up. Once all my MiLight controllers are gone, I can unplug the two MiLight/MiBoxer Wi-Fi boxes I have. So that is two fewer devices on the network!

  • Replacing ISY with Home Assistant – Part 3 – Movin’ In!

    This is the continuation of a short series on transitioning away from the ISY using Home Assistant.

    Having successfully gotten my new Home Assistant instance running, move in day was upon me. I did not have a set plan, but things were pretty simple.

    But first, HACS

    The Home Assistant Community Store (HACS) is a custom component for Home Assistant that enables UI management of other custom components. I have a few integrations that utilize custom components, namely Orbit B-Hyve and GE Home (SmartHQ).

    In my old HA instance, I had simply copied those folders into the custom_components folder under my config directory, but HACS gives me the ability to manage these components from the UI, instead of via SSH. I followed the setup and configuration instructions to the letter, and was able to install the above custom components with ease.

    The Easy Stuff

    With HACS installed, I could tackle all the “non-major” integrations. I am classifying “major” as my Insteon and Z-Wave devices, since those require some heavier lifting. There were lots of little integrations with external services that I could pretty quickly set up in the new instance and remove from the old. These included things like:

    • Orbit B-Hyve: I have an irrigation system in the backyard for some potted plants, and I put an Orbit Smart Hose timer on it. The B-Hyve app lets me set the schedule, so I don’t really need to automate that every day, but I do have it setup to enable the rain delay via NodeRED.
    • MyQ: I have a Chamberlain garage door opener which is connected to MyQ, so this gives me the status of the door and the ability to open/close it.
    • GE Home: Not sure that I need to be able to see what my oven is doing, but I can.
    • Rheem Econet: I can monitor my hot water heater and set the temperature. It is mostly interesting to watch usage, and it is currently the only thing that allows me to track its power consumption.
    • Ring: This lets me get some information from my Ring doorbell, including its battery percentage.
    • Synology: The Synology integration lets me monitor all of my drives and cameras. There is not much to control, per se, but it collects a lot of data points that I then scrape into Prometheus for alerting.
    • Unifi: I run the Unifi Controller for my home network, and this integration gives me an entity for all the devices on my network. Again, I do not use much of the control aspect, but I definitely use the data being collected.

    Were these all easy? Definitely. I was able to configure all of these integrations on the new instance and then delete them from the old without conflict.

    Now it’s time for some heavy lifting.

    Z-Wave Migration

    I only have 6 Z-Wave devices, but all were on the Z-Wave network controlled by the ISY. To my knowledge, there is no easy migration. I set up the Z-Wave JS add-on in Home Assistant, selecting my Z-Wave antenna from the USB list. Once that was done, I had to drop each device off of the ISY and then re-add it to the new Home Assistant instance.

    Those steps were basically as follows:

    1. Pick a device to remove.
    2. Select “Remove a Z-Wave Device” from the Z-Wave Menu in the ISY.
    3. While it is waiting, put the device in “enroll/un-enroll” mode. It’s different for every device. On my Mineston plugs, it was ‘click the power button three times quickly.’
    4. Wait for the ISY to detect the removal.
    5. In Home Assistant, under the Z-Wave integration, click Devices. Click the Add Device button, and it will listen for devices.
    6. Put the device in “enroll/un-enroll” mode again.
    7. If prompted, enter the device pin. Some devices require them, some do not. Of my 6 devices, three had pins, three did not.
    8. Home Assistant should detect the device and add it.
    9. Repeat steps 1 through 8 for all your Z-Wave devices.

    As I said, I only have 6 devices, so it was not nearly as painful as it could have been. If you have a lot of Z-Wave devices, this process will take some time.

    Insteon Migration

    Truthfully, I expected this to be very painful. It wasn’t that bad. I mentioned in my transition planning post that I grabbed an XML list of all my nodes in the ISY. This is my reference for all my Insteon devices.

    I disconnected the ISY from the PLM and connected it to the Raspberry Pi. I added the Insteon integration, and entered the device address (in my case, it showed up as /dev/ttyUSB1). At that point, the Insteon integration went about finding all my devices. They showed up with their device name and address, and the exercise was to look up the address in my reference and rename the device in Home Assistant.

    Since scenes are written to the devices themselves, my scenes came over too. Once I renamed the devices, I could set the scene names to a friendly name.

    NodeRED automation

    After flipping the URL in my new Home Assistant instance to be my old URL, I went into NodeRED to see the damage. I had to make a few changes to get things working:

    1. I had to generate a new long-lived token in Home Assistant, and update NodeRED with the new token.
    2. Since devices changed, I had to touch every action and make sure I had the right devices selected. Not terrible, just a bit tedious.

    ALEXA!

    I use the ISY Portal for integration with Amazon Alexa, and, well, my family has gotten used to doing some things with Alexa. Nabu Casa provides Home Assistant Cloud to fill this gap.

    It is not worth much space here, other than to say their documentation on installation and configuration was spot on, so check it out if you need integration with Amazon Alexa or Google Assistant.

    Success!!!

    My ISY is shut down, and my Home Assistant is running the house, including the Insteon and Z-Wave devices.

    I did notice that, on reboot, the USB addresses of the Z-Wave antenna and the PLM swapped. The solution was to reconfigure the Insteon and Z-Wave integrations with the new addresses. Not hard, I just hope it is not a pattern.

    My NodeRED integrations are much more stable. Previously, NodeRED was calling Home Assistant, which was trying to use the ISY to control the devices. This was fraught with errors, mostly because the ISY’s APIs can be dodgy. With Home Assistant calling the shots directly, it’s much more responsive.

    I have to work on some of my scenes and automations for Insteon: while I had previously moved most of my programs out of the ISY and into NodeRED, there were a few stragglers that I still need to set up in NodeRED. But that will take about 20 minutes.

    At this point, I’m going to call this venture successful. That said, I will now focus my attention on my LED strips. I have about 6 different LED strips with some form of MiLight/MiBoxer controller. I hate them. So I will be exploring alternatives. Who knows, maybe my exploration will generate another post.

  • Replacing ISY with Home Assistant – Part 2 – A New Home

    This is the continuation of a short series on transitioning away from the ISY using Home Assistant.

    Getting Started, again

    As I mentioned in my previous post, my plan is to run my new instance of Home Assistant in parallel with my old instance and transfer functionality in pieces. This should allow me to minimize downtime, and through the magic of reverse proxy, I will end up with the new instance living at the same URL as the old instance.

    Part of the challenge of getting started is simply getting the Raspberry Pi set up in my desired configuration. I bought an Argon One M.2 case and an M.2 SSD to avoid running Home Assistant on an SD card. However, that requires a bit of prework, particularly for my older Pi.

    New Use, New Case

    I ordered the Argon One M.2 case after a short search. I was looking for a solution that allowed me to mount and connect an M.2 SSD. In this sense, there were far too many options. There are a number of “bare board” solutions, including one from Geekworm and another from Startech.com. The price points were similar, hovering around $25 per board. However, the bare board required me to buy a new case, and most of the “tall” cases required for both the Pi and the bare board ran another $15-$25, so I was looking at around $35-$45 for a new board and case.

    My Amazon searches kept bringing up the Argon One case, so I looked into it. It provided both the case and the SSD support, and added some thermal management and a sleek pinout extension. And, at about $47, the price point was similar to what I was going to spend on a board and new case, so I grabbed that case. Hands on, I was not disappointed: the case is solid and had a good guide for installation packaged with it.

    Always read ahead…

    When it comes to instructions, I tend to read ahead. Before I put everything in the case, I wanted to make sure I was going to be ready before I buttoned it up. As I read through the waveshare.com guide for getting the Argon One case running with a boot to SSD, I noticed steps 16 and 17.

    The guide walks through the process of using the SD Card Copier to move the image from the SD card to the SSD. However, I am planning on using the Home Assistant OS image, which means I’ll need to write that image to the SSD from my machine. Which means I have to get the SSD connected to my machine…

    Yet another Franken-cable

    I do not have a USB adapter for M.2 drives, because I do not flash them often enough to care. So how do I use the Raspberry Pi Imager to flash Home Assistant OS onto my SSD? With a Franken-cable!

    I installed the M.2 SSD in the Argon One’s base, but did not put the Pi on it. Using the bare base, I installed the “male to male” USB U adapter in the M.2 base, and used a USB extension cable to attach the other end of the U adapter to my PC. It showed up as an Argon SSD, and I was able to flash the SSD with Home Assistant OS.

    Updated Install Steps

    So, putting all this together, I did the following to get Home Assistant running on Raspberry Pi / Argon One SSD:

    1. Install the Raspberry Pi in the Argon One case, but do not attach the base with the SSD.
    2. From this guide, follow steps 1-15 as written. Then shutdown the system and take out the SSD.
    3. Install the SSD in Argon One base, and attach it to your PC using the USB Male to Male U adapter (included with the Argon) and a USB extension cable.
    4. Write the Home Assistant OS for RPI4 to the SSD using the Raspberry Pi Imager utility.
    5. Put the Argon One case together, and use the U adapter to connect the SSD to the RPI.
    6. Power on the RPI

    At this point, Home Assistant should boot for the first time and begin its setup process.

    Argon Add-ons

    Now, the Argon case has a built-in fan and fan controller. When using Raspbian, you can install the controller software. Home Assistant OS is different, but thankfully, Adam Outler wrote add-ons to allow Home Assistant to control the Argon fan.

    I followed the instructions, but then realized that I needed to enable I2C in order to get it to work. Adam to the rescue: Adam wrote HassOS configurator add-ons for both I2C and serial support. I installed the I2C configurator and ran it according to its instructions.

    Running on Empty

    My new Home Assistant instance is running. It is not doing anything, but it is running. Next steps will be to start migrating my various integrations from one instance to another.

  • Replacing ISY with Home Assistant – Part 1a – Transition Planning

    This is the continuation of a short series on transitioning away from the ISY using Home Assistant.

    The Experiment

    I mentioned in my previous post that I had ordered some parts to run an experiment with my PowerLinc Modem (PLM). I needed to determine that I could use my existing PLM (an Insteon 2413S, which is the serial version) with Home Assistant’s Insteon plugin.

    Following this highly-detailed post from P. Lutus, I ordered the following parts:

    • A USB Serial Adapter -> I used this one, but I think any USB to DB9 adapter will work.
    • A DB9 to RJ45 Modular Adapter -> The StarTech.com one seems to be popular in most posts, and it was easy to use.

    While I waited for these parts, I grabbed the Raspberry Pi 4 Model B that I have tasked for this and got to work installing a test copy of Home Assistant on it. Before you even ask, I have had this Raspberry Pi 4 for a few years now, prior to the shortages. It has served many purposes, but its most recent task was as a driver for MagicMirror on my office television. However, since I transitioned to a Banana Pi M5 for my network reverse proxy, well, I had a spare Raspberry Pi 3 Model B hanging around. So I moved MagicMirror to the RPi 3, and I have a spare RPi 4 ready to be my new Home Assistant.

    Parts Arrived!

    Once my parts arrived, I assembled the “Frankencable” necessary to connect my PLM to the Raspberry Pi. It goes PLM -> Standard Cat 5 (or Cat 6) Ethernet Cable -> RJ45 to DB9 Adapter -> Serial to USB Adapter -> USB Port on the Pi.

    With regard to the RJ45 to DB9 Adapter, you do need the pinout. Thankfully, Universal Devices provides one as part of their Serial PLM Kit. You could absolutely order their kit and it would work. But their Kit is $26: I was able to get the Serial to USB adapter for $11.99, and the DB9 to RJ45 for $3.35, and I have enough ethernet cable lying around to wire a new house, so I got away for under $20.

    Before I started, I grabbed an output of all of my Insteon devices from my ISY. Now, P. Lutus’ post indicates using the ISY’s web interface to grab those addresses, but if you are comfortable with Postman or have another favorite program for making Web API calls, you can get an XML document with everything. The curl command is

    curl http://<isy ip address>/rest/nodes -u "<isy user>:<isy password>"

    I used Postman to make the call and stored the XML in a file for reference.

    With everything plugged in, I added the Insteon integration to my test Home Assistant installation, selected the PLM Serial option, and filled in the device address. That last one took some digging, as I had to figure out which device to use. The easiest way to do it is to plug in the cable, then use dmesg to determine where in /dev the device is mounted. This linuxhint.com post gives you a few options for finding out more about your USB devices on Linux systems.
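
    For example, assuming a typical USB-to-serial adapter that shows up as a ttyUSB device:

    # watch the kernel log as you plug the adapter in...
    dmesg | grep -i ttyUSB
    # ...or just list the serial USB devices that are present
    ls -l /dev/ttyUSB*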

    At that point, the integration took some time to discover my devices. As mentioned in P. Lutus’ post, it will take some time to discover everything, and the battery-operated devices will not be found automatically. However, all of my switches came in, and each device address was available.

    What about Z-Wave?

    I have a few Z-Wave devices that also use the ISY as a hub. To move off of the ISY completely, I need a Z-Wave alternative. A colleague of mine runs Z-Wave at home to control EVERYTHING, and does so with a Z-Wave antenna and Home Assistant. I put my trust in his work and did not feel the need to experiment with the Z-Wave aspect. I just ordered an antenna.

    With that out of the way, I declared my experiment a success and started working on a transition plan.

    Raspberry Pi à la Mode

    Raspberry Pi is great on its own, but everything is better with some ice cream! In this case, my “ice cream” is a new case and a new M.2 SSD. Why? Home Assistant is chatty, and will be busy running a lot of my Home Automation. And literally every channel I’ve seen on the Home Assistant Discord server says “do not run on an SD card!”

    The case above is an expandable system that not only lets me add an M.2 SATA drive to the Pi, but also adds some thermal management for the platform in general. Sure, it also adds an HDMI daughter board, but considering I’ll be running Home Assistant OS, a dual-screen display of the command line does not seem like a wise choice.

    With those parts on order, I have started to plan my transition. It should be pretty easy, but there are a number of steps involved. I will be running two instances of Home Assistant in parallel for a time, just to make sure I can still turn the lights on and off with the Amazon Echo… If I don’t, my kids might have a fit.

    1. Get a good inventory of what I have defined in the ISY. I will want a good reference.
    2. Get a new instance of Home Assistant running from the SSD on the RPI 4. I know there will be some extra steps to get this all working, so look for that in the next post.
    3. Check out voice control with Home Assistant Cloud. Before I move everything over, I want to verify the Home Assistant Cloud functionality. I am currently paying for the ISY portal, so I’ll be switching from one to the other for the virtual assistant integration.
    4. Migrate my Z-Wave devices. Why Z-Wave first? I only have a few of those (about 6), and they are running less “vital” things, like lamps and landscape lighting. Getting the Z-Wave devices transferred will allow me to test all of my automation before moving the Insteon devices.
    5. Migrate my Insteon devices. This should be straightforward, although I’ll have to re-configure any scenes and automations in Node Red.

    Node Red??

    For most of my automation, I am using a separate installation of Node Red and the Home Assistant palette.

    Node Red provides a great drag-and-drop experience for automation, and allows for some pretty unique and complex flows. I started moving away from the ISY programs over the last year. The only issue with it has been that the ISY’s API connectivity is spotty, meaning Home Assistant sometimes has difficulty talking to the ISY. Since Node Red goes through Home Assistant to get to the ISY, sometimes the programs look like they’ve run correctly when, in fact, they have not.

    I am hoping that removing the ISY will provide a much better experience with this automation.

    Next Steps

    When my parts arrive, I will start into my plan. Look for the next in the series in a week or so!