• Publishing Markdown to Confluence

    For what seems like a long time now, I have been using RittmanMead’s md_to_conf project to automatically publish documentation from Markdown files into Confluence Cloud. I am on something of a documentation kick, and what started as a quick fix ultimately turned into a new project.

    It all started with the word “legacy”…

    In publishing our Markdown files, I noticed that the pages were all marked as using the legacy editor in Confluence Cloud. I wanted to move our pages to the updated editor, and my gut reaction was “well, maybe it is because the md_to_conf project was using the v1 APIs.”

    The RittmanMead project is great, but it has not changed in about a year. Now, granted, once things work, I wouldn’t expect much change.

    So I forked the project and started changing some API calls. The issue is, well, I just did not know when to stop. My object-oriented tendencies took over, and I ended up way past the point of no return:

    • Split converter and client code into separate Python classes to simplify the main module.
    • Converted the entire project to a Python module and built a wheel for simplified installation and execution.
    • Added Flake8/Black for linting.
    • Added a GitHub workflow for building and publishing.
    • Added steps to analyze the code in SonarCloud.io.

    It is worth noting that, at the end of the day, the editor value was already supported in RittmanMead/md_to_conf: You just have to set the version argument to 2 when running. I found that out about an hour or so into my journey, but, by that time, I was committed.

    Making a break for it

    At this point, a few things had happened:

    1. My code had diverged greatly from the RittmanMead project.
    2. The functionality had most likely drifted from the purpose for which the original project was written.
    3. I broke support for Confluence Server.
    4. I have some plans for additional features for the module, including the ability to pull labels from the Markdown.

    With that in mind, I had GitHub “break” the fork connection between my repository and the RittmanMead repository.

    Let me be VERY clear: the README will ALWAYS reflect the source of this repository. This project would not be possible without the contributors to the RittmanMead script, and whatever happens in my repository is building on their fine work. But I have designs on a more formal package and process, as well as my own functional roadmap, so a split makes the most sense.

    Introducing md-to-conf

    With that in mind, I give you md-to-conf (PyPI / GitHub)! I will be adding some issues for feature enhancements and working on them as I can, although my first order of business will most likely be some basic testing to make sure I don’t break anything as I work.

    If you have a desire to contribute, please see the contribution guideline and have at it!

  • You really should get that documented…

    One of the most important responsibilities of a software engineer/architect is to document what they have done. While it is great that they solved the problem, that solution will become a problem for someone else if it has not been well documented. Truthfully, I have caused my own problems when revisiting old, improperly documented code.

    Documentation Generation

    Most languages today have tools that will extract code comments and turn them into formatted content. I used mkdocs to generate documentation for my simple Python monitoring tool.

    Why generate from the code? The simple reason is, if the documentation lives within the code, then it is more likely to be updated. Requiring a developer to jump into another repository or project takes time and can be a forgotten step in development. The more automatic your documentation is, the more likely it is to get updated.

    Infrastructure as Code (IaC) is no different. Lately I have been spending some time in Terraform, and sought out a solution for documenting Terraform. terraform-docs to the rescue!

    Documenting Terraform with terraform-docs

    The Getting Started guides for terraform-docs are pretty straightforward and let you generate content for a variety of output targets. All I really wanted was a simple README.md file for my projects and any sub-modules, so I left the defaults pretty much as-is.

    I typically structure my repositories with a main terraform folder, and sub-modules under that if needed. Without any additional configuration, this command worked like a charm:

    terraform-docs markdown document --output-file README.md .\terraform\ --recursive

    It generated a README.md file for my main module and all sub-modules. I played a little with configuration, mostly to set up default values in the configuration YAML file so that others could run a simpler command:

    terraform-docs --output-file README.md .\terraform\ --recursive
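
    For reference, here is a rough sketch of the kind of .terraform-docs.yml I ended up with; the values reflect my layout, and the exact keys can vary between terraform-docs versions, so treat it as a starting point:

    # .terraform-docs.yml at the repository root
    formatter: "markdown document"

    output:
      file: "README.md"     # write the generated docs to README.md next to each module
      mode: replace         # overwrite the file on every run

    recursive:
      enabled: true         # also generate docs for sub-modules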

    I will get a pre-commit hook configured in my repositories to run this command before a commit to ensure the documents are always up to date.
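
    As a sketch of where I am headed, something along these lines should do it; I am assuming the community pre-commit-terraform hooks here, and the hook id and pinned version are placeholders to verify against your own setup:

    # .pre-commit-config.yaml
    repos:
      - repo: https://github.com/antonbabenko/pre-commit-terraform
        rev: v1.81.0               # pin to whatever release is current
        hooks:
          - id: terraform_docs     # regenerates module READMEs before each commit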

  • Generative AI comes to Google

    About a month ago, I signed up for Google’s generative AI offerings: SGE (generative AI in Search) and Code Tips. I was accepted to the labs a few weeks ago, and, overall, it’s been a pretty positive experience.

    SGE – Yet Another Acronym

    I don’t really have any idea what SGE means, but it is essentially the generative AI component of search. The implementation is fairly intrusive: your search is fed into the generative AI, which processes it and returns answers that show at the top of the page. Longer searches require you to click a button to submit the phrase to generative AI, but short searches are always submitted.

    Why is that intrusive? Well, when the results are returned, they can take up nearly all of the screen real estate, pushing actual search results to the bottom. In most cases, I just want the search results, not the generative AI’s opinion on the matter. I would prefer having to submit the search to generative AI explicitly. Even nicer would be a separate submit button: type your search and hit “Submit to SGE” or something like that, at which point it not only searches but also submits to SGE, while a standard search remains unchanged.

    As to accuracy: It’s about what I’ve seen from other tools, but nothing that really gets me overly excited. Truthfully, Google’s search algorithm tends to return results that are more meaningful than the SGE results.

    Code Tips

    Where SGE failed, Code Tips has succeeded with flying colors. These are the types of searches that I perform on a daily basis, and Code Tips does a pretty good job with them.

    For example, this search:

    powershell create object array

    returns the following suggestion from Code Tips:

    $array = @()
    $array += [pscustomobject]@{
      name = "John Doe"
      age = 30
    }
    $array += [pscustomobject]@{
      name = "Jane Doe"
      age = 25
    }

    Every Code Tip comes with a warning: use code with caution with a link to a legal page around responsible use and citations. In other words, Google is playing “CYA”.

    Code Tips works great for direct questions, but some indirect questions, like c# syntax for asynchronous lambda function, return nothing. However, with that same search string, SGE returns a pretty good synopsis of how to write an asynchronous lambda function.

    Overall Impressions

    Code Tips… a solid B+. I would expect it to get better, but, unlike GitHub Copilot, it’s not built into an IDE, which means I’m still defaulting to “ask the oracle” for answers.

    SGE, I can’t go above a C+. It’s a little intrusive for my taste, and right now, the answers it provides aren’t nearly as helpful as the search results that I now need to scroll to see.

  • Pack, pack, pack, they call him the Packer….

    Through sheer happenstance I came across a posting for The Jaggerz playing near me and was taken back to my first time hearing “The Rapper.” I happened to go to school with one of the band members’ kids, which made it all the more fun to reminisce.

    But I digress. I spent time a while back getting Packer running at home to take care of some of my machine provisioning. At work, I have been looking for an automated mechanism to keep some of our build agents up to date, so I revisited this and came up with a plan involving Packer and Terraform.

    The Problem

    My current problem centers on the need to update our machine images weekly while still using Terraform to manage our infrastructure. In the case of Azure DevOps, we can provision VM Scale Sets and assign those Scale Sets to an Azure DevOps agent pool. But when I want to update that image, I can do it in two different ways:

    1. Using Azure CLI, I can update the Scale Set directly.
    2. I can modify the Terraform repository to update the image and then re-run Terraform.

    Now, #1 sounds easy, right? Run a command and I’m done. But it defeats the purpose of Terraform, which is to maintain infrastructure as code. So, I started down path #2.

    Packer Revisit

    I previously used Packer to provision Hyper-V VMs, and the azure-arm builder is pretty similar. I was able to configure a simple Windows-based VM and get the only application I needed installed with a PowerShell script.

    One app? On a build agent? Yes, this is a very particular agent, and I didn’t want to install it everywhere, so I created a single agent image with the necessary software.

    Mind you, I have been using the runner-images Packer projects to build my Ubuntu agent at home, and we use them to build both Windows and Ubuntu images at work, so, by comparison, my project is wee tiny. But it gives me a good platform to test. So I put a small repository together with a basic template and a PowerShell script to install my application, and it was time to build.
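
    To give a sense of how small this project is, the template boils down to something like the sketch below. The names, image SKU, and sizes are simplified placeholders from my setup, not a drop-in file:

    # windows2022.pkr.hcl (simplified sketch)
    variable "client_id" {
      type    = string
      default = env("ARM_CLIENT_ID")   # falls back to the environment variable
    }
    variable "client_secret" {
      type      = string
      default   = env("ARM_CLIENT_SECRET")
      sensitive = true
    }
    variable "subscription_id" {
      type    = string
      default = env("ARM_SUBSCRIPTION_ID")
    }
    variable "tenant_id" {
      type    = string
      default = env("ARM_TENANT_ID")
    }
    variable "vm_name" {
      type = string
    }

    source "azure-arm" "windows2022" {
      client_id       = var.client_id
      client_secret   = var.client_secret
      subscription_id = var.subscription_id
      tenant_id       = var.tenant_id

      managed_image_name                = var.vm_name
      managed_image_resource_group_name = "rg-images"

      os_type         = "Windows"
      image_publisher = "MicrosoftWindowsServer"
      image_offer     = "WindowsServer"
      image_sku       = "2022-datacenter"
      location        = "eastus"
      vm_size         = "Standard_D2s_v3"

      communicator   = "winrm"
      winrm_use_ssl  = true
      winrm_insecure = true
      winrm_timeout  = "5m"
      winrm_username = "packer"
    }

    build {
      sources = ["source.azure-arm.windows2022"]

      # Install the one application this agent image needs
      provisioner "powershell" {
        script = "install-my-app.ps1"
      }
    }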

    Creating the Build Pipeline

    My build process should be, for all intents and purposes, one step that runs the packer build command, which will create the image in Azure. I found the PackerBuild@1 task and thought my job was done. It would seem that the Azure DevOps task hasn’t kept up with the times; either that, or Packer’s CLI needs help.

    I wanted to use the PackerBuild@1 task to take advantage of the service connection. I figured, if I could run the task with a service connection, I wouldn’t have to store service principal credentials in a variable library. As it turns out… well, I would have to do that anyway.

    When I tried to run the task, I got an error that “packer fix only supports json.” My template is in HCL format, and everything I have seen suggests that Packer would rather move to HCL. Not to be beaten, I looked at the code for the task to see if I could skip the fix step.

    Not only could I not skip that step, but when I dug into the task, I noticed that I wouldn’t be able to use the service connection parameter with a custom template. So with that, my dreams of using a fancy task went out the door.

    Plan B? Use Packer’s ability to grab environment variables as default values and set those environment variables in a PowerShell script before running the Packer build. It is not super pretty, but it works.

    - pwsh: |
        # Map pipeline variables to the ARM_* environment variables the Packer template reads
        $env:ARM_CLIENT_ID = "$(azure-client-id)"
        $env:ARM_CLIENT_SECRET = "$(azure-client-secret)"
        $env:ARM_SUBSCRIPTION_ID = "$(azure-subscription-id)"
        $env:ARM_TENANT_ID = "$(azure-tenant-id)"
        # Build the image, naming it after the pipeline build number
        Invoke-Expression "& packer build --var-file values.pkrvars.hcl -var vm_name=vm-image-$(Build.BuildNumber) windows2022.pkr.hcl"
      displayName: Build Packer

    On To Terraform!

    The next step was terraforming the VM Scale Set. If you are familiar with Terraform, the VM Scale Set resource in the AzureRM provider is pretty easy to use. I used the Windows VM Scale Set, as my agents will be Windows based. The only “trick” is finding the image you created, but, thankfully, that can be done by name using a data block.

    data "azurerm_image" "image" {
      name                = var.image_name
      resource_group_name = data.azurerm_resource_group.vmss_group.name
    }

    From there, just set source_image_id to data.azurerm_image.image.id, and you’re good. Why look this up by name? It makes automation very easy.
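
    Roughly, the scale set resource ends up looking like the sketch below; the names, sizes, and network wiring are placeholders trimmed down from my configuration:

    resource "azurerm_windows_virtual_machine_scale_set" "agents" {
      name                = "vmss-build-agents"
      resource_group_name = data.azurerm_resource_group.vmss_group.name
      location            = data.azurerm_resource_group.vmss_group.location
      sku                 = "Standard_D2s_v3"
      instances           = 2
      admin_username      = "agentadmin"
      admin_password      = var.admin_password

      # Point the scale set at the image Packer just built, looked up by name
      source_image_id = data.azurerm_image.image.id

      os_disk {
        caching              = "ReadWrite"
        storage_account_type = "Standard_LRS"
      }

      network_interface {
        name    = "vmss-nic"
        primary = true

        ip_configuration {
          name      = "internal"
          primary   = true
          subnet_id = var.subnet_id
        }
      }
    }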

    Gluing the two together

    So I have a pipeline that builds an image, and I have another pipeline that executes the Terraform plan/apply steps. The latter is triggered on a commit to main in the Terraform repository, so how can I trigger a new build?

    All I really need to do is “reach in” to the Terraform repository, update the variable file with the new image name, and commit it. This can be automated, and I spent a lot of time doing just that as part of implementing our GitOps workflow. In fact, as I type this, I realize that I probably owe a post or two on how exactly we have done that. But, using some scripted git commands, it is pretty easy.

    So, my Packer build pipeline will check out the Terraform repository, change the image name in the variable file, and commit. This is where the image name is important: Packer did not spit out the Azure image ID (at least, not that I saw), so having a known name makes it easy to tell Terraform to use the new image name, which it uses to look up the ID.
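
    Conceptually, the “reach in” step is just a few scripted git commands, along the lines of this sketch (the repository URL and variable file name are placeholders, and $(Build.BuildNumber) is expanded by the pipeline just as in the build step above):

    # Sketch: update the Terraform repo with the new image name and push
    $imageName = "vm-image-$(Build.BuildNumber)"   # same name passed to packer build

    git clone https://dev.azure.com/myorg/infra/_git/terraform-agents
    Set-Location terraform-agents

    # Swap the image_name value in the variable file for the freshly built image
    (Get-Content values.auto.tfvars) -replace 'image_name\s*=.*', "image_name = `"$imageName`"" |
      Set-Content values.auto.tfvars

    git commit -am "Use image $imageName"
    git push origin main   # triggers the Terraform plan/apply pipeline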

    What’s next?

    This was admittedly pretty easy, but only because I have been using Packer and Terraform for some time now. The learning curve is steep, but as I look across our portfolio, I can see areas where these types of practices can help us by allowing us to build fresh machine images on a regular cadence, and stop treating our servers as pets. I hope to document some of this for our internal teams and start driving them down a path of better deployment.

  • The Battle of the Package Managers

    I dove back into React over the past few weeks, and was trying to figure out whether to use NPM or Yarn for package management. NPM has always seemed slow, and in the few times I tried Yarn, it seemed much faster. So I thought I would put them through their paces.

    The Projects

    I was able to test on a few different projects, some at home and some at work. All were React 18 with some standard functionality (testing, linting, etc), although I did vary between applications using Vite and component libraries that used webpack. While most of our work projects use NPM, I did want to try with Yarn in that environment, and I ended up moving my home environment to Yarn for the test.

    The TL;DR version of this is: Yarn is great and fast, but I had so much trouble authorizing scoped feeds against a ProGet npm feed that I ditched Yarn at work in favor of our NPM standard. At home, where I use public packages, it’s not an issue, so I’ll continue using Yarn there.

    Migrating to Yarn

    NPM to Yarn 1.x is easy: the commands are pretty much fully compatible, node_modules is still used, and the authentication is pretty much the same. Migrating from Yarn 1 to “modern Yarn” is a little more involved.

    The migration overall, however, was easy, at least at home, where I was not dealing with custom registries. At work, I had to use a .yarnrc.yml file to set up some configuration for our npm registries.
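
    For the curious, the relevant portion of that .yarnrc.yml looked roughly like this; the scope, URL, and token variable are placeholders rather than our real values:

    # .yarnrc.yml
    npmScopes:
      mycompany:                                        # packages published as @mycompany/*
        npmRegistryServer: "https://proget.example.com/npm/private-npm/"
        npmAlwaysAuth: true
        npmAuthToken: "${NPM_AUTH_TOKEN}"               # injected from the environment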

    Notable Differences

    Modern Yarn has some different syntax but, overall, is pretty close to its predecessor. It’s notably faster, and if you convert to PnP (Plug’n’Play) package management, your node_modules folder goes away.

    The package managers are still “somewhat” interchangeable, save for any npm commands you may have in custom scripts in your package.json file. That said, I would NEVER advise you to use different package managers on the same project.

    Yarn is much faster than NPM at pretty much every task. Also, the interactive upgrade plugin makes updating packages a breeze. But, I ran into an authentication problem I could not get past.

    The Auth Problem

    We use ProGet for our various feeds. It provides a single repository for packages and container images. Our npm packages are scoped to our company name.

    In configuring Yarn for these scoped registries, I was never able to get authentication working so that I could add a package from our private feeds. The error message was something to the effect of Invalid authentication (as an anonymous user). All my searching yielded no good solutions, in spite of hard-coding a valid auth token in the .yarnrc.yml file.

    Now, I have been having some “weirder” issues with NPM authentication as well, so I am wondering if it is machine specific. I have NOT yet tested at home, which I will get to. However, my work projects have other deadlines, and I wasn’t about to burn cycles on getting auth to work. So, at work, I backed out of Yarn for the time being.

    What to do??

    As I mentioned above, some more research is required. I’d like to set up a private feed at home, just to prove that there is either something wrong with my work machine OR something wrong with Yarn connecting to ProGet. I’m thinking it’s the former, but, until I can get some time to test, I’ll go with what I know.

    That said, if it IS just a local issue, I will make an effort to move to Yarn. I believe the speed improvements are worth it alone, but there are some additional benefits that make it a good choice for package management.

  • A small open source contribution never hurt anyone

    Over the weekend, I started using the react-runtime-config library to configure one of my React apps. While it works great, the last activity on the library was over two years ago, so I decided to fork the repository and publish the library under my own scope. This led me down a very eccentric path and opened a number of new doors.

    Step One: Automate

    I wanted to create an automated build and release process for the library. The original repository uses TravisCI for builds, but I much prefer having the builds within GitHub for everyone to see.

    The build and publish processes are pretty straightforward: I implemented a build pipeline that is triggered on any commit and runs a build and test. The publish pipeline is triggered on creating a release in GitHub and runs the same build/test, but it also updates the package version to the release version and then publishes the package to npmjs.org under the @spydersoft scope.
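
    Trimmed down, the publish workflow looks something like the sketch below; the action versions and script names are assumptions about my setup rather than a canonical recipe:

    # .github/workflows/publish.yml (sketch)
    name: Publish

    on:
      release:
        types: [created]

    jobs:
      publish:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v3
          - uses: actions/setup-node@v3
            with:
              node-version: 18
              registry-url: https://registry.npmjs.org
          - run: npm ci
          - run: npm run build && npm test
          # Stamp the package with the release tag, then publish under the @spydersoft scope
          - run: npm version ${{ github.event.release.tag_name }} --no-git-tag-version
          - run: npm publish --access public
            env:
              NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}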

    Sure, I could have stopped there… but, well, there was a ball of yarn in the corner I had to play with.

    Step 2: Move to Yarn

    NPM is a beast. I have worked on projects which take 5-10 minutes to run an install. Even my little test UI project took about three minutes to run an npm install and npm build.

    The “yarn v. npm” war is not something I’d like to delve into in this blog. If you want more detail, Ashutosh Krishna recently posted a pretty objective review of both over on Knowledge Hut. Before going all in on Yarn with my new library, I tested Yarn on my small UI project.

    I started by deleting my package-lock.json and node_modules folder. Then, I ran yarn install to get things running. By default, Yarn was using Yarn 1.x, so I still got a node_modules folder, but a yarn.lock file instead of package-lock.json. I modified my CI pipelines, and I was up and running.

    On my build machine, the yarn install command ran in 31 seconds, compared to 38 seconds for npm install on the same project. yarn build took 34 seconds, compared to 2 minutes and 20 seconds for npm build.

    Upgrading to modern Yarn

    In my research, I noted that there are two distinct flavors of Yarn: what the project terms the “modern versions” (v2.x and above) and the classic v1.x line. As I mentioned, the yarn command on my machine defaults to the 1.x version. Not wanting to be left behind, I decided to migrate to the new version.

    The documentation was pretty straightforward, and I was able to get everything running. I am NOT yet using the Plug’n’Play installation strategy, but I wanted to take advantage of what the latest versions have to offer.
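
    If memory serves, the migration itself boiled down to a couple of commands plus an optional config tweak to keep node_modules around; the plugin import is only needed on Yarn 2/3, where upgrade-interactive is not built in:

    # Switch the project to the latest "modern" Yarn release
    yarn set version stable

    # Optional: keep the familiar node_modules layout instead of Plug'n'Play
    # (add "nodeLinker: node-modules" to .yarnrc.yml)

    # Pull in the handy interactive upgrade plugin (upgrade-interactive)
    yarn plugin import interactive-tools

    # Reinstall against the new version
    yarn install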

    First Impressions

    I am not an “everyday npm user,” so I cannot speak to all the edge cases which might occur. Generally, I am limited to the standard install/build/test commands that are used as part of web development. While I needed to learn some new commands, like yarn add instead of npm install, the transition was not overly difficult.

    In a team environment, however, moving to Yarn would require some coordination between the team to ensure everyone knows when changes would be made and avoid situations where both package managers are used.

    With my small UI project converted, I was impressed enough to move the react-runtime-config repository and pipelines to use Yarn 3.

    Step 3: Planned improvements

    I burned my allotted hobby time on steps 1 and 2, so most of what is left is a to-do list for react-runtime-config.

    Documentation

    I am a huge fan of generated docs, so I would like to set some up for my version of the library. The current READMEs are great, but I also like to have technical documentation for those who want it.

    External Configuration Injection

    This one is still very much in a planning stage. My original thought was to allow the library to make a call to an API to get configuration values, however, I do not want to add any unnecessary overhead to the library. It may be best to allow for a hook.

    I would also want to be able to store those values in local storage, but still have them be updatable. This type of configuration will support applications hosted within a “backend for frontend” API, and allow that API to pass configuration values as needed.

    Wrapping up

    I felt like I made some progress over the weekend, if only for my own projects. Moving react-runtime-config allowed me to make some general upgrades to the library (new dependency updates) and sets the stage for some additional work. My renewed interest in all things Node also stimulated a move from Create React App to Vite, which I will document further in an upcoming post.

  • Update: Creating an Nginx-based web server image – React Edition

    This is a short update to Creating a simple Nginx-based web server image which took me about an hour to figure out and 10 seconds to fix…..

    404, Ad-Rock’s out the door

    Yes, I know it’s “four on the floor, Ad-Rock’s at the door.” While working on hosting one of my project React apps in a Docker image, I noticed that the application loaded fine, but I was getting 404 errors after navigating to a sub-page (like /clients) and then hitting refresh.

    I checked the container logs, and lo and behold, there were 404 errors for those paths.

    Letting React Router do its thing

    As it turns out, my original Nginx configuration was missing a key line:

    server {
      listen 8080;
      server_name localhost;
      port_in_redirect off;

      location / {
        root /usr/share/nginx/html;
        index index.html index.htm;

        # The line below is required for react-router
        try_files $uri $uri/ /index.html;
      }

      error_page 500 502 503 504 /50x.html;
      location = /50x.html {
        root /usr/share/nginx/html;
      }
    }

    That little try_files line made sure to push unknown paths back to index.html, where react-router would handle them.

    And with that line, the 404s disappeared and the React app was running as expected.

  • Configuring React SPAs at Runtime

    Configuring a SPA is a tricky affair. I found some tools to make it a little bit easier, but the approach should still be used with a fair amount of caution.

    The App

    I built a small React UI to view some additional information that I am storing in my Unifi Controller for network devices. Using the notes field on the Unifi device, I store some additional fields in JSON format for other applications to use. It is nothing wild, but it lets me keep some additional detail on my network devices.

    In true API-first fashion, any user-friendly interface is an afterthought… Since most of my interaction with the service is through PowerShell scripts, I did not bother to create the UI.

    However, I got a little tired of firing up Postman to edit a few things, so I spun up a React SPA for the task.

    Hosting the SPA

    I opted to host the SPA in its own container running Nginx to serve the files. Sure, I could have thrown the SPA inside the API and hosted it as static files, which is a perfectly reasonable and efficient method. My long-term plan is to create a new “backend for frontend” API project that hosts this SPA and provides appropriate proxying to various backend services, including my Unifi API. But I want to get this out, so a quick Nginx container it is.

    I previously posted about creating a simple web server image using Nginx. Those instructions (and an important update for React) served me well to get my SPA running, but how can I configure the application at runtime? I want to build the image once and deploy it any number of times, so having to rebuild just to change a URL seems crazy.

    Enter react-runtime-config

    Through some searching, I found the react-runtime-config library. This library lets me set configuration values either in local storage, in a configuration file, or in the application as a default value. The library’s documentation is solid and enough to get you started.

    But, wait, how do I use this to inject my settings??? ConfigMaps! Justin Polidori describes how to use Kubernetes ConfigMaps and volume mounts to replace the config.js file in the container with one from the Kubernetes ConfigMap.

    It took a little finagling since I am using a library chart for my Helm templating, but the steps were something like this:

    1. Configure the React app using react-runtime-config. I added a config.js file to the public folder, and made sure my app was picking settings from that file.
    2. Create a ConfigMap with my window.* settings.
    3. Mount that ConfigMap in my container as /path/to/public/config.js

    Voilà! I can now control some of the settings of my React app dynamically.
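
    In Kubernetes terms, the moving parts look roughly like this; the names, keys, and paths are simplified from my setup, and the window.* values are just whatever your react-runtime-config setup expects:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: unifi-ui-config
    data:
      config.js: |
        // Served in place of the baked-in public/config.js
        window.UNIFI_API_URL = "https://unifi-api.internal.example.com";
        window.AUTH_CLIENT_ID = "unifi-ui";
    ---
    # In the Deployment spec, mount that key over the file Nginx serves:
    #   volumes:
    #     - name: runtime-config
    #       configMap:
    #         name: unifi-ui-config
    #   containers:
    #     - name: unifi-ui
    #       volumeMounts:
    #         - name: runtime-config
    #           mountPath: /usr/share/nginx/html/config.js
    #           subPath: config.js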

    Caveat Emptor!

    I cannot stress this enough: THIS METHOD SHOULD NOT BE USED FOR SECRET OR SENSITIVE INFORMATION. Full stop.

    Generally, the problem with SPAs, whether they are React, Angular, or pick your favorite framework, is that they live on the client in plain text. Hit F12 in your favorite browser, and you see the application code.

    Hosting settings like this means the settings for my application are available just by navigating to /config.js. Therefore, it is vital that these settings are not in any way sensitive values. In my case, I am only storing a few public URLs and a Client ID, none of which are sensitive values.

    The Backend for Frontend pattern allows for more security and control in general. I plan on moving to this when I create a BFF API project for my template.

  • Talk to Me Goose

    I’ve gone and done it: I signed up for a trial of GitHub Copilot. Why? I had two driving needs.

    In my work as an architect, I do not really write a TON of code. When I do, it is typically for proofs of concept or models for others to follow. With that in mind, I am not always worried about the quality of the code: I am just looking to get something running so that others can polish it and make it better. So, if Copilot can accelerate my delivery of these POCs and models, it would be great.

    At home, I tinker when I can with various things. Whether I am contributing to open source projects or writing some APIs to help me at home, having a little AI companion might be helpful.

    My one month experiment

    GitHub Copilot offers a free thirty-day trial, so I signed up. Unfortunately, because I did not have a GitHub Enterprise account, I do not have access to Copilot for Business. Since that has privacy guarantees that Copilot for Individuals does not, I kept Copilot on my home machine.

    In spite of this, I did sufficient work in the 30 days to get a pretty good idea of what Copilot has to offer. And I will say, I was quite impressed.

    Intellisense on Steroids

    With its integration into VS Code and Visual Studio, Copilot really beefs up IntelliSense. Where normal IntelliSense will complete a variable name or function call, Copilot suggests code based on the context in which I am typing. Start typing a function name, and Copilot will suggest the body of the function, using the code around it as reference. Natural-language comments are my favorite: add a comment like “bubble sort a generic list,” and Copilot will generate code for it.
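
    To make that concrete, here is a hand-written sketch of the sort of code that comment tends to produce; I am reconstructing the idea rather than quoting Copilot’s actual output:

    using System;
    using System.Collections.Generic;

    public static class Sorting
    {
        // bubble sort a generic list
        public static void BubbleSort<T>(IList<T> list) where T : IComparable<T>
        {
            for (int i = 0; i < list.Count - 1; i++)
            {
                for (int j = 0; j < list.Count - i - 1; j++)
                {
                    if (list[j].CompareTo(list[j + 1]) > 0)
                    {
                        // swap adjacent items that are out of order
                        (list[j], list[j + 1]) = (list[j + 1], list[j]);
                    }
                }
            }
        }
    }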

    Head to Head!

    As I could not install Copilot on my work machine, I am essentially running a head-to-head comparison of “Copilot vs No Copilot.” In this type of comparison, I typically look for “help without intrusion,” meaning that the tool makes things faster without me knowing it is there. By that standard, Copilot passes with flying colors. On my home machine, it definitely feels as though I am able to generate code faster, yet I am not constantly “going to the tool” to get that done. The integration with Visual Studio and VS Code is very good.

    That said, the only officially supported IDEs are Visual Studio, VS Code, Vim/Neovim, and the JetBrains IDEs, with the last still in beta. I anticipate more support as the tool matures, but if you are using one of those IDEs heavily, I highly recommend giving Copilot a shot. Everyone needs a Goose.

  • Mi-Light… In the middle of my street?

    With a new Home Assistant instance running, I have a renewed interest in getting everything into Home Assistant that should be in Home Assistant. Now, HA supports a TON of integrations, so this could be a long post. For now, let’s focus on my LED lighting.

    Light It Up!

    In some remodeling efforts, I have made use of LED light strips for some accent lighting. I have typically purchased my strips from superbrightleds.com. The site has a variety of options for power supplies and controllers, and the pricing is on par with other vendors. I opted for the Mi-Light/Mi-Boxer controllers from this site to control these strips.

    Why? Well, truthfully, I did not know any better. When I first installed LEDs, I did not have Home Assistant running, and I was less concerned with integration. I had some false hope that the Mi-Light Wi-Fi gateway would have an appropriate REST API that I could use for future integrations.

    As it turns out, it does not. To make matters worse, since I did not buy everything at the same time, I ended up getting a new version of the Wi-Fi gateway (MiBoxer), which required a different app. So, now, I have some lights on the Mi-Light app, and some lights on the Mi-Boxer app, but no lights in my Home Assistant. Clearly it is time for a change.

    A Myriad of Options

    As I started my Google research, I quickly realized there are a ton of options for controlling LED strips. They range from cloud-controlled options to, quite literally, maker-based options where I would need to solder boards and flash firmware.

    Truthfully, the research was difficult. There were so many vendors with proprietary software or cloud reliance, something I am really trying to avoid. I was hoping for something a bit more “off the shelf,” but with the capability to not rely on the cloud and with built-in integration with Home Assistant. Then I found Shelly.

    The Shelly Trials

    Shelly is a brand from the European company Allterco which focuses on IoT products. They have a number of controllers for smart lighting and control, and their API documentation is publicly available. This allows integrators like Home Assistant to create solid integration packages without trying to reverse engineer calls.

    I found the RGBW controller on Amazon, and decided to buy one to test it out. After all, I did not want to run into the same problem with Shelly that I did with MiLight/MiBoxer.

    Physical Features

    MiLight Controller (top) vs Shelly RGBW Controller (bottom)

    Before I even plugged it in, the size of the unit caught me by surprise. The controller is easily half the size of the MiLight unit, which makes mounting in some of the waterproof boxes I have easier.

    The wiring is pretty simple and extremely flexible. Since the unit will run on AC or DC, you simply attach it to positive and ground from your power source. The RGBW wires from the strip go into the corresponding terminals on the controller, and the strip’s power wire is jumped off of the main terminal.

    Does this mean that strip is always hot? Yes. You could throw a switch or relay on that strip power, but the strip should only draw power if the lights are on. Those lights are controlled by the RGBW wires, so if the Shelly says it is off, then it is off. It’s important to keep power to the controller, though, otherwise your integration won’t be able to communicate with it.

    Connectivity

    Shelly provides an app that lets you connect to the RGBW controller to configure its Wi-Fi settings. The app then lets you categorize the device in their cloud and assign it to a room and scene.

    However, I do not really care about that. I jumped over into Home Assistant and, lo and behold, a new detected integration popped up. When I configured it, I only needed to add the IP of the device. I statically assigned the IP for that controller using its MAC Address, so I let Home Assistant reach out to that IP for integration.

    And that was it. The device appeared in the Shelly integration with all the necessary entities and controls. I was able to control the device with Home Assistant, including changing colors, without any issues.

    Replacement Time

    At about $25 per device, the controllers are not cheap. However, the MiLight controllers, when I bought them, were about $18 each, plus I needed a Wi-Fi controller for every 4 controllers, at $40 each. So, by that math, the MiLight setup was $28 for each individually controlled LED strip with Wi-Fi connectivity. I will have to budget some extra cash to replace my existing controllers with new ones.

    Thankfully, the replacement is pretty simple: remove the MiLight controller, replace it with the Shelly, and set up the Shelly. Once all my MiLight controllers are gone, I can unplug the two MiLight/MiBoxer Wi-Fi boxes I have. So that is two fewer devices on the network!