Category: Open Source

  • Automating Grafana Backups

    After a few data loss events, I took the time to automate my Grafana backups.

    A bit of instability

    It has been almost a year since I moved to a MySQL backend for Grafana. In that year, I’ve gotten a corrupted MySQL database twice now, forcing me to restore from a backup. I’m not sure if it is due to my setup or bad luck, but twice in less than a year is too much.

    In my previous post, I mentioned the Grafana backup utility as a way to preserve this data. My short-sightedness prevented me from automating those backups, however, so I suffered some data loss. After the most recent event, I revisited the backup tool.

    Keep your friends close…

My first thought was to write a quick Azure DevOps pipeline to pull the tool down, run a backup, and copy it to my SAN. I would also have had to include some scripting to clean up old backups.

As I read through the grafana-backup-tool documentation, though, I came across examples of running the tool as a Job in Kubernetes via a CronJob. This presented a unique opportunity: configure the backup job as part of the Helm chart.

    What would that look like? Well, I do not install any external charts directly. They are configured as dependencies for charts of my own. Now, usually, that just means a simple values file that sets the properties on the dependency. In the case of Grafana, though, I’ve already used this functionality to add two dependent charts (Grafana and MySQL) to create one larger application.

    This setup also allows me to add additional templates to the Helm chart to create my own resources. I added two new resources to this chart:

1. grafana-backup-cron – A definition for the CronJob, using the ysde/grafana-backup-tool image.
2. grafana-backup-secret-es – An ExternalSecret definition to pull secrets from HashiCorp Vault and create a Secret for the job.

    Since this is all built as part of the Grafana application, the secrets for Grafana were already available. I went so far as to add a section in the values file for the backup. This allowed me to enable/disable the backup and update the image tag easily.
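As a sketch, the backup section of my values file looks something like this (the property names, tag, and schedule here are illustrative, not copied from my actual chart):

```yaml
# values.yaml (illustrative names and values)
backup:
  enabled: true                  # turn the CronJob on or off
  image:
    repository: ysde/grafana-backup-tool
    tag: "1.4.2"                 # hypothetical tag, bumped as releases come out
  schedule: "0 2 * * *"          # daily at 02:00
```

The CronJob template can then wrap its whole definition in an if guard on backup.enabled and reference backup.image and backup.schedule for the rest.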

    Where to store it?

The other enhancement I noticed in the backup tool was the ability to store files in S3-compatible storage. In fact, their example showed how to connect to a MinIO instance. As fate would have it, I already have a MinIO instance running on my SAN.

    So I configured a new bucket in my MinIO instance, added a new access key, and configured those secrets in Vault. After committing those changes and synchronizing in ArgoCD, the new resources were there and ready.

    Can I test it?

    Yes I can. Google, once again, pointed me to a way to create a Job from a CronJob:

    kubectl create job --from=cronjob/<cronjob-name> <job-name> -n <namespace-name>

I ran the above command to create a test job. And, voilà, I have backup files in MinIO!

    Cleaning up

    Unfortunately, there doesn’t seem to be a retention setting in the backup tool. It looks like I’m going to have to write some code to clean up my Grafana backups bucket, especially since I have daily backups scheduled. Either that, or look at this issue and see if I can add it to the tool. Maybe I’ll brush off my Python skills…
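Until the tool grows a retention option, the cleanup will likely be a small script that lists the bucket and deletes stale objects. Here is a sketch of just the selection logic; the key naming scheme is a hypothetical example, and the actual deletion via the MinIO or S3 client is left out:

```python
from datetime import datetime, timedelta

def keys_to_delete(keys, keep_days=30, now=None):
    """Return backup object keys older than keep_days.

    Assumes keys embed a date like 'grafana-backup-20240115.tar.gz'
    (a hypothetical naming scheme; adjust parsing to the real object names).
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=keep_days)
    stale = []
    for key in keys:
        digits = "".join(ch for ch in key if ch.isdigit())
        try:
            stamp = datetime.strptime(digits[:8], "%Y%m%d")
        except ValueError:
            continue  # no parseable date in the key; leave it alone
        if stamp < cutoff:
            stale.append(key)
    return stale
```

Feeding the result to a delete call against the bucket would finish the job.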

  • Upgrades and Mermaids

    What I thought was going to be a small upgrade to fix a display issue turned into a few nights of coding. Sounds like par for the course.

    MD-TO-CONF

I forked RittmanMead’s md-to-conf project about 6 months ago in order to update the tool for Confluence Cloud’s new API version and to move it to Python 3.11. I use the new tool to create build pipelines that publish Markdown documentation from various repositories into Confluence.

    Why? Well, in the public space, I usually utilize GitHub Pages to publish HTML-based documentation for things, as I did with md-to-conf. But in the corporate space, we tend to use tools like Confluence or Sharepoint as spaces for documentation and collaboration. As it happens, both my previous company and my current one are heavy Confluence users.

But why two places? Well, generally, I have found that engineers don’t like to document things. Asking them to find (or create) the appropriate page in Confluence can be a painful affair. Keeping the documentation in the repository means it is at the engineer’s fingertips, while publishing it to Confluence gives access to team members who don’t want to (or don’t have access to) open GitHub.

    A Small Change…

    As I built an example pipeline for this process, I noticed that the nested lists were not being rendered correctly. My gut reaction was, perhaps the python-markdown library needed an update. So, I updated the library, created a PR, and pushed a new release. And it broke everything.

I am no Python expert, so I am not really sure what happened, since I did not change any code. As best I can deduce, the amount of code in __init__.py was causing the tool to behave differently when run as a module than when run from the wheel-based build. In any case, as I worked to change it, I figured: why not make it better?

So I spent a few evenings pulling code out of __init__.py and putting it into its own class. And, in doing that, SonarCloud failed most of my work because I did not have unit tests for my new code. So, yes, that took me down the rabbit hole of learning pytest and pytest-mock to start getting better coverage on my code.

    But Did You Fix It?!

    As it turns out, the python-markdown update did NOT fix the nested list issues. Apparently, all I really needed to do was make sure I configured python-markdown to use the sane_lists extension.

    So after many small break-fix releases, v1.0.9 is out and working. I fixed the nested lists issue and a few other small bugs found by adding additional unit tests.

    Mermaid Support

    For Confluence, Mermaid support is a paid extension (of course). However, you can use the Mermaid CLI (or, in my case, the docker image) to convert any Mermaid in the MD file into an image, which is then published to Confluence. I built a small pipeline template that runs these two steps. Have a look!
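As a sketch, those two steps in an Azure DevOps pipeline look something like this (the docker image, paths, and the md-to-conf invocation here are schematic; check the actual template for the real details):

```yaml
# Illustrative Azure DevOps steps; paths and invocations are schematic
steps:
  # Step 1: render Mermaid code blocks to images with the mermaid-cli docker image
  - script: >
      docker run --rm -v $(Build.SourcesDirectory)/docs:/data
      minlag/mermaid-cli -i /data/diagram.mmd -o /data/diagram.png
    displayName: Render Mermaid diagrams
  # Step 2: publish the Markdown (with the generated images) to Confluence
  - script: md-to-conf docs/index.md MYSPACE   # invocation schematic
    displayName: Publish to Confluence
```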

    While it would be nice to build the Mermaid to image conversion directly in md-to-conf, I was not able to quickly find a python library to do that work and, well, the mermaid-cli handles this conversion nicely, so I am happy with this particular two-step. Just don’t make me dance.

  • Building a Radar, Part 1 – A New Pattern

    This is the first in a series meant to highlight my work to build a non-trivial application that can serve as a test platform and reference. When it comes to design, it is helpful to have an application with enough complexity to properly evaluate the features and functionality of proposed solutions.

    Backend For Frontend

    Lately, I have been somewhat obsessed with the Backend for Frontend pattern, or BFF. There are a number of benefits to the pattern, articulated well all across the internet, so I will avoid a recap. I wanted an application that took advantage of this pattern so that I could start to demonstrate the benefits.

    I had previously done some work in putting a simple backend on the Zalando tech radar. It is a pretty simple Create/Retrieve/Update/Delete (CRUD) application, but complex enough that it would work in this case.

    Configuring the BFF

    At first, I started looking at converting the existing project, but quickly realized that this is a good time for a clean slate. I followed the MSDN tutorial to the letter to get a working sample application. From there, I moved my existing SPA to the working sample.

    With that in place, I walked through Auth0’s tutorial on implementing Backend for Frontend authentication in ASP.NET Core. In this case, I substituted my Duende Identity Server for the OAuth/Okta instance used in the tutorial. This all worked great, with the notable exception that I had to ensure all my proxies were in order.

    Show Your Work!

    Now, admittedly, my blogging is well behind my actual work, so if you go browsing the repository, it is a little farther ahead than this post. Next in this series, I’ll discuss configuring the BFF to proxy calls to a backend service.

    While the work is ahead of the post, the documentation is WAY behind, so please ignore the README.md file for now. I’ll get proper documentation completed as soon as I can.

  • Mermaids!

    Whether it’s software, hardware, or real world construction, an architect’s life is about drawings. I am always on the lookout for new tools to make keeping diagrams and drawings up-to-date, and, well, I found a mermaid.

    Mermaid.js

    Mermaid is, essentially, a system to render diagrams and visualizations using text and code. According to their site, it is Javascript-based and Markdown-inspired, and allows developers to spend less time managing diagrams and more time writing code.

It currently supports a number of different diagram types, including flow charts, sequence diagrams, and state diagrams. In addition, many providers (including GitHub and Atlassian Confluence Cloud) support Mermaid charts, either free of charge (thanks, GitHub!) or via paid add-on applications (not surprised, Atlassian). I’m sure other providers have support, but those are the two I am using.

    Mermaid in Action

    As of right now, I have only had the opportunity to use Mermaid charts at work, so my examples are not publicly available. You will have to settle for my anecdotes until I get some charts and visualization into some of my open source projects.

At work, though, I have been using Gitgraph diagrams to visualize some of our current and proposed workflows for our development teams. Being able to visualize the Git workflow makes the documentation much easier to understand for our teams.

    Additionally, I created a few sequence diagrams to illustrate a proposed flow for authentication across multiple services and applications. I could have absolutely created these diagrams in Miro (which is our current illustrating tool), but aligning the different boxes and lines would take a tremendous amount of time. By comparison, my Mermaid diagrams were around 20 lines and fully illustrated my scenarios.

    In WordPress?

Obviously, I would really like to be able to use Mermaid charts in my blog to add visualizations to posts. Since Mermaid is Javascript-based, I figured there would be a plugin to render Mermaid code in blog posts.

WP-Mermaid should, in theory, make this work. However… well, it doesn’t. I’m not the only person with the issue. A quick bit of research shows that the issue is how WordPress is “cleaning up” the code that is put in, since it’s not tagged as preformatted (using the pre tag). I was able to hack in a test to see if adding pre and then changing the rendering in the plugin would work. It works just fine…

    And so my to-do list grows. I would like to use Mermaid charts in WordPress, but I have to fix it first.

  • More GitOps Fun!

    I have been curating some scripts that help me manage version updates in my GitOps repositories… It’s about time they get shared with the world.

    What’s Going On?

I manage the applications in my Kubernetes clusters using Argo CD and a number of Git repositories. Most of my ops-prefixed repositories act as “desired state” repositories.

    As part of this management, I have a number of external tools running in my clusters that are installed using their Helm charts. Since I want to keep my installs up to date, I needed a way to update the Helm chart versions as new releases came out.

However, some external tools do not have their own Helm charts. For those, I have been using a Helm library chart from bjw-s. In that case, I have had to manually find new releases and update my values.yaml file.

    While I have had the Helm chart version updates automated for some time, I just recently got around to updating the values.yaml file from external sources. Now is a good time to share!

    The Scripts

    I put the scripts in the ops-automation repository in the Spydersoft organization. I’ll outline the basics of each script, but if you are interested in the details, check out the scripts themselves.

It is worth noting that these scripts require the git and helm command line tools to be installed, in addition to the powershell-yaml PowerShell module.

    Also, since I manage more than one repository, all of these scripts are designed to be given a basePath and then a list of directory names for the folders that are the Git repositories I want to update.

    Update-HelmRepositoryList

This script iterates through the given folders to find the Chart.yaml files within them. For every dependency in the found chart files, it adds the dependency’s repository to the local Helm configuration if the URL is not already registered.

    Since I have been running this on my local machine, I only have to do this once. But, on a build agent, this script should be run every time to make sure the repository list contains all the necessary repositories for an update.
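The scripts themselves are PowerShell, but the core idea is simple. Here is a Python sketch (illustrative only, not the actual script) of collecting the dependency repositories that still need a helm repo add:

```python
def repos_to_add(chart_files, existing_urls):
    """Collect dependency repo URLs not yet known to the local Helm config.

    chart_files: iterable of parsed Chart.yaml contents (dicts).
    existing_urls: URLs already reported by `helm repo list`.
    """
    known = set(existing_urls)
    missing = []
    for chart in chart_files:
        for dep in chart.get("dependencies", []):
            url = dep.get("repository")
            # OCI and file:// dependencies do not need `helm repo add`
            if url and url.startswith("http") and url not in known:
                known.add(url)
                missing.append(url)
    return missing
```

Each URL returned would then be passed to helm repo add before running updates.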

    Update-HelmCharts

This script iterates through the given folders to find the Chart.yaml files within them. For every dependency, the script determines whether an updated version of the dependency is available.

    If there is an update available, the Chart.yaml file is updated, and helm dependency update is run to update the Chart.lock file. Additionally, commit comments are created to note the version changes.

For each Chart.yaml file, a call to Update-FromAutoUpdate is made to perform additional updates if necessary.

    Update-FromAutoUpdate

    This script looks for a file called auto-update.json in the path given. The file has the following format:

    {
        "repository": "redis-stack/redis-stack",
        "stripVFromVersion": false,
        "tagPath": "redis.image.tag"
    }

The script looks for the latest release of the repository on GitHub, using GitHub’s tag_name as the version. If the latest release is newer than the version currently at tagPath in values.yaml, the script updates that value to the new version. The script returns an object indicating whether or not an update was made, as well as a commit comment noting the version jump.
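The comparison at the heart of that flow can be sketched like this (Python for illustration; the actual script is PowerShell, and I am assuming plain dotted numeric tags):

```python
def needs_update(current, latest_tag, strip_v=True):
    """Return the new version string if latest_tag is newer than current, else None.

    strip_v mirrors the stripVFromVersion flag in auto-update.json.
    Assumes plain dotted numeric versions (e.g. 7.2.0); prerelease
    tags would need a real semver library.
    """
    latest = latest_tag.lstrip("v") if strip_v else latest_tag

    def parts(version):
        return tuple(int(p) for p in version.split(".") if p.isdigit())

    return latest if parts(latest) > parts(current.lstrip("v")) else None
```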

Right now, the auto-update only works for images that come from GitHub releases. I have one item (ProGet) that needs to query a Docker registry API directly, but that will be a future enhancement.

    Future Tasks

    Now that these are automated tasks, I will most likely create an Azure Pipeline that runs weekly to get these changes made and committed to Git.

    I have Argo configured to not auto-sync these applications, so even though the changes are made in Git, I still have to manually apply the updates. And I am ok with that. I like to stagger application updates, and, in some cases, make sure I have the appropriate backups before running an update. But this gets me to a place where I can log in to Argo and sync apps as I desire.

  • Git Out! Migrating to GitHub

    Git is Git. Wherever it’s hosted, the basics are the same. But the features and community around tools has driven me to make a change.

    Starting Out

    My first interactions with Git happened around 2010, when we decided to move away from Visual SourceSafe and Subversion and onto Git. At the time, some of the cloud services were either in their infancy or priced outside of what our small business could absorb. So we stood up a small Git server to act as our centralized repository.

    The beauty of Git is that, well, everyone has a copy of the repository locally, so it’s a little easier to manage the backup and disaster recovery aspects of a centralized Git server. So the central server is pretty much a glorified file share.

    To the Cloud!

    Our acquisition opened up access to some new tools, including Bitbucket Cloud. We quickly moved our repositories to Bitbucket Cloud so that we could decommission our self-hosted server.

    Personally, I started storing my projects in Bitbucket Cloud. Sure, I had a GitHub account. But I wasn’t ready for everything to be public, and Bitbucket Cloud offered unlimited private repos. At the time, I believe GitHub was charging for private repositories.

    I also try to keep my home setup as close to work as possible in most cases. Why? Well, if I am working on a proof of concept that involves specific tools and their interaction with one another, it’s nice to have a sandbox that I can control. My home lab ecosystem has evolved based on the ecosystem at my job:

    • Self-hosted Git / TeamCity
    • Bitbucket Cloud / TeamCity
    • Bitbucket Cloud / Azure DevOps
    • Bitbucket Cloud / Azure DevOps / ArgoCD

    To the Hub!

    Even before I changed jobs, a move to GitHub was in the cards, both personally and professionally.

    Personally, as a community, I cannot think of a more popular platform than GitHub for sharing and finding open/public code. My GitHub profile is, in a lot of ways, a portfolio of my work and contributions. As I have started to invest more time into open source projects, my portfolio has grown. Even some of my “throw away” projects are worth a little, if only as a reference for what to do and what not to do.

    Professionally, GitHub has made a great many strides in its Enterprise offering. Microsoft’s acquisition only pushed to give GitHub access to some of the CI/CD Pipeline solutions that Azure DevOps has, coupled with the ease of use of GitHub. One of the projects on the horizon at my old company was to identify if GitHub and GitHub actions could be the standard for build and deploy moving forward.

With my move, we have a mixed ecosystem: GitHub + Azure DevOps Pipelines. I would like to think that, long term, I could get to GitHub + GitHub Actions (at least at home), but the interoperability of Azure DevOps Pipelines with Azure itself makes it hard to migrate completely. So, with a new professional ecosystem in front of me, I decided it was time to drop Bitbucket Cloud and move to GitHub for everything.

    Organize and Move

Moving the repos is, well, simple. Using GitHub’s Import functionality, I pointed it at my old repositories, entered my Bitbucket Cloud username and personal access token, and GitHub imported them.

    This simplicity meant I had time to think about organization. At this point, I am using GitHub for two pretty specific types of projects:

    • Storage for repositories, either public or private, that I use for my own portfolio or personal projects.
    • Storage for repositories, all public, that I have published as true Open Source projects.

    I wanted to separate the projects into different organizations, since the hope is the true Open Source projects could see contributions from others in the future. So before I started moving everything, I created a new GitHub organization. As I moved repositories from BitBucket Cloud, I put them in either my personal GitHub space or this new organization space, based on their classification above. I also created a new SonarCloud organization to link to the new GitHub organization.

    All Moved In!

It really only took about an hour to move all of my repositories and re-configure my automation to point to GitHub. I set up new scans in the new SonarCloud organization, re-pointed the actions correctly, and everything seems to be working just fine.

    With all that done, I deleted my BitBucket Cloud workspaces. Sure, I’m still using Jira Cloud and Confluence Cloud, but I am at least down a cloud service. Additionally, since all of the projects that I am scanning with Sonar are public, I moved them to SonarCloud and deleted my personal instance of SonarQube. One less application running in the home lab.

  • Stacks on Stacks!

    I have Redis installed at home as a simple caching tool. Redis Stack adds on to Redis OSS with some new features that I am eager to start learning. But, well, I have to install it first.

    Charting a Course

    I have been using the Bitnami Redis chart to install Redis on my home K8 cluster. The chart itself provides the necessary configuration flexibility for replicas and security. However, Bitnami does not maintain a similar chart for redis-stack or redis-stack-server.

    There are some published Helm charts from Redis, however, they lack the built-in flexibility and configurability that the Bitnami charts provide. The Bitnami chart is so flexible, I wondered if it was possible to use it with the redis-stack-server image. A quick search showed I was not the only person with this idea.

    New Image

Gerk Elznik posted last year about deploying Redis Stack using Bitnami’s Redis chart. Based on this post, I attempted to customize the Bitnami chart to use the redis-stack-server image. Gerk’s post indicated that a new script was needed to successfully start the image. That seemed like an awful lot of work, and, well, I really didn’t want to do that.

    In the comments of Gerk’s post, Kamal Raj posted a link to his version of the Bitnami Redis Helm chart, modified for Redis Stack. This seemed closer to what I wanted: a few tweaks and off to the races.

    In reviewing Kamal’s changes, I noticed that everything he changed could be overridden in the values.yaml file. So I made a few changes to my values file:

    1. Added repository and tag in the redis.image section, pointing the chart to the redis-stack-server image.
    2. Updated the command for both redis.master and redis.replica to reflect Kamal’s changes.

    I ran a quick template, and everything looked to generate correctly, so I committed the changes and let ArgoCD take over.

    Nope….

ArgoCD synchronized the StatefulSet as expected, but the pod didn’t start. The error in the Kubernetes events was about “command not found.” So I started digging into the “official” Helm chart for the redis-stack-server image.

    That chart is very simple, which means it was pretty easy to see that there was no special command for startup. So, I started to wonder if I really needed to override the command, or simply use the redis-stack-server in place of the default image.

    So I commented out the custom overrides to the command settings for both master and replica, and committed those changes. Lo and behold, ArgoCD synced and the pod started up great!

    What Matters Is, Does It Work?

    Excuse me for stealing from Celebrity Jeopardy, but “Gussy it up however you want, Trebek, what matters is, does it work?” For that, I needed a Redis client.

    Up to this point, most of my interactions with Redis have simply been through the redis-cli that’s installed on the image. I use kubectl to get into the pod and run redis-cli in the pod to see what keys are in the instance.

Sure, that works fine, but as I start to dig into Redis a bit more, I need a client that lets me visualize the database a little better. As I was researching Redis Stack, I came across RedisInsight and thought it was worth a shot.

After installing RedisInsight, I set up port forwarding from my local machine into the Kubernetes service. This lets me connect directly to the Redis instance without creating a long-lived NodePort service or some other forwarding mechanism. Since I only need access to the Redis server within the cluster, this helps me keep it secure.

    I got connected, and the instance shows. But, no modules….

    More Hacking Required

    As it turns out, the Bitnami Redis chart changes the startup command to a script within the chart. This allows some of the flexibility, but comes at the cost of not using the entrypoint scripts that are in the image. Specifically, the entrypoint script for redis-stack-server, which uses the command line to load the modules.

    Now what? Well, there’s more than one way to skin a cat (to use an arcane and cruel sounding metaphor). Reading through the Redis documentation, you can also load modules through the configuration. Since the Bitnami Redis chart allows you to add to the configuration using the values.yaml file, that’s where I ended up. I added the following to my values.yaml file:

    master:
        configuration: | 
          loadmodule /opt/redis-stack/lib/redisearch.so MAXSEARCHRESULTS 10000 MAXAGGREGATERESULTS 10000
          loadmodule /opt/redis-stack/lib/redistimeseries.so
          loadmodule /opt/redis-stack/lib/rejson.so
          loadmodule /opt/redis-stack/lib/redisbloom.so
    

    With those changes, I now see the appropriate modules running.

    Lots Left To Do

    As I mentioned, this seems pretty “hacky” to me. Right now, I have it running, but only in standalone mode. I haven’t had the need to run a full Redis cluster, but I’m SURE that some additional configuration will be required to apply this to running a Redis Stack cluster. Additionally, I could not get the Redis Gears module loaded, but I did get Search, JSON, Time Series, and Bloom installed.

    For now, that’s all I need. Perhaps if I find I need Gears, or I want to run a Redis cluster, I’ll have to revisit this. But, for now, it works. The full configuration can be found in my non-production infrastructure repository. I’m sure I’ll move to production, but everything that happens here happens in non-production first, so keep tabs on that if you’d like to know more.

  • One Task App to Rule them All!

    New job means new systems, which prompted me to reevaluate my task tracking.

    State of the Union

For the last, oh, decade or more, I have been using the ClearContext Outlook plugin for task management built into Outlook. And I really like it. I have become proficient with the keyboard shortcuts that let me quickly review, file, and organize my emails. Their “Email to Task and Appointment” feature is great for turning emails into tasks, and allows me to quickly follow the “Getting Things Done” methodology by David Allen.

    I use Gmail for personal emails, though, and I had no real drive to find a GTD pattern for Gmail. And then I changed jobs.

    Why Switch?

    I started using Microsoft To Do for personal tasks, displaying them on my office television via MagicMirror. However, as ClearContext was my muscle memory, I never switched over to using To Do for work tasks. So I had two areas where tasks were listed: In Microsoft To Do for personal tasks, and in Outlook for professional ones.

My new company uses Google Workspace. This has driven two changes:

    1. Find a GTD workflow for Gmail to allow me to get to “zero inbox.”
    2. Find a Google Tasks module for Magic Mirror.

    Regarding #1, this will be a “trial and error” type of thing. I have started writing some filters and such, which should help with keeping a zero inbox.

    As for #2, well, it looks like it is time for some MagicMirror module work.

    MMM-GoogleTasks

    When I started looking, there were two MMM-GoogleTasks repositories in GitHub, both related to one another. I forked the most recently updated one and began poking around.

This particular implementation can show tasks from only one Google Tasks list, and it shows all tasks on that list. Microsoft To Do has the notion of a “planned” view, which only shows incomplete tasks with a due date. I contributed a change to MMM-MicrosoftToDo to allow for this view in that MagicMirror module, so I started down the path of updating MMM-GoogleTasks.

I could not help but start converting this project to Typescript, which means, most likely, it will never get merged back. However, I appreciate the ability to create Typescript modules and classes but ultimately have everything roll up into the three files the MagicMirror module system expects.

    So I got the planned view added to my fork of MMM-GoogleTasks. Now what? I have two Gmail accounts and my work account, and I would like to see tasks from all three of those accounts on my MagicMirror display. Unfortunately, I do not have a ton of time to refactor for multiple account support right now… so it made the issues list.

    First Impressions

    It has been about two weeks since I switched over. I am making strides in finding a pattern that works for me to keep me at zero inbox, both in my personal inbox as well as my professional one. I am sure I will run into some hiccups, but, for now, things look good.

  • SonarCloud has become my Frank’s Red Hot…

    … I put that $h!t on everything!

A lot has been made in recent weeks about open source and its effects on all that we do in software. And while we all debate the ethics of HashiCorp’s decision to turn to a “more closed” licensing model and question the subsequent fork of their open source code, we should remember that there are companies who offer their cloud solutions free for open source projects.

But first, GitHub

GitHub has long been the mecca for open source developers, and even under Microsoft’s umbrella, that does not look to be slowing down. Things like CI/CD through GitHub Actions and package storage are free for public repositories. So, without paying a dime, you can store your open source code, get automatic security and version updates, build your code, and store build artifacts all in GitHub. All of this is built on the back of a great ecosystem for pull request reviews and checks. For my open source projects, it provides great visibility into my code and puts MOST of what I want in one place.

    And then SonarQube/Cloud

    SonarSource’s SonarQube offering is a great way to get static code analysis on your code. While their community edition is missing features that require an enterprise license, their cloud offering provides free analysis of open source projects.

    With that in mind, I have started to add my open source projects to SonarCloud.io. Why? Well, first, it does give me some insight into where my code could be better, which keeps me honest. Second, on the off chance that anyone wants to contribute to my projects, the Sonar analysis will help me quickly determine the quality of the incoming code before I accept the PR.

Configuring the SonarCloud integration with GitHub even provides a sonarcloud bot that reports on the quality gate for pull requests. What does that mean? It means I get a great picture of the quality of the incoming code:

    What Next?

    I have been spending a great deal of time on the Static Code Analysis side of the house, and I have been reasonably impressed with SonarQube. I have a few more public projects which will receive a SonarCloud instance, but at work, it is more about identifying the value that can come from this type of scanning.

So, what is that value, you may ask? Enhancing and automating your quality gates is always beneficial, as it streamlines your developer workflow. It also sets expectations: engineers know that bad/smelly code will be caught well before a pull request is merged.

If NOTHING else, SonarQube allows you to track your test coverage and ensure it does not trend backwards. If we did nothing else, we should at least ensure that we continue to cover the new code we write, even if those before us did not.

  • Taking my MagicMirror modules to Typescript

    It came as a bit of a shock that I have been running MagicMirror in my home office for almost two years now. I even wrote two modules, one to display Prometheus alerts and one to show Status Page status.

In the past few years, I have become more and more comfortable with Typescript, so I wanted to see if I could convert my modules to it.

    Finding an example

    As is the case with most development, the first step was to see if someone else had done it. As it turns out, a few folks have done it.

    I stumbled across Michael Scharl’s post on dev.to, which covered his Typescript MagicMirror module. In the same search, I ran across a forum post by Jalibu that focused a little more on the nitty-gritty, including his contribution of the magicmirror-module types to DefinitelyTyped.

    Migrating to Typescript

    Ultimately, the goal was to generate the necessary module files for MagicMirror through transpilation using Rollup (see below), but first I needed to move my code and convert it to Typescript. I created a src folder, moved my module file and node_helper into it, and changed their extensions to .ts.

    From there, I split things up into a more logical structure, taking advantage of Typescript features and ESNext-based module imports. Since it would all be transpiled into Javascript anyway, I could use Typescript's module options to clean up my code.
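    As a sketch of what that splitting looks like, logic that previously lived inline in the module file can move into its own Typescript module and be imported with ESNext syntax (the file, type, and function names here are hypothetical, not taken from my actual module):

    ```typescript
    // src/AlertFormatter.ts — a hypothetical helper module split out of the main file

    // Shape of an alert as the module displays it
    export interface Alert {
      name: string;
      severity: string;
    }

    // Format an alert for display in the module's DOM output
    export function formatAlert(alert: Alert): string {
      return `[${alert.severity.toUpperCase()}] ${alert.name}`;
    }
    ```

    The core module file can then pull it in with `import { formatAlert } from "./AlertFormatter";`, and Rollup will inline the helper during the bundle step.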

    My modules already had a good amount of development packages around linting and formatting, so I updated all of those and added packages necessary for Typescript linting.

    A Note on Typing

    Originally, following Michael Scharl’s sample code, I had essentially copied the module-types.ts file from the MagicMirror repo and renamed it ModuleTypes.d.ts in my own code. I did not particularly like that method, as it required me to have extra code in my module, and I would have to update it as the MagicMirror types changed.

    Jalibu’s addition of the @types/magicmirror-module package simplified things greatly. I installed the package and imported what I needed.

    import * as Log from "logger";
    import * as NodeHelper from "node_helper";

    The package includes a Module namespace that makes registering your module easy:

    Module.register<Config>("MMM-PrometheusAlerts", {
      // Module implementation
    });

    A few tweaks to the tsconfig.json file, and the tsc command was running!
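    For reference, a minimal tsconfig.json for a setup like this might look something like the following sketch. The exact compiler options will vary by project, and the types entry assumes the @types/magicmirror-module package is installed:

    ```json
    {
      "compilerOptions": {
        "target": "ES2017",
        "module": "ESNext",
        "moduleResolution": "node",
        "strict": true,
        "esModuleInterop": true,
        "types": ["magicmirror-module"]
      },
      "include": ["src/**/*.ts"]
    }
    ```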

    Using Rollup

    The way that MagicMirror is set up, the modules generally need the following:

    • Core Module File, named after the module (<modulename>.js)
    • Node Helper (node_helper.js) that represents a Node.js backend task. It is optional, but I always seem to have one.
    • CSS file, if needed, containing any custom styling for the HTML generated in the Core Module file.

    Michael Scharl’s post detailed his use of Rollup to create these files; however, as the post is a few years old, it required a few updates. Most of the work was installing the scoped Rollup packages (@rollup), but I also removed the banner plugin.

    I configured my Rollup in a ‘one to one’ fashion, mapping my core module file (src/MMM-PrometheusAlerts.ts) to its output file (MMM-PrometheusAlerts.js) and my node helper (src/node_helper.ts) to its output file (node_helper.js). Rollup would use the Typescript transpiler to generate the necessary Javascript files, bringing in any of the necessary imports.

    Taking a cue from Jalibu, I used the umd output format for node_helper, since it will be running on the backend, but iife for the core module, since it will be included in the browser.
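    As a sketch, a rollup.config.js implementing that one-to-one mapping might look like the following. The plugin choice and option values are illustrative (using @rollup/plugin-typescript), not my exact configuration:

    ```javascript
    // rollup.config.js — illustrative one-to-one mapping of inputs to outputs
    import typescript from "@rollup/plugin-typescript";

    export default [
      {
        // Core module: included in the browser, so bundle as an IIFE
        input: "src/MMM-PrometheusAlerts.ts",
        output: { file: "MMM-PrometheusAlerts.js", format: "iife" },
        plugins: [typescript()],
      },
      {
        // Node helper: runs in the Node.js backend, so UMD works here
        input: "src/node_helper.ts",
        output: { file: "node_helper.js", format: "umd", name: "node_helper" },
        plugins: [typescript()],
      },
    ];
    ```

    Each entry in the exported array is an independent bundle, which is what gives the one-to-one input/output mapping.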

    Miscellaneous Updates

    As I was looking at code that had not been touched in almost two years, I took the opportunity to update libraries. I also switched over to Jest for testing, as I am certainly more familiar with it, and I need the ability to mock to complete some of my tests. I also figured out how to implement a SASS compiler as part of Rollup, so that I could generate my module CSS as well.

    To make things easier on anyone who might use this module, I added a postinstall script that performs the build task. This generates the necessary Javascript files for MagicMirror using Rollup.
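    In package.json terms, that amounts to something like the following (the script names are illustrative; npm runs postinstall automatically after npm install):

    ```json
    {
      "scripts": {
        "build": "rollup -c",
        "postinstall": "npm run build"
      }
    }
    ```

    This way, anyone who clones the module and runs npm install gets the transpiled Javascript files without a separate build step.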

    One down, one to go

    I converted MMM-PrometheusAlerts, but I still need to convert MMM-StatusPageIo. Sadly, the latter may require some additional changes, since StatusPage added paging to their APIs and I am not yet in full compliance… I’ve never had enough incidents that I needed to page. But it has been on my task list for a bit now, and moving to Typescript might give me the excuse I need to drop back in.