• WSL for Daily Use

    Windows Subsystem for Linux (WSL) lets me do what I used to do in college: run Windows and Linux on the same machine. In 1999, that meant dual booting. Today, with hypervisors everywhere and far more computing power, I just run a VM, often without even knowing I’m running one.

    Docker Started It All

    When WSL first came out, I read up on the topic, but never really stepped into it in earnest. At the time, I had no real use for a Linux environment on my desktop. As my home lab grew and I dove into the world of Kubernetes, I started to use Linux systems more.

    With that, my familiarity with, and love of, the command line started to come back. Sure, I use PowerShell a lot, but there’s nothing nerdier than running headless Linux servers. What really threw me back into WSL was some of the ELT work I did at my previous company.

    Diving In

    It was much easier to get the various Python tools running in Linux, including things like the Anaconda virtual environment manager. At first, I was using Windows to clone and edit the files with VS Code, and from WSL I accessed them through the /mnt/ paths in Ubuntu, which expose my Windows drives.

    In some reading, I came across the guide for using VS Code with WSL. It describes how to use VS Code to connect to WSL as a remote computer and edit the files “remotely.” Which sounds weird because it’s just a VM, but it’s still technically remote.

    With VS Code set up for remote access, I stopped using the /mnt/ folders and started cloning repositories within the WSL Ubuntu instance itself.
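    That workflow is only a couple of commands. A rough sketch, assuming the Remote - WSL extension is installed and using a placeholder repository URL:

```shell
# Run inside the WSL Ubuntu shell. Cloning to the Linux filesystem
# (instead of /mnt/c) is noticeably faster for git and build tools.
git clone https://github.com/example/my-project.git ~/projects/my-project
cd ~/projects/my-project
code .   # first run installs the VS Code Server in WSL, then opens the folder
```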

    Making It Pretty

    I am a huge fan of a pretty command line. I have been using Oh My Posh as an enhancement to PowerShell and PowerShell Core for some time. However, Oh My Posh works with any shell, so I got to work installing it in WSL.

    As it turns out, in this case, I did use the /mnt mount path to share my Oh My Posh settings file between my Windows profile and the WSL Ubuntu box. In this way, I have the same Oh My Posh settings, regardless of whether I’m using Windows PowerShell/PowerShell Core or WSL Ubuntu.
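    The sharing itself boils down to one line in the WSL shell profile. A sketch, with a hypothetical Windows username and theme file name; adjust the paths to your own setup:

```shell
# Appended to ~/.bashrc in WSL Ubuntu. The theme file lives on the
# Windows side, reached through the /mnt mount, so both shells read
# the same settings.
eval "$(oh-my-posh init bash --config /mnt/c/Users/matt/.oh-my-posh/theme.omp.json)"
```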

    Bringing It All Together

    How can I get to WSL quickly? Well, through Windows Terminal! Windows Terminal supports a number of different shells, including the standard command prompt, PowerShell, and PowerShell Core. It also lets you start a WSL session via a Terminal profile.

    This integration means my Windows Terminal is now my “go to” window for most tasks, whether in WSL or on my local box.

  • Don’t Mock Me!

    I spent almost two hours on a unit test issue yesterday, walking away with the issue unresolved and myself frustrated. I came back to it this morning and fixed it in 2 minutes. Remember, you don’t always need a mock library to create fakes.

    The Task at Hand

    In the process of removing some obsolete warnings from our builds, I came across a few areas where the change was less than trivial. Before making the changes, I decided it would be a good idea to write some unit tests to ensure that my changes did not affect functionality.

    The class to be tested, however, took IConfiguration in the constructor. Our current project does not make use of the Options pattern in .NET Core, meaning anything that needs configuration values has to carry around a reference to IConfiguration and then extract the values manually. Yes, I will want to change that, but not right now.

    So, in order to write these unit tests, I had to create a mock of IConfiguration that returned the values this class needed. Our project currently uses Telerik JustMock, so I figured it would be a fairly easy task to mock. However, I ran into a number of problems that had me going down the path of creating multiple mock classes for different interfaces, including IConfigurationSection. I immediately thought “There has to be a better way.”

    The Better Way

    Some quick Google research led me to this gem of a post on Stack Overflow. In all my time with .NET configuration, I never knew about or used the AddInMemoryCollection extension. And that led me to the simplest solution: create a “real boy” instance of IConfiguration with the properties my class needs, and pass it to the class under test.
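    In code, the idea looks something like the sketch below. The WidgetService class and the configuration keys are hypothetical stand-ins; the only real API here is ConfigurationBuilder.AddInMemoryCollection from Microsoft.Extensions.Configuration:

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

// Build a real IConfiguration from an in-memory dictionary instead of
// mocking the interface. Colons denote section nesting, which also
// covers values that would otherwise need an IConfigurationSection mock.
IConfiguration config = new ConfigurationBuilder()
    .AddInMemoryCollection(new Dictionary<string, string?>
    {
        ["ConnectionStrings:Default"] = "Server=localhost;Database=Test",
        ["FeatureFlags:EnableWidgets"] = "true",
    })
    .Build();

// Pass the genuine article to the (hypothetical) class under test.
var sut = new WidgetService(config);
```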

    I suppose this is “dumb mocking” in the sense that it doesn’t use libraries written and dedicated to mocking objects. But it gets the job done in the simplest way possible.

  • New Cert, now what?

    I completed my PADI Advanced Open Water certification over the weekend. The question is, what’s next?

    Advanced Open Water

    The Advanced Open Water certification is a continuation of the Open Water training with a focus on opening up new dive sites, primarily by expanding the available depth. My certification focused on five areas:

    1. Deep Dive (below 18m/60ft)
    2. Navigation
    3. Buoyancy
    4. Boat
    5. Drift

    The Boat and Drift specialties were a fun look back at pretty much the only dive types I’ve ever done: of my 17 official dives, 16 have been boat AND drift dives. Truthfully, I’d be a little anxious if I had to find an anchored dive boat by myself.

    The Deep specialty opens up a number of new dive sites below 60 feet, and taught me a little more about the pressure at those depths. On paper, I see the math regarding depth and atmosphere, but it’s incredible to see just how much difference there is between 20m and 30m in terms of pressure. I also learned a bowline knot, and how to tie one at 90 feet.

    Navigation was interesting, although pretty easy considering the environment. Swimming a square with a compass is much different when you can see the full square in the 30+ feet of visibility of the Caribbean versus the 6 feet of visibility in a quarry. Considering I’ve only ever done drift dives, precise navigation has been somewhat less important. I will have to work on my orienteering on dry land so that I’m more comfortable with a compass.

    Buoyancy was, by far, the most useful of the specialty dives. I’ve been pretty consistently using 10 kilograms (22 lbs.) since I got certified. However, I forgot that, when I did my certification dives, I was wearing a 3mm shorty wetsuit (short sleeves, short legs). Since then, I’ve shed the wetsuit since I’ve been diving in warmer waters. However, I didn’t shed the weight. Through some trial and error, my instructor helped me get down to diving with 4 kilograms (8.8 lbs.). My last 4 dives were at that weight and there was a tremendous difference. Far less fiddling with my BCD to adjust buoyancy, and a lot more opportunity to use breathing and thrust to control depth.

    So many specialties…

    PADI offers a TON of specialty courses. Wreck Diving, Night Diving, Peak Performance Buoyancy, Search and Rescue, and so many more. I’m interested in a number of them, so the question really is, what’s my plan?

    Right now, well, I think I am going to review their specialty courses and make a list. As for “big” certifications, Rescue Diver seems like the next logical step, but it requires a number of specialties first. However, there is something to be said for just diving. Every dive has increased my confidence in the basics, making every dive more enjoyable. So I don’t anticipate every trip being a “certification trip.” Sometimes, it’s just nice to dive!

  • Using Architectural Decision Records

    Recently, I was exposed to Architectural Decision Records (ADRs) as a way to document software architecture decisions quickly and effectively. The more I’ve learned, the more I like.

    Building Evolutionary Architectures

    Architecture, software or otherwise, is typically a tedious and time-consuming process. We must design to meet existing requirements, but have to anticipate potential future requirements without creating an overly complex (i.e. expensive) system. This is typically accomplished through a variety of patterns which aim to decouple components and make them easily replaceable.

    Replace it?!? Everything ages, even software. If I have learned one thing, it is that the code you write today “should” not exist in its same form in the future. All code needs to change and evolve as the platforms and frameworks we use change and evolve. Building Evolutionary Architectures is a great read for any software engineer, but I would suggest it to be required reading for any software architect.

    How architecture is documented and communicated has evolved in the last 30 years. The IEEE published an excellent white paper outlining how early architecture practices have evolved into these ADRs.

    But what IS it?

    Architectural Decision Records (ADRs) are, quite simply, records of decisions made that affect the architecture of the system. ADRs are simple text documents with a concise format that anyone (architect, engineer, or otherwise) can consume quickly. ADRs are stored next to the code (in the same repository), so they are subject to the same peer review process. Additionally, ADRs add the ability to track decisions and changes over time.

    Let’s consider a simple example, taken straight from the adr-manager tool used to create ADRs in a GitHub repository. The context/problem is pretty simple:

    We want to record architectural decisions made in this project. Which format and structure should these records follow?

    The document then outlines some potential options for tracking architectural decisions. In the end, the document states that MADR 2.1.2 will be used, and outlines the rationale behind the decision.

    It may seem trivial, but putting this document in the repository, accessible to all, gives great visibility to the decision. Changes, if any, are subject to peer review.

    Now, in this case, say 6 months down the road the team decides that they hate MADR 2.1.2 and want to use Y-Statements instead. That’s easy: create a new ADR that supersedes the old one. The new ADR should contain the same kind of content: what’s the problem, what are our options, and what’s the final decision and rationale. Link the two so that it’s easy to see related ADRs, and you are ready to go.
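    A superseding record in MADR style might look something like this sketch (file names, dates, and numbering are hypothetical):

```
# Use Y-Statements for Architectural Decision Records

* Status: accepted
* Date: 2024-01-15
* Supersedes: [ADR-0001 - Use MADR 2.1.2](0001-use-madr.md)

## Context and Problem Statement

We record architectural decisions in this project, but the team finds
the MADR template too heavy for small decisions. Which format should
new records follow?

## Considered Options

* Keep MADR 2.1.2
* Y-Statements

## Decision Outcome

Chosen option: Y-Statements, because a single structured sentence is
easier to write and review for the size of decisions we make.
```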

    Tools of the Trade

    There is an ADR GitHub organization that is focused on standardizing some of the nomenclature around ADRs. The page includes links to several articles and blog posts dedicated to describing ADRs and how to implement and use them within your organization. Additionally, the organization has started to collect and improve upon some of the tooling for supporting an ADR process. One that I found beneficial is ADR-Manager.

    ADR-Manager is a simple website that lets you interact with your GitHub repositories (using your own GitHub credentials) to create and edit ADRs. Through your browser, you connect to your repositories and view/edit ADR documents. It generates MADR-styled files within your repository which can be committed to branches with appropriate comments.

    Make it so…

    As I work to get my feet under me in my new role, the idea of starting to use ADRs has gained some traction. As we continue to scale, having these decision records readily available for new teams will be important to maintain consistency across the platform.

    As I continue to work through my home projects, I will use ADRs to document decisions I make in those projects. While no one may read them, it’s a good habit to build.

  • If there is a problem…

    My son’s broken Stanley handle led me down the path of designing and printing a new handle. Overkill? Probably. But fun nonetheless.

    What’d You Do?


    My son came home from school last week with a bit of a sob story. Someone knocked his Stanley Quencher 40 ounce Tumbler off his desk, and it landed in such a way that the handle broke off. It was not that the screws came loose: the handle mounts just busted off.

    It was a relatively new purchase, so I told him to contact Stanley about a warranty. To Stanley’s credit, they shipped him a new one, so he’s got a new one and can move on with his life. But now we have a perfectly functional Stanley with no handle. This felt like a perfect opportunity to design a new one!

    It’s Prototyping Time!

    I checked Grabcad.com for an existing model of the Stanley 40 ounce Quencher, but alas, there was not one. I also checked Thangs.com and Printables.com, but nothing popped up. So, I took a few measurements from the Stanley I had in hand and went about mocking up a quick model of the Stanley. It is not at all pretty, but it gives me the proper dimensions for my handle model. I left the handle off because, well, mine does not have the handle, hence the reason for my work.

    Bare Stanley Quencher – No Handle

    The design I had in mind is similar to some existing Yeti handles that I have seen. While the official Yeti ones have a single ring, the Stanley is not flared as it goes up, so I wanted a double-ring design to take advantage of the lower flare on the Stanley.

    With a double ring, I needed to break this out into a multi-part component in order to print it. In order to add some strength to the finished product, I added dovetails at the joints.

    I ended up with this:

    Stanley Handle Rings
    Stanley Handle

    Off to the Printer!

    I am still mulling over purchasing a 3D printer, so for now, I rely on Pittsburgh3DPrints to have my prints completed. I sent my design over and had the print turned around in a few days. Normally I can just pick it up, but they are out of the shop this week, so I had to wait patiently for the USPS to deliver it.

    How did it turn out?

    Pretty much exactly as I had planned… Which isn’t always the case, but I do love it when a plan comes together.

    The dovetail joints are actually pretty tight on their own, but I’m going to glue them up for added strength and to remove the worry of them coming apart and losing pieces.

    Getting closer…

    As I design and print more, the list of “pros” continues to grow in the “should I buy a 3D printer” debate I am having with myself. My modeling is, right now, serviceable, but I am getting better with each tutorial and print. Perhaps one day I’ll convince myself to get my own printer.

  • Streamlining my WordPress Install

    My professional change served as a catalyst for some personal change. Nothing drastic, just messing with this site a little.

    New Look!

    I have been sitting on the Twenty Twenty-One theme for a few years now. When it comes to themes, I just want something that looks nice and is low maintenance, and it served its purpose well. I skipped Twenty Twenty-Two because, well, I did not really want to dig into changing it to my personal preference.

    The latest built-in theme, Twenty Twenty-Three, is nice and clean, and pretty close to what I was using in Twenty Twenty-One. I went ahead and activated that one, and chose the darker style to match my soul…. I am kidding. I appreciate a good dark theme, so you know my site will reflect that.

    New Plugins!

    From time to time, I will make sure that my plug-ins are updated and that the WordPress Site Health page does not have any unexpected warnings. This time around, I noticed that I had no caching detected.

    But, wait…. I have Redis Object Cache installed and running. And, literally, as soon as I read that plugin name, I realized that “object cache” is not the same as “browser cache.” So I started looking for a browser cache plugin.

    I landed on WP-Optimize from UpdraftPlus. The free version is sufficient for what I need, and the setup was very easy. I got the plugin installed, and just before I ran the optimization, I noticed the warning to backup the DB using UpdraftPlus. And that’s when I realized, my backup process was, well, non-existent.

    In the past, I have used the All-in-One WP Migration plugin to backup/restore. However, the free version is limited to a size that I have long surpassed, and there is no way that I saw to automate backups. Additionally, the “backups” are stored in the same storage location, so unless I manually grabbed them, they did not go offsite.

    UpdraftPlus provides scheduled backups as well as the ability to push those backups to external storage, including, as luck would have it, an S3 bucket. So I was able to configure UpdraftPlus to push backups to a new bucket in my MinIO instance, which means I now have daily backups of this site…. It only took 2 years.

    With UpdraftPlus and WP-Optimize installed, I dropped the All-in-One WP Migration plugin.

    New Content?

    Nope…. Not yet, anyway. Over the past year, I have really tried to post every four days. While I do not always hit that, having deadlines pushed me to post more often than I have in the past. While I don’t have the capacity to increase the number of posts, I am aiming to add some variety to them. I have been leaning heavily towards technical posts, but there are a lot of non-technical topics on which I can wax poetic… Or, more like, do a brain dump on…

  • Get to the point!

    Navigating to mattgerega.com used to take you to a small home page, from which you had to navigate to the recent posts or other pages.

    Some quick analysis of my page traffic indicated that, well, all of my traffic was going to the posts. As my wife would say, “Don’t bury the lead!” With that, I changed my WordPress options to take visitors directly to the recent posts.

    For the two of us who visit other pages… my apologies.

  • One Task App to Rule them All!

    New job means new systems, which prompted me to reevaluate my task tracking.

    State of the Union

    For the last, oh, decade or more, I have been using the ClearContext plugin for task management built into Outlook. And I really like it. I have become proficient with the keyboard shortcuts that let me quickly review, file, and organize my emails. The “Email to Task and Appointment” feature is great for turning emails into tasks, and lets me quickly follow the “Getting Things Done” methodology by David Allen.

    I use Gmail for personal emails, though, and I had no real drive to find a GTD pattern for Gmail. And then I changed jobs.

    Why Switch?

    I started using Microsoft To Do for personal tasks, displaying them on my office television via MagicMirror. However, as ClearContext was my muscle memory, I never switched over to using To Do for work tasks. So I had two places where tasks were listed: in Microsoft To Do for personal tasks, and in Outlook for professional ones.

    My new company uses Google Workspace. This move has driven two changes:

    1. Find a GTD workflow for Gmail to allow me to get to “zero inbox.”
    2. Find a Google Tasks module for Magic Mirror.

    Regarding #1, this will be a “trial and error” type of thing. I have started writing some filters and such, which should help with keeping a zero inbox.

    As for #2, well, it looks like it is time for some MagicMirror module work.

    MMM-GoogleTasks

    When I started looking, there were two MMM-GoogleTasks repositories in GitHub, both related to one another. I forked the most recently updated one and began poking around.

    This particular implementation allows you to show tasks from only one list in Google, and shows all tasks on that list. Microsoft To Do has the notion of a “planned” view which only shows non-completed Tasks with a due date. I contributed a change to MMM-MicrosoftToDo to allow for this view in that MagicMirror module, so I started down the path of updating MMM-GoogleTasks.
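    The filter itself is small. A sketch of the idea in TypeScript, using the status and due fields from the Google Tasks API’s task resource (the interface here is trimmed to just those fields):

```typescript
// Minimal slice of a Google Tasks task: status is "needsAction" or
// "completed", and due (an RFC 3339 timestamp) is absent when no due
// date is set.
interface GoogleTask {
  title: string;
  status: "needsAction" | "completed";
  due?: string;
}

// Mirror Microsoft To Do's "planned" view: incomplete tasks that have
// a due date, sorted soonest-first.
function plannedView(tasks: GoogleTask[]): GoogleTask[] {
  return tasks
    .filter((t) => t.status !== "completed" && t.due !== undefined)
    .sort((a, b) => a.due!.localeCompare(b.due!));
}
```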

    I could not help but start converting this project to TypeScript, which means, most likely, it will never get merged back. However, I appreciate the ability to create TypeScript modules and classes, but ultimately have things roll up into three files for the MagicMirror module system.

    So I got the planned view added to my fork of MMM-GoogleTasks. Now what? I have two Gmail accounts and my work account, and I would like to see tasks from all three of those accounts on my MagicMirror display. Unfortunately, I do not have a ton of time to refactor for multiple account support right now… so it made the issues list.

    First Impressions

    It has been about two weeks since I switched over. I am making strides in finding a pattern that works for me to keep me at zero inbox, both in my personal inbox as well as my professional one. I am sure I will run into some hiccups, but, for now, things look good.

  • A New Chapter

    After 16 years, it is time to make a change. With that change, a bit of self reflection never hurts.

    Getting here

    At this point, I have spent more time in corporate full time employment than any other aspect of my life. I started my first full-time, salaried role in November of 2002, which I suppose means that my corporate life is approaching its 21st birthday. Thankfully, drinking laws do not apply to corporate life.

    My time in software was marked by a relatively tumultuous start: I was at three different companies in my first 5 years. I grew quickly in those roles and quickly grew out of them. The summer of 2007, however, brought me to Four Rivers, which I really believe is where my career began.

    Why there? My previous positions taught me a lot about the basics of software engineering. I learned about the practical applications for the algorithms and patterns that I learned in school, as well as the realization that nothing I ever write will be good enough, even for me, and this idea of constant evolution and improvement must be baked in to software.

    Four Rivers, however, took the time to invest in me. They worked to teach me what it means to be a good manager, a good leader, and to be knowledgeable about the business side of software. The 7 years I spent with the leaders there taught me so much, and I am forever grateful for their mentorship and guidance.

    In 2014, Four Rivers was acquired by Accruent. I spent the next 9 years honing my skills across different platforms. I learned many new skills and worked for a number of different leaders. From each of those leaders, I gained new perspectives into how things can be done, and sometimes, things that don’t work.

    Time for a change

    At some point, I started to feel like I had stagnated, both in my own career growth as well as what I could contribute to the company. While my seniority said 16 years, the change at Accruent definitely made me feel as though I had been working for different companies.

    However, as Snake Plissken famously says, “the more things change, the more they stay the same.” Yes, it was Alphonse Karr who wrote it first, but it is more memorable to me when Kurt Russell grumbles it before (#spoileralert) shutting down the world. So I started to look around for opportunities for a change.

    Reaching for more

    I had several interviews over the span of a few months. The market is very weird right now, as it is very much a “hirers” market. There are a lot of folks applying for relatively few positions, so companies can be more selective in their process. In a few instances, I made it through some initial phase interviews only to be ghosted or simply sent a form email thanking me for my time.

    A short rant: your candidate’s time is just as valuable as your own. Please try to remember that as you go through your hiring process, as it speaks volumes about your attitude towards your employees.

    I had an opportunity to interview at Aspire for a software architect role. I must say, I was very impressed from the initial meeting. Every interview or screening was not an interrogation, but rather, a discussion about what I have done in the past, the things the company was seeking, and whether my skills and experience aligned with their needs and goals. It reminded me of my own style of interview, and was a refreshing change of pace from some recent experiences.

    I ended up receiving and accepting an offer for the position at Aspire, and started earlier this week. Sure, I’ve only been here a week, but the culture and atmosphere is a refreshing change. The company and the team are actively growing, and I look forward to digging in and starting to contribute.

    Am I scared? Absolutely. I have spent the last 16 years building up knowledge, experience, and reputation with the team at Four Rivers/Accruent. Now, I am basically the new guy, something I haven’t been for a very long time. However, with that clean slate comes the opportunity to learn from a new group and contribute what I know to their success.

  • Prototype 1 – Printed!

    With a little help from the folks at Pittsburgh 3D Prints, my first 3D printed prototype is complete. I am fairly certain this will lead to a new hobby with some useful output.

    But first, the problem

    As I mentioned in an earlier post, I have started to dabble in 3D printing. This desire actually came out of two separate issues that I wanted to address.

    1. I need a case for the Shelly LED Controller.
    2. I need an outdoor mount for our UE Megaboom 3 speakers.

    For my first prototype, I picked the Shelly LED Controller case, since it is more pressing and probably the easier of the two.

    Why a Case?

    I moved my LED controllers to the Shelly LED controllers in June. For containment, the LED strips outside already had waterproof boxes, so putting the Shelly in there was quick and painless.

    However, the kids have LED strips in their room, and that presents a problem. The Mi-Light box is self-contained, and includes a power jack to plug in a 5.5mm barrel plug transformer. The Shelly, well, does not. So I need a solution to contain the Shelly and provide a plug.

    Design Time!

    After replacing the GPU in my laptop, Fusion 360 is running quite nicely. I was able to grab a model of the Shelly LED controller from Grabcad.com. From there, I sketched out a rough bottom shell, leaving enough room on one side for wire connections and on the other for a panel-mount barrel jack. I added a small slot for the LED strip’s five-wire flat cable, as well as retention tabs to keep the Shelly in place. I added two build-outs to hold a few M2 threaded inserts.

    Finished Case Body

    For the top, I used the sweep function to create a lip profile that matches the inside of the case body. I added additional retention tabs that match three of the tabs on the bottom, as well as a small cube to hold the Shelly against the case bottom. A few small holes for the M2 screws, and I was ready to print a prototype.

    A little help?

    I do not yet have a 3D printer. I am absolutely interested, but want to make sure it is something that I will use long term, as it is an investment. Luckily, the Pittsburgh3DPrints.com shop is literally 2 miles from my house, and they gladly printed my prototype for a reasonable fee. I exported my components as mesh files and sent them over, and within a day or so, I had a prototype printed in PLA.

    Prototype with Shelly LED Controller
    Case with Lid

    And this is why we prototype. When I drew the lid on the case, I matched the profile exactly to the case body. The lack of any gap or margin is causing the top to bow in the middle, since the PLA is not so rigid that it will press into the case completely.

    I went back into Fusion 360 and added an offset face operation to the lip, pushing it back 0.2mm from the edge of the case body. This should give me sufficient gap to allow the lid to sit flush while still maintaining a tight fit.

    I’m currently waiting on my Amazon order so that I can install the barrel jack and threaded inserts to complete the prototype. While I don’t anticipate any issues, I want to make sure everything fits before I order two more cases in ABS.

    What’s next?

    I have always enjoyed making physical things. Since so much of my job is creating solutions in a virtual space, it is nice to be able to touch and feel something that I have created. 3D printing is an interesting juxtaposition of those two worlds: I spend time virtually modeling something that the printer turns into something physical.

    For now, I am going to attack a few outdoor mounts for our Megaboom speakers. From there, who knows.