• Git Out! Migrating to GitHub

    Git is Git. Wherever it’s hosted, the basics are the same. But the features and community around these tools have driven me to make a change.

    Starting Out

    My first interactions with Git happened around 2010, when we decided to move away from Visual SourceSafe and Subversion and onto Git. At the time, some of the cloud services were either in their infancy or priced outside of what our small business could absorb. So we stood up a small Git server to act as our centralized repository.

    The beauty of Git is that, well, everyone has a copy of the repository locally, so it’s a little easier to manage the backup and disaster recovery aspects of a centralized Git server. So the central server is pretty much a glorified file share.
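    That whole setup can be sketched in a few commands. This is just an illustration: /tmp stands in for a path on the server, and the repository names are made up.

```shell
# A bare repository has no working tree; it only stores Git data,
# which is what makes the central server a "glorified file share."
# /tmp stands in for a path on the server.
rm -rf /tmp/central.git /tmp/clone
git init --bare /tmp/central.git

# Developers clone from it and push back to it like any remote.
git clone /tmp/central.git /tmp/clone
cd /tmp/clone
echo "hello" > README.md
git add README.md
git -c user.name=demo -c user.email=demo@example.com commit -m "Initial commit"
git push origin HEAD
```

    Since every clone carries the full history, the bare repository can be rebuilt from any developer’s copy if the server is ever lost.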

    To the Cloud!

    Our acquisition opened up access to some new tools, including Bitbucket Cloud. We quickly moved our repositories to Bitbucket Cloud so that we could decommission our self-hosted server.

    Personally, I started storing my projects in Bitbucket Cloud. Sure, I had a GitHub account. But I wasn’t ready for everything to be public, and Bitbucket Cloud offered unlimited private repos. At the time, I believe GitHub was charging for private repositories.

    I also try to keep my home setup as close to work as possible in most cases. Why? Well, if I am working on a proof of concept that involves specific tools and their interaction with one another, it’s nice to have a sandbox that I can control. My home lab ecosystem has evolved based on the ecosystem at my job:

    • Self-hosted Git / TeamCity
    • Bitbucket Cloud / TeamCity
    • Bitbucket Cloud / Azure DevOps
    • Bitbucket Cloud / Azure DevOps / ArgoCD

    To the Hub!

    Even before I changed jobs, a move to GitHub was in the cards, both personally and professionally.

    Personally, I cannot think of a more popular community platform than GitHub for sharing and finding open/public code. My GitHub profile is, in a lot of ways, a portfolio of my work and contributions. As I have started to invest more time into open source projects, my portfolio has grown. Even some of my “throw away” projects are worth a little, if only as a reference for what to do and what not to do.

    Professionally, GitHub has made great strides in its Enterprise offering. Microsoft’s acquisition helped give GitHub access to some of the CI/CD pipeline capabilities of Azure DevOps, coupled with GitHub’s ease of use. One of the projects on the horizon at my old company was to identify whether GitHub and GitHub Actions could be the standard for build and deploy moving forward.

    With my move, we have a mix of ecosystems: GitHub + Azure DevOps Pipelines. I would like to think that, long term, I could get to GitHub + GitHub Actions (at least at home), but the interoperability of Azure DevOps Pipelines with Azure itself makes it hard to migrate completely. So, with a new professional ecosystem in front of me, I decided it was time to drop Bitbucket Cloud and move to GitHub for everything.

    Organize and Move

    Moving the repos is, well, simple. Using GitHub’s Import functionality, I pointed at my old repositories, entered my Bitbucket Cloud username and personal access token, and GitHub imported them.
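    For anyone who prefers the command line, the same migration can be done with a mirror clone, which carries over every branch and tag. The sketch below uses local paths so it is self-contained; in practice the two remotes would be the Bitbucket Cloud and GitHub URLs.

```shell
# Migrating by hand with a mirror clone, which carries over all
# branches and tags. Local paths stand in for the Bitbucket and
# GitHub remote URLs so the sketch is self-contained.
rm -rf /tmp/old-remote.git /tmp/new-remote.git /tmp/seed /tmp/mirror
git init --bare /tmp/old-remote.git   # stands in for the Bitbucket repo
git init --bare /tmp/new-remote.git   # stands in for the GitHub repo

# Seed the "old" remote with a commit so there is history to move.
git clone /tmp/old-remote.git /tmp/seed
cd /tmp/seed
echo "content" > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -m "History to migrate"
git push origin HEAD
cd /tmp

# Mirror the old remote and push everything to the new one.
git clone --mirror /tmp/old-remote.git /tmp/mirror
git -C /tmp/mirror push --mirror /tmp/new-remote.git
```

    The web importer does essentially the same thing, with the added convenience of handling authentication for you.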

    This simplicity meant I had time to think about organization. At this point, I am using GitHub for two pretty specific types of projects:

    • Storage for repositories, either public or private, that I use for my own portfolio or personal projects.
    • Storage for repositories, all public, that I have published as true Open Source projects.

    I wanted to separate the projects into different organizations, since the hope is the true Open Source projects could see contributions from others in the future. So before I started moving everything, I created a new GitHub organization. As I moved repositories from Bitbucket Cloud, I put them in either my personal GitHub space or this new organization space, based on their classification above. I also created a new SonarCloud organization to link to the new GitHub organization.

    All Moved In!

    It really only took about an hour to move all of my repositories and re-configure any automation I had to point to GitHub. I set up new scans in the new SonarCloud organization, re-pointed the actions correctly, and everything seems to be working just fine.

    With all that done, I deleted my Bitbucket Cloud workspaces. Sure, I’m still using Jira Cloud and Confluence Cloud, but I am at least down one cloud service. Additionally, since all of the projects that I am scanning with Sonar are public, I moved them to SonarCloud and deleted my personal instance of SonarQube. One less application running in the home lab.

  • I Did a Thing

    I have been participating in the open source software community for a while. My expedition into 3D modeling and printing has brought me to a new type of open community.

    Finding Inspiration

    The Google Feed on my phone popped up an article on all3dp.com called “The 30 Most Useful Things to Print in PLA.” I was intrigued, so I clicked on it and read through.

    There were a number of useful items, but many of them are duplicates of things I already owned. Phone stands and Raspberry Pi cases are great first prints. However, I usually buy mine to get built-in wireless charging on phone stands and various features on Pi cases.

    The third item in that article, however, almost spoke to me through the screen.

    A Stack of Cards

    I recently went through my office desk drawers in an attempt to organize. What I quickly realized was I had a collection of USB, SD, and MicroSD cards. The MicroSDs I’ve amassed as a side effect of my Raspberry Pi projects. The USB sticks are just “good things to have around,” especially in an era where none of my laptops even have a CD drive anymore. I can load up an OS on a bootable USB and reload the laptop.

    The problem I had was, well, they all sat in a small container in my drawer. There was little organization, just a bucket of parts. As I browsed the all3dp.com article, I came across the USB SD and MicroSD Holder. The design had such beauty in its simplicity. No frills, just slots for USB, SD, and MicroSD cards in a way that makes them organized and easily accessible.

    Making It My Own

    Sure, I could have printed it as is and been done with my journey. But, well, what fun is that? I took a look at the design and added a few requirements of my own.

    1. I wanted something that fit relatively neatly in my desk drawer, and used the space that I had.
    2. I needed some additional slots for all storage types.
    3. Most importantly, I wanted some larger spacing in between items to allow the bear claws that I call hands easy access to the smaller cards.

    With these new requirements, I fired up Fusion 360 and got to work. I used some of the measurements from Lalo_Solo’s design for the card slots, but added some additional spacing in between for easy access. I extended the block to fit the width of my desk drawer, with enough padding to grab the block out of the drawer without an issue. That extension was enough to get the additional storage I needed.

    And with that, I sent the design off to Pittsburgh3DPrints to get printed. A few days later, I picked up my print.

    It turned out great! I brought it home and loaded it up with a few of my USB sticks and SD cards, as seen above. I have a few more, but not enough to fill up the entire holder, which means I have some room for expansion.

    I chuckled a little when I picked up the print: at the shop, I noticed another print of my design on the desk with a few USB/SD cards in it. It felt awesome to see that the design is useful for more than just me!

    Dropping the Remix

    Like DJ Khaled, I wanted to drop the remix on the world. I went about creating a Thingiverse.com account and posted the new design file, along with the above picture. With Thingiverse, you can post things as “remixes” of existing designs. This is a great way to attribute inspiration to the original designers, and keep in line with the Creative Commons license.

    With a new Thingiverse account, I will work on posting the designs for the other projects I printed. At this point, nothing I did was groundbreaking, but it’s nice to share… you never know when someone might find your design useful.

  • Stacks on Stacks!

    I have Redis installed at home as a simple caching tool. Redis Stack adds on to Redis OSS with some new features that I am eager to start learning. But, well, I have to install it first.

    Charting a Course

    I have been using the Bitnami Redis chart to install Redis on my home K8s cluster. The chart itself provides the necessary configuration flexibility for replicas and security. However, Bitnami does not maintain a similar chart for redis-stack or redis-stack-server.

    There are some published Helm charts from Redis; however, they lack the built-in flexibility and configurability that the Bitnami charts provide. The Bitnami chart is so flexible that I wondered if it was possible to use it with the redis-stack-server image. A quick search showed I was not the only person with this idea.

    New Image

    Gerk Elznik posted last year about deploying Redis Stack using Bitnami’s Redis chart. Based on this post, I attempted to customize the Bitnami chart to use the redis-stack-server image. Gerk’s post indicated that a new script was needed to successfully start the image. That seemed like an awful lot of work, and, well, I really didn’t want to do that.

    In the comments of Gerk’s post, Kamal Raj posted a link to his version of the Bitnami Redis Helm chart, modified for Redis Stack. This seemed closer to what I wanted: a few tweaks and off to the races.

    In reviewing Kamal’s changes, I noticed that everything he changed could be overridden in the values.yaml file. So I made a few changes to my values file:

    1. Added repository and tag in the redis.image section, pointing the chart to the redis-stack-server image.
    2. Updated the command for both redis.master and redis.replica to reflect Kamal’s changes.
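    The image override portion of my values file looked roughly like this. My values nest everything under a redis: parent key because the Bitnami chart is pulled in as a dependency; the tag is illustrative, and the command overrides I later removed are omitted:

```yaml
redis:
  image:
    repository: redis/redis-stack-server  # swap in the Redis Stack image
    tag: "7.2.0-v10"                      # illustrative; pin the version you need
```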

    I ran a quick template, and everything looked to generate correctly, so I committed the changes and let ArgoCD take over.

    Nope….

    ArgoCD synchronized the stateful set as expected, but the pod didn’t start. The error in the K8s events was about “command not found.” So I started digging into the “official” Helm chart for the redis-stack-server image.

    That chart is very simple, which means it was pretty easy to see that there was no special command for startup. So I started to wonder if I really needed to override the command, or could simply use the redis-stack-server image in place of the default one.

    So I commented out the custom overrides to the command settings for both master and replica, and committed those changes. Lo and behold, ArgoCD synced and the pod started up great!

    What Matters Is, Does It Work?

    Excuse me for stealing from Celebrity Jeopardy, but “Gussy it up however you want, Trebek, what matters is, does it work?” For that, I needed a Redis client.

    Up to this point, most of my interactions with Redis have simply been through the redis-cli that’s installed on the image. I use kubectl to get into the pod and run redis-cli there to see what keys are in the instance.
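    That workflow is essentially a one-liner. The pod name and namespace here are from my setup and will differ in yours:

```shell
# Run redis-cli inside the running pod (pod/namespace are examples).
# If the chart has auth enabled, add: -a "$REDIS_PASSWORD"
kubectl exec -it redis-master-0 -n redis -- redis-cli KEYS '*'
```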

    Sure, that works fine, but as I start to dig into Redis a bit more, I need a client that lets me visualize the database a little better. As I was researching Redis Stack, I came across RedisInsight and thought it was worth a shot.

    After installing RedisInsight, I set up port forwarding on my local machine into the Kubernetes service. This allows me to connect directly to the Redis instance without creating a long-term service on a NodePort or some other forwarding mechanism. Since I only need access to the Redis server within the cluster, this helps me secure it.
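    The forwarding itself is one command; the service name and namespace below are from my setup and will differ in yours:

```shell
# Forward local port 6379 to the in-cluster Redis service so that
# RedisInsight can connect to localhost:6379. Names are examples.
kubectl port-forward svc/redis-master -n redis 6379:6379
```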

    I got connected, and the instance shows. But, no modules….

    More Hacking Required

    As it turns out, the Bitnami Redis chart changes the startup command to a script within the chart. This enables some of that flexibility, but comes at the cost of not using the entrypoint scripts that are in the image: specifically, the entrypoint script for redis-stack-server, which uses the command line to load the modules.

    Now what? Well, there’s more than one way to skin a cat (to use an arcane and cruel sounding metaphor). Reading through the Redis documentation, you can also load modules through the configuration. Since the Bitnami Redis chart allows you to add to the configuration using the values.yaml file, that’s where I ended up. I added the following to my values.yaml file:

    master:
        configuration: | 
          loadmodule /opt/redis-stack/lib/redisearch.so MAXSEARCHRESULTS 10000 MAXAGGREGATERESULTS 10000
          loadmodule /opt/redis-stack/lib/redistimeseries.so
          loadmodule /opt/redis-stack/lib/rejson.so
          loadmodule /opt/redis-stack/lib/redisbloom.so
    

    With those changes, I now see the appropriate modules running.

    Lots Left To Do

    As I mentioned, this seems pretty “hacky” to me. Right now, I have it running, but only in standalone mode. I haven’t had the need to run a full Redis cluster, but I’m SURE that some additional configuration will be required to apply this to running a Redis Stack cluster. Additionally, I could not get the Redis Gears module loaded, but I did get Search, JSON, Time Series, and Bloom installed.

    For now, that’s all I need. Perhaps if I find I need Gears, or I want to run a Redis cluster, I’ll have to revisit this. But, for now, it works. The full configuration can be found in my non-production infrastructure repository. I’m sure I’ll move it to production, but everything that happens here happens in non-production first, so keep tabs on that if you’d like to know more.

  • WSL for Daily Use

    Windows Subsystem for Linux (WSL) lets me do what I used to do in college: have Windows and Linux on the same machine. In 1999, that meant dual booting. Hypervisors everywhere and increased computing power mean that today, well, I just run a VM, without even knowing I’m running one.

    Docker Started It All

    When WSL first came out, I read up on the topic, but never really stepped into it in earnest. At the time, I had no real use for a Linux environment on my desktop. As my home lab grew and I dove into the world of Kubernetes, I started to use Linux systems more.

    With that, my familiarity with, and love of, the command line started to come back. Sure, I use PowerShell a lot, but there’s nothing more nerdy than running headless Linux servers. What really threw me back into WSL was some of the ELT work I did at my previous company.

    Diving In

    It was much easier to get the various Python tools running in Linux, including things like the Anaconda virtual environment manager. At first, I was using Windows to clone and edit the files using VS Code. Through WSL, I accessed the files using the /mnt/ paths in Ubuntu to get to my drives.

    In some reading, I came across the guide for using VS Code with WSL. It describes how to use VS Code to connect to WSL as a remote computer and edit the files “remotely.” Which sounds weird because it’s just a VM, but it’s still technically remote.

    With VS Code set up for remote access, I stopped using the /mnt/ folders and started cloning repositories within the WSL Ubuntu instance itself.

    Making It Pretty

    I am a huge fan of a pretty command line. I have been using Oh My Posh as an enhancement to PowerShell and PowerShell Core for some time. However, Oh My Posh is meant to be used with any shell, so I got to work on installing it in WSL.

    As it turns out, in this case, I did use the /mnt mount path in order to share my Oh My Posh settings file between my Windows profile and the WSL Ubuntu box. In this way, I have the same Oh My Posh settings, regardless of whether I’m using Windows PowerShell/PowerShell Core or WSL Ubuntu.
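    In practice, that is one line in my WSL shell profile. This is a sketch: the theme path is a placeholder for wherever your Windows-side settings file lives.

```shell
# In ~/.bashrc on the WSL side, point Oh My Posh at the theme file
# stored in the Windows profile (the path is a placeholder).
eval "$(oh-my-posh init bash --config /mnt/c/Users/me/omp-theme.json)"
```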

    Bringing It All Together

    How can I get to WSL quickly? Well, through Windows Terminal! Windows Terminal supports a number of different shells, including the standard command prompt, PowerShell, and PowerShell Core. It also lets you start a WSL session via a Terminal profile.

    This integration means my Windows Terminal is now my “go to” window for most tasks, whether in WSL or on my local box.

  • Don’t Mock Me!

    I spent almost two hours on a unit test issue yesterday, walking away with the issue unresolved and myself frustrated. I came back to it this morning and fixed it in 2 minutes. Remember, you don’t always need a mock library to create fakes.

    The Task at Hand

    In the process of removing some obsolete warnings from our builds, I came across a few areas where the change was less than trivial. Before making the changes, I decided it would be a good idea to write some unit tests to ensure that my changes did not affect functionality.

    The class to be tested, however, took IConfiguration in the constructor. Our current project does not make use of the Options pattern in .NET Core, meaning anything that needs configuration values has to carry around a reference to IConfiguration and then extract the values manually. Yes, I will want to change that, but not right now.

    So, in order to write these unit tests, I had to create a mock of IConfiguration that returned the values this class needed. Our project currently uses Telerik JustMock, so I figured it would be a fairly easy task to mock. However, I ran into a number of problems that had me going down the path of creating multiple mock classes for different interfaces, including IConfigurationSection. I immediately thought “There has to be a better way.”

    The Better Way

    Some quick Google research led me to this gem of a post on Stack Overflow. In all my time with .NET configuration, I never knew about or used the AddInMemoryCollection extension. And that led me to the simplest solution: create a “real boy” instance of IConfiguration with the properties my class needs, and pass that to the class under test.
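    The resulting test setup looks something like this. It is a minimal sketch: the configuration keys and the WidgetService class are stand-ins for the real class under test.

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

// Build a *real* IConfiguration from an in-memory dictionary
// instead of mocking the interface. Keys below are stand-ins.
var config = new ConfigurationBuilder()
    .AddInMemoryCollection(new Dictionary<string, string?>
    {
        ["Widgets:ConnectionString"] = "Server=test;Database=test",
        ["Widgets:TimeoutSeconds"] = "30"
    })
    .Build();

// Pass the real configuration straight to the class under test.
var sut = new WidgetService(config);
```

    No mock setup, no extra interfaces for IConfigurationSection, and the class under test cannot tell the difference.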

    I suppose this is “dumb mocking” in the sense that it doesn’t use libraries written and dedicated to mocking objects. But it gets the job done in the simplest way possible.

  • New Cert, now what?

    I completed my PADI Advanced Open Water certification over the weekend. The question is, what’s next?

    Advanced Open Water

    The Advanced Open Water certification is a continuation of the Open Water training with a focus on opening up new dive sites, primarily by expanding the available depth. My certification focused on five areas:

    1. Deep Dive (below 18m/60ft)
    2. Navigation
    3. Buoyancy
    4. Boat
    5. Drift

    The Boat and Drift specialties were a fun introspection into pretty much the only dive types I’ve ever done: of my 17 official dives, 16 have been boat AND drift dives. Truthfully, I’d be a little anxious if I had to find an anchored dive boat by myself.

    The Deep specialty opens up a number of new dive sites below 60 feet, and taught me a little more about the pressure at those depths. On paper, I see the math regarding depth and atmosphere, but it’s incredible to see just how much difference there is between 20m and 30m in terms of pressure. I also learned a bow knot, and how to tie one at 90 feet.

    Navigation was interesting, although pretty easy considering the environment. Swimming a square with a compass is much different when you can see the full square in the 30+ feet of visibility of the Caribbean versus the 6 feet of visibility in a quarry. Considering I’ve only ever done drift dives, precise navigation has been somewhat less important. I will have to work on my orienteering on dry land so that I’m more comfortable with a compass.

    Buoyancy was, by far, the most useful of the specialty dives. I’ve been pretty consistently using 10 kilograms (22 lbs.) since I got certified. However, I forgot that, when I did my certification dives, I was wearing a 3mm shorty wetsuit (short sleeves, short legs). Since then, I’ve shed the wetsuit since I’ve been diving in warmer waters. However, I didn’t shed the weight. Through some trial and error, my instructor helped me get down to diving with 4 kilograms (8.8 lbs.). My last 4 dives were at that weight and there was a tremendous difference. Far less fiddling with my BCD to adjust buoyancy, and a lot more opportunity to use breathing and thrust to control depth.

    So many specialties…

    PADI offers a TON of specialty courses. Wreck Diving, Night Diving, Peak Performance Buoyancy, Search and Rescue, and so many more. I’m interested in a number of them, so the question really is, what’s my plan?

    Right now, well, I think I am going to review their specialty courses and make a list. As for “big” certifications, Rescue Diver seems like the next logical step, but it requires a number of specialties first. However, there is something to be said for just diving. Every dive has increased my confidence in the basics, making every dive more enjoyable. So I don’t anticipate every trip being a “certification trip.” Sometimes, it’s just nice to dive!

  • Using Architectural Decision Records

    Recently, I was exposed to Architectural Decision Records (ADRs) as a way to document software architecture decisions quickly and effectively. The more I’ve learned, the more I like.

    Building Evolutionary Architectures

    Architecture, software or otherwise, is typically a tedious and time-consuming process. We must design to meet existing requirements, but have to anticipate potential future requirements without creating an overly complex (i.e. expensive) system. This is typically accomplished through a variety of patterns which aim to decouple components and make them easily replaceable.

    Replace it?!? Everything ages, even software. If I have learned one thing, it is that the code you write today “should” not exist in its same form in the future. All code needs to change and evolve as the platforms and frameworks we use change and evolve. Building Evolutionary Architectures is a great read for any software engineer, but I would suggest it to be required reading for any software architect.

    How architecture is documented and communicated has evolved in the last 30 years. The IEEE published an excellent white paper outlining how early architecture practices have evolved into these ADRs.

    But what IS it?

    Architectural Decision Records (ADRs) are, quite simply, records of decisions made that affect the architecture of the system. ADRs are simple text documents with a concise format that anyone (architect, engineer, or otherwise) can consume quickly. ADRs are stored next to the code (in the same repository), so they are subject to the same peer review process. Additionally, ADRs add the ability to track decisions and changes over time.

    Let’s consider a simple example, taken straight from the adr-manager tool used to create ADRs in a GitHub repository. The context/problem is pretty simple:

    We want to record architectural decisions made in this project. Which format and structure should these records follow?

    The document then outlines some potential options for tracking architectural decisions. In the end, the document states that MADR 2.1.2 will be used, and outlines the rationale behind the decision.
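    Scaffolding that first record by hand is trivial. The sketch below uses the common docs/adr folder convention and an abridged MADR-style layout; the wording is illustrative, not the exact adr-manager output.

```shell
# Scaffold a first ADR under the common docs/adr convention. The
# layout is an abridged MADR-style template; wording is illustrative.
mkdir -p /tmp/repo/docs/adr
cat > /tmp/repo/docs/adr/0000-use-markdown-adrs.md <<'EOF'
# Use Markdown Architectural Decision Records

## Context and Problem Statement

We want to record architectural decisions made in this project.
Which format and structure should these records follow?

## Considered Options

* MADR 2.1.2
* Michael Nygard's template
* Formless (no conventions)

## Decision Outcome

Chosen option: "MADR 2.1.2", because it is lean and fits a
pull-request review process.
EOF
```

    Because the record is just a Markdown file in the repository, it rides along with every clone and goes through the same pull-request review as the code.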

    It may seem trivial, but putting this document in the repository, accessible to all, gives great visibility to the decision. Changes, if any, are subject to peer review.

    Now, in this case, say 6 months down the road the team decides that they hate MADR 2.1.2 and want to use Y-Statements instead. That’s easy: create a new ADR that supersedes the old one. In the new ADR, the same content should exist: what’s the problem, what are our options, and define the final decision and rationale. Link the two so that it’s easy to see related ADRs, and you are ready to go.

    Tools of the Trade

    There is an ADR GitHub organization that is focused on standardizing some of the nomenclature around ADRs. The page includes links to several articles and blog posts dedicated to describing ADRs and how to implement and use them within your organization. Additionally, the organization has started to collect and improve upon some of the tooling for supporting an ADR process. One that I found beneficial is ADR-Manager.

    ADR-Manager is a simple website that lets you interact with your GitHub repositories (using your own GitHub credentials) to create and edit ADRs. Through your browser, you connect to your repositories and view/edit ADR documents. It generates MADR-styled files within your repository which can be committed to branches with appropriate comments.

    Make it so…

    As I work to get my feet under me in my new role, the idea of starting to use ADRs has gained some traction. As we continue to scale, having these decision records readily available for new teams will be important to maintain consistency across the platform.

    As I continue to work through my home projects, I will use ADRs to document decisions I make in those projects. While no one may read them, it’s a good habit to build.

  • If there is a problem…

    My son’s broken Stanley handle led me down the path of designing and printing a new handle. Overkill? Probably. But fun nonetheless.

    What’d You Do?


    My son came home from school last week with a bit of a sob story. Someone knocked his Stanley Quencher 40 ounce Tumbler off his desk, and it landed in such a way that the handle broke off. It was not that the screws came loose: the handle mounts just busted off.

    It was a relatively new purchase, so I told him to contact Stanley about a warranty. To Stanley’s credit, they shipped him a new one, so he’s got a new one and can move on with his life. But now we have a perfectly functional Stanley with no handle. This felt like a perfect opportunity to design a new one!

    It’s Prototyping Time!

    I checked Grabcad.com for an existing model of the Stanley 40 ounce Quencher, but alas, there was not one. I also checked Thangs.com and Printables.com, but nothing popped up. So, I took a few measurements from the Stanley I had in hand and went about mocking up a quick model of the Stanley. It is not at all pretty, but it gives me the proper dimensions for my handle model. I left the handle off because, well, mine does not have the handle, hence the reason for my work.

    Bare Stanley Quencher – No Handle

    The design I had in mind is similar to some existing Yeti handles that I have seen. While the official Yeti ones have a single ring, the Stanley is not as flared as it goes up, so I wanted a double ring design to take advantage of the lower flare on the Stanley.

    With a double ring, I needed to break this out into a multi-part component in order to print it. In order to add some strength to the finished product, I added dovetails at the joints.

    I ended up with this:

    Stanley Handle Rings
    Stanley Handle

    Off to the Printer!

    I am still mulling over purchasing a 3D printer, so for now, I rely on Pittsburgh3DPrints to have my prints completed. I sent my design over and had the print turned around in a few days. Normally I can just pick it up, but they are out of the shop this week, so I had to wait patiently for the USPS to deliver it.

    How did it turn out?

    Pretty much exactly as I had planned… Which isn’t always the case, but I do love it when a plan comes together.

    The dovetail joints are actually pretty tight on their own, but I’m going to glue them up for added strength and to remove the worry of the handle coming apart and losing pieces.

    Getting closer…

    As I design and print more, the list of “pros” continues to grow in the “should I buy a 3D printer” debate I am having with myself. My modeling is, right now, serviceable, but I am getting better with each tutorial and print. Perhaps one day I’ll convince myself to get my own printer.

  • Streamlining my WordPress Install

    My professional change served as a catalyst for some personal change. Nothing drastic, just messing with this site a little.

    New Look!

    I have been sitting on the Twenty Twenty-One theme for a few years now. When it comes to themes, I just want something that looks nice and is low maintenance, and it served its purpose well. I skipped Twenty Twenty-Two because, well, I did not really want to dig into changing it to my personal preference.

    The latest built-in theme, Twenty Twenty-Three, is nice and clean, and pretty close to what I was using in Twenty Twenty-One. I went ahead and activated that one, and chose the darker style to match my soul…. I am kidding. I appreciate a good dark theme, so you know my site will reflect that.

    New Plugins!

    From time to time, I will make sure that my plugins are updated and that the WordPress Site Health page does not have any unexpected warnings. This time around, I noticed that I had no caching detected.

    But, wait…. I have Redis Object Cache installed and running. And, literally, as soon as I read that plugin name, I realized that “object cache” is not the same as “browser cache.” So I started looking for a browser cache plugin.

    I landed on WP-Optimize from UpdraftPlus. The free version is sufficient for what I need, and the setup was very easy. I got the plugin installed, and just before I ran the optimization, I noticed the warning to back up the DB using UpdraftPlus. And that’s when I realized my backup process was, well, non-existent.

    In the past, I have used the All-in-One WP Migration plugin to back up and restore. However, the free version is limited to a size that I have long surpassed, and I saw no way to automate backups. Additionally, the “backups” are stored in the same storage location, so unless I manually grabbed them, they did not go offsite.

    UpdraftPlus provides scheduled backups as well as the ability to push those backups to external storage, including, as luck would have it, an S3 bucket. So I was able to configure UpdraftPlus to push backups to a new bucket in my MinIO instance, which means I now have daily backups of this site…. It only took 2 years.

    With UpdraftPlus and WP-Optimize installed, I dropped the All-in-One WP Migration plugin.

    New Content?

    Nope…. Not yet, anyway. Over the past year, I have really tried to post every four days. While I do not always hit that, having deadlines pushed me to post more often than I have in the past. While I don’t have the capacity to increase the number of posts, I am aiming to add some variety to my posts. I have been leaning heavily towards technical posts, but there are a lot of non-technical topics on which I can wax poetic… Or, more like, do a brain dump on…

  • Get to the point!

    Navigating to mattgerega.com used to take you to a small home page, from which you had to navigate to the recent posts or other pages.

    Some quick analysis of my page traffic indicated that, well, all of my traffic was going to the posts. As my wife would say, “Don’t bury the lead!” With that, I changed my WordPress options to take visitors directly to the recent posts.
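    For the curious, the same change can be made from the command line with WP-CLI, assuming it is installed alongside your WordPress instance; it is the setting that also lives under Settings > Reading in the admin UI.

```shell
# Serve the latest posts as the front page instead of a static page.
wp option update show_on_front posts
```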

    For the two of us who visit other pages… my apologies.