• Immutable Build Objects

    Before I can make a case for Immutable Deployable Artifacts (I’m going to use IDA for short), it is probably a good idea to define what I mean by that term.

    Regardless of your technology stack, most systems follow a similar process for building deployment artifacts: get the code, fetch dependencies, build the code, and package the resulting deployable artifact, whether it is .NET DLLs, .jar files, or Docker images. These outputs are referred to as “deployable artifacts.”

    To say that these are immutable, by definition, means that they cannot be changed. However, in this context, it means a little more than that: having immutable deployable artifacts means that these artifacts are built once, but deployed many times to progressive environments. In this manner, a single build artifact is deployed to QA, internally verified, promoted to staging, externally verified, and then promoted to production.
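    The “built once, deployed many times” idea can be sketched as a promotion script. This is a minimal illustration only, assuming Docker images as the artifact; the registry, image name, and tag scheme are placeholders, and the docker commands are echoed rather than executed to show that promotion is a re-tag of the SAME artifact, never a rebuild.

```shell
# Sketch: build-once/promote-many, assuming Docker artifacts.
# IMAGE and VERSION are placeholders; commands are echoed (dry run)
# to show promotion re-tags the SAME artifact instead of rebuilding.
IMAGE="registry.example.com/myapi"
VERSION="1.4.2"

promote() {
  env="$1"
  echo "docker pull ${IMAGE}:${VERSION}"
  echo "docker tag ${IMAGE}:${VERSION} ${IMAGE}:${VERSION}-${env}"
  echo "docker push ${IMAGE}:${VERSION}-${env}"
}

promote qa
promote staging
promote production
```

    In a real pipeline the echoes would be the actual commands; the point is that the bits tested in QA are byte-for-byte the bits that reach production.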

    Separation of Concerns – Build vs. Deploy

    First and foremost, we need to remember that build and deploy are uniquely separate concerns: Build is responsible for creating deployable artifacts, while deployment is responsible for the distribution and configuration of the artifacts in various environments.

    Tools like Jenkins muddy those waters, because they allow us to integrate build and deploy into single actions. This is not a bad thing, but we need to make sure we maintain logical separation between the actions, even if their physical separation is muddied.

    Gitflow

    I won’t go into the merits and pain points of GitFlow here. Suffice it to say that the majority of our teams utilized GitFlow as a branching strategy.

    Gitflow and IDAs

    The typical CI/CD implementations of GitFlow, particularly at my current company, map branches to environments (develop to QA, release to staging, master to production). To effect an update to an environment, a pull request is opened from one branch into another; once it is merged, a build is generated from the resulting commit and deployed. This effectively means that the build you tested against is not the same as the build you are pushing to production. This gives development managers (current and former, such as myself) cold sweats. Why? In order for builds from different branches to match, we rely very heavily on the following factors not changing:

    • No changes were introduced to the “source branch” (master, in the case of production builds) outside of the merge request.
    • No downstream dependencies changed. Some builds will pull the latest package from a particular repository (NuGet, npm, etc.), so we are relying on no new versions of a third-party dependency having been published OR on the new version being compatible.
    • Nothing has changed in the build process itself (including any changes to the build agent configuration).

    Even at my best as a development manager, there is no way I could guarantee all three of those things with every build. These variables open up our products to unnecessary risks in deployment, as we are effectively deploying an untested build artifact directly into production.

    Potential solutions

    There is no universal best practice, but we can draw some conclusions from what has worked for others.

    Separate Branch Strategy from Deployment strategy

    The first step, regardless of your branching strategy, is to separate the hard link between branch names and deployment environments. While it is certainly helpful to tag build artifacts with the branch from which they came, there is no hard and fast rule that “builds from develop always and only go to QA, never to stage or release.” If the build artifact(s) pass the necessary tests, they should be available to higher environments through promotion.

    I’m not suggesting we shouldn’t limit which branches produce release candidate builds, but the notion that “production code MUST be built directly from master” is, quite frankly, dangerous.

    If you are using GitFlow, one example of this would be to generate your build artifacts from ONLY your release and hotfix branches. There are some consequences that you need to be aware of with this, and they are outlined far better than I can by Ben Teese.

    Simplify your branching strategy

    Alternatively, if your project is smaller or your development cycles are short enough that you support fix-forward only, you may consider moving to the simpler feature branch workflow. In this branching mechanism, build artifacts come from your master branch, and feature branches are used only for feature development.

    Change your deployment and promotion process

    Once you have build artifacts being generated, it’s up to your deployment process to deploy and configure those artifacts. This can be as automated or as manual as your team chooses. In an ideal scenario, if all of your testing is 100% automated, then your deployment process could be as simple as deploying the generated artifacts, running all of your tests, and then promoting those artifacts to production if all your tests pass. Worst case, if your testing is 100% manual, you will need to define a process for testing and promotion of build artifacts. Octopus Deploy can be helpful in this area by creating deployment pipelines and allowing for promotion through user interaction.
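    The fully-automated happy path described above amounts to a simple gate: deploy, test, promote only when the tests pass. A hedged sketch, where deploy, run_tests, and promote are placeholder functions (not real Octopus Deploy commands):

```shell
# Placeholder pipeline gate: deploy to QA, run tests, promote on success.
# None of these functions are real tooling commands; they just echo.
deploy()    { echo "deploying $1 to $2"; }
run_tests() { echo "testing $1 in $2"; return 0; }  # pretend tests pass
promote()   { echo "promoting $1 to $2"; }

ARTIFACT="myapi:1.4.2"
deploy "$ARTIFACT" qa
if run_tests "$ARTIFACT" qa; then
  promote "$ARTIFACT" production
else
  echo "tests failed; $ARTIFACT stays in qa" >&2
fi
```

    With manual testing, the `if` simply becomes a human clicking a promote button instead of a script evaluating an exit code.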

    What about Continuous Integration?

    There is some, well, confusion around what continuous integration is. Definitions may differ, but the basics of CI are that we should automate the build and test process so that we can verify our code state, ideally after every change. What this means to you and your team, though, may differ. Does your team have the desire, capacity, and ability to run build and test on every feature branch? Or is your team more concerned with build and test on the develop branch as an indicator of code health?

    Generally speaking, CI is something that happens daily, as an automated process kicked off by developer commits. It happens BEFORE Continuous Deployment/Continuous Delivery, and is usually a way to short circuit the CD process to prevent us from deploying bad code.

    The changes above should have no bearing on your CI process: whatever build/test processes you run now should continue.

    A note on Infrastructure as Code

    It’s worth mentioning that IaC projects, such as Terraform deployment projects, should have a direct correlation between branch and environment. Since the code is meant to define an environment and produces no build artifacts, that link is logical and necessary.

  • Ubuntu and Docker…. Oh Snap!

    A few months ago, I made the decision to start building my .NET Core side projects as Linux-based containers instead of Windows-based containers. These projects are mostly CRUD APIs, meaning none of them require Windows-based containers. And, quite frankly, Linux is cheaper…

    Now, I had previously built out a few Ubuntu servers with the express purpose of hosting my Linux containers, and changing the Dockerfile was easy enough.  But I ran into a HUGE roadblock in trying to get my Octopus deployments to work.

    I was able to install Octopus Tentacles just fine, but I could NOT get the tentacle to authenticate to my private docker repository.  There were a number of warnings and errors around the docker-credential-helper and pass, and, in general, I was supremely frustrated.  I got to a point where I uninstalled everything but docker on the Ubuntu box, and that still didn’t work.  So, since it was a development box, I figured there would be no harm in uninstalling Docker…  And that is where things got interesting.

    When I provision my Ubuntu VMs, I typically let the Ubuntu setup install Docker. It does this through snaps, which, as I have seen, have some weird consequences. One of those consequences seemed to be a weird interaction between docker login and non-interactive users. The long and short of it was that, after several hours of trying to figure out which combination of docker-credential-helper packages and configurations was required, removing EVERYTHING and installing Docker via apt (and docker-compose via a straight download from GitHub) made everything work quite magically.
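    For reference, the swap went roughly like this. This is an approximate reconstruction, not my exact history: the docker.io package name and the docker-compose version number are assumptions, and the DRY_RUN guard (the default) only prints the commands instead of executing them.

```shell
# Approximate reconstruction of swapping snap Docker for apt Docker.
# DRY_RUN=1 (default) prints commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run sudo snap remove docker            # remove the snap-installed engine
run sudo apt-get update
run sudo apt-get install -y docker.io  # reinstall the engine via apt
# docker-compose as a standalone binary from its GitHub releases page
# (the version number here is an assumption):
run sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
run sudo chmod +x /usr/local/bin/docker-compose
```

    After the apt install, docker login on the non-interactive Octopus Tentacle account worked without any docker-credential-helper gymnastics.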

    While I would love nothing more than to recreate the issue and figure out why it was occurring, frankly, I do not have the time.  It was easier for me to swap my snap-installed Docker packages for apt-installed ones and move forward with my project.

  • Polyglot-on for Punishment

    I apologize for the misplaced pun, but Polyglot has been something of a nemesis of mine over the last year or so. The journey started with my Chamberlain MyQ-compatible garage door and a desire to control it through my ISY. Long story short (see here and here for more details), running Polyglot in a Docker image on my VM docker hosts is more of a challenge than I’m willing to tackle. Either it’s a networking issue between the container and the ISY or I’m doing something wrong, but in any case, I have resorted to running Polyglot on an extra Raspberry Pi that I have at home.

    Why the restart? Well, I recently renovated my master bathroom, and as part of that renovation I installed RGBW LED strips controlled by a MiLight controller. There is a node server for the MiLight WiFi box I purchased, so, in addition to the Insteon switches for the lights in that bathroom, I can now control the electronics in the bathroom through my ISY.

    While it would be incredibly nice to have all this running in a docker container, I’m not at all upset that I got it running on the Pi. My next course of action will be to restart the development of my HA Kiosk app…. Yes, I know there are options for apps to control the ISY, but I want a truly customized HA experience, and for that, I’d like to write my own app.

  • Supporting Teamcity Domain Authentication

    TLDR: TeamCity in Linux (or in a Linux Docker container) only supports SMBv1. Make sure you enable the SMB 1.0/CIFS File Sharing Support feature on your domain controllers.

    A few weeks ago, I decided it was time to upgrade my domain controllers. On a Hypervisor with space, it seemed a pretty easy task. My abbreviated steps were something like this:

    1. Remove the backup DC and shut the machine down.
    2. Create a new DC, add it to the domain, and replicate.
    3. Give the new DC the master roles.
    4. Remove the old primary DC from the domain and shut it down.
    5. Create a new backup DC and add it to the domain.

    Seems easy, right? Except that, during step 4, the old primary DC essentially gave up. I was forced to remove it from the domain manually.

    Also, while I was able to change the DHCP settings to reassign DNS servers for the clients which get their IP via DHCP, the machines with static IP addresses required more work to reset the DNS settings. But, after a bit of a struggle, I got it working.

    Except that I couldn’t log in to TeamCity using my domain credentials any more. I did some research and found that, on Linux, TeamCity only supports SMBv1, not SMBv2. So I installed the SMB 1.0/CIFS File Sharing Support feature on both domain controllers, and that fixed my authentication issues.

  • MS Teams Notifications Plugin

    I have spent the better part of my last 20 years working on software in one form or another. During that time, it’s been impossible to avoid open source software components in one form or another.

    I have not, until today, contributed back to that community in a large way. Perhaps I’ve suggested a change here or there, but never really took the opportunity to get involved. I suppose my best excuse is that I have a difficult time finding a spot to jump in and get to work.

    About two years ago, I ported a Teamcity Plugin for Slack notifications to get it to work with Microsoft Teams. It’s been in use at my current company since then, and it has a few users who happened to have found it on GitHub. I took the step today to publish this plugin to Jetbrains’ plugin library.

    So, here’s to my inaugural open source publication!

  • Windows docker containers for TeamCity Build Agents

    I have been using TeamCity for a few years now, primarily as a build tool for some of our platforms at work.  However, because I like to have a sandbox in which to play, I have been hosting an instance of TeamCity at home for roughly the same amount of time. 

    At first, I went with a basic install on a local VM and put the database on my SQL Server, and spun up another VM (a GUI Windows Server instance) which acted as my build agent.  I installed a full-blown copy of Visual Studio Community on the build agent, which provided me the ability to pretty much run any build I wanted.

    As some of my work research turned me towards containers, I realized that this setup was probably a little too heavy, and running some of my support systems (TeamCity, Unifi, etc.) in Docker makes them much easier to manage and update.

    I started small, with a Linux (Ubuntu) docker server running the Unifi Controller software and the TeamCity server containers.  Then this blog, which is hosted on the same docker server.  And then, as quickly as I had started, I quit containerizing.  I left the VM with the build agent running, and it worked. 

    Then I got some updated hardware, specifically a new hypervisor.  I was trying to consolidate VMs onto the new hypervisor, and for one reason or another, the build VM did not want to move nicely.  Whether the image was corrupt or what, I don’t know, but it was stuck.   So I took the opportunity to jump into Docker on Windows (or Windows Containers, or whatever you want to call it).

    I was able to take the base docker image that JetBrains provides and add the MSBuild tools to it. That gave me an image that I could use to run as a TeamCity Build agent. You can see my Dockerfile and docker-compose.yml files in my infrastructure repository.

  • Polyglot v2 and Docker – Success!

    I can’t believe it was 6 months ago that I first tried (and failed) to get the Polyglot v2 server from Universal Devices running on Docker. Granted, the problem had a workaround (put it on a Raspberry Pi), so I ignored the problems and just let my Pi do the work.

    But, well, I needed that Pi, so this issue reared its ugly head again. Getting back into it, I remembered that the application kept getting stuck on this error message:

    Auto Discovering ISY on local network.....

    My presumption is that something about the auto-discovery did not like the Docker network, and promptly puked a bit. To try to get past that, I set the ISY_HOST, ISY_PORT, and ISY_HTTPS environment variables in the docker-compose file. However, the portion of the code that skips the auto-detection of the ISY host doesn’t look at the container’s environment variables: it looks for variables stored in the .env file in ~/.polyglot. In the Docker environment using that particular docker-compose file, I wasn’t able to make that work, because there’s no mounted volume.

    The quick way out was to add this to the docker-compose.yml file:

    volumes:
      - /path/on/host:/root/.polyglot

    and then put a .env file on your host (/path/on/host/.env) and set your ISY_HOST, ISY_PORT, and ISY_HTTPS values in it. Polyglot should then skip the auto-discovery.
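    Put together, the host-side setup looks something like the following. POLYGLOT_DIR stands in for the real host path (/path/on/host) that docker-compose mounts at /root/.polyglot; the ISY values are the same placeholders used above.

```shell
# Sketch: create the host directory and .env file that the container
# will see at /root/.polyglot/.env via the compose volume mapping.
# POLYGLOT_DIR is a stand-in for your real host path.
POLYGLOT_DIR=${POLYGLOT_DIR:-/tmp/polyglot-demo}
mkdir -p "$POLYGLOT_DIR"
cat > "$POLYGLOT_DIR/.env" <<'EOF'
ISY_HOST=my.isy.address
ISY_PORT=80
ISY_HTTPS=false
EOF
```
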

    Also, by default, the container wants to run in HTTPS (https://localhost:3000). Since I use SSL offloading, I turned that off (USE_HTTPS=false in the .env file).

    Here are my new Dockerfile and docker-compose.yml files. Note a few differences:

    1. Based the build off of ubuntu/trusty instead of debian/stretch. You can use whatever you like there, although if you are using a different architecture, the link to download the binaries will have to change.
    2. Created a new directory on the host (/var/polyglot) and created a symbolic link from /root/.polyglot to /var/polyglot.
    3. Added a volume at /var/polyglot in the Dockerfile
    4. Mapped volumes on both the MongoDB service and the Polyglot service (/var/polyglot) to preserve data across compose up/down.
    5. My .env file looks like this:
       ISY_HOST=my.isy.address
       ISY_PORT=80
       ISY_HTTPS=false
       USE_HTTPS=false

  • At long last… new input devices

    I have spent the better part of the last ten years using the Microsoft Wireless Natural 7000 keyboard and mouse… The only link I can find is to this Amazon listing, which is clearly inflating the price because it is no longer available.

    I broke the keyboard about a year ago, so I substituted the most basic Logitech keyboard I could find. At the time, I was watching the Das Keyboard 5Q, and cannot count the number of times I clicked the pre-order button but didn’t go through with it. There was something about having flashy keys that I just could not justify.

    So, I soldiered on with my Logitech keyboard and the Wireless Natural 7000 mouse. It has been showing signs of wear….

    My old friend…

    Notice the literal signs of wear on the mouse buttons and palm. I won’t even upload the picture of the bottom: the battery cover is broken and barely stays in place, and the feet are all but gone.

    After some debate, I invested in two new input devices:

    I have always liked the Das Keyboard models, but couldn’t justify the 5Q, especially since I realized that I don’t look at the keyboard enough to have keyboard notifications be useful. And while it still carries a high price point, I like the over-sized volume knob and USB 3.0 hub: it means I do not have to pull my laptop dock forward to find my side USB ports anymore.

    As for the mouse, well, it is well worth the cost. I have only been using it for a few hours, but I already notice the difference in ease of manipulation. The horizontal scroll button and the gesture button are great for programming additional tasks, and the Options software from Logitech is easy to use.

    So, my first few hours with the products have been great. We’ll see how the next few years go.

  • Building and deploying container applications

    On and off over the last few months, I have spent a considerable amount of time working to create a miniature version of what could be a production system at home. The goal, at least in this first phase, is to create an API which supports containerized deployment and a build and deploy pipeline to move this application through the various states of development (in my case, development, staging, and production).

    The Tools

    I chose tools that are currently being used (or may be used) at work to allow my research to be utilized in multiple areas, and it has allowed me to dive into a lot of new technologies and understand how everything works together.

    • Teamcity – I utilize TeamCity for building the API code and producing the necessary artifacts. In this case, the artifacts are docker images. Certainly, Jenkins or another build pipeline could be used here.
    • Octopus Deploy – This is the go-to deployment application for the Windows Server based applications at my current company, so I decided to utilize it to roll out the images to their respective container servers. It supports both Windows and Linux servers.
    • Proget/Docker Registry – I have an instance of ProGet which houses my internal nuget packages, and was hoping to use a repository feed to house my docker images. Alas, it doesn’t support Docker Registries properly, so I ended up standing up an instance of the Docker Registry for my private images.

    TeamCity and Docker

    The easiest of all these steps was adding Docker support to my ASP.NET Core 2.2 Web API. I followed the examples on Docker’s web site, and had a Dockerfile in my repository in a few minutes.

    From there, it was a matter of installing a TeamCity build agent on my internal Windows Docker Container server (windocker). Once this was done, I could use the Docker Runner plugin in TeamCity to build my docker image on windocker and then push it to my private repository.
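    The build side of that pipeline boils down to a docker build and a docker push against the private registry. A hedged sketch of what the Docker Runner effectively does on the agent; the registry address, image name, and tag scheme are placeholders, and the commands are echoed rather than executed:

```shell
# Sketch of the agent-side build/push; names and tags are placeholders.
# TeamCity exposes a build counter as BUILD_NUMBER; defaulted here.
REGISTRY="docker.internal:5000"   # assumed private registry address
IMAGE="myapi"
TAG="1.0.${BUILD_NUMBER:-0}"

echo "docker build -t ${REGISTRY}/${IMAGE}:${TAG} ."
echo "docker push ${REGISTRY}/${IMAGE}:${TAG}"
```
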

    Proget Registry and Octopus Deployment

    This is where I ran into a snag. Originally, I created a repository on my ProGet server to house these images, and TeamCity had no problem pushing the images to the private repository. The ProGet registry feeds, however, don’t fully support the Docker API. Specifically, Octopus Deploy calls out the missing _catalog endpoint, which is required for selection of the images during release. I tried manually entering the values (which involved some guessing), but with the errors I ran into, I did not want to continue.

    So I started following the instructions for deploying an instance of Docker Registry on my internal Linux Docker container server (docker). The documentation is very good, and I did not have any trouble until I tried to push a Windows image… I kept getting a 500 Internal Server Error with a blob unknown to registry error in the log. I came across this open issue regarding pushing Windows images. As it turns out, I had to disable validation in order to get that to work. Once I figured out that the proper environment variable was REGISTRY_VALIDATION_DISABLED=true (REGISTRY_VALIDATION_ENABLED=false didn’t work), I was able to push my Windows images to the registry.
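    For reference, the registry deployment that finally accepted Windows images looked roughly like this. The container name, port, and volume path are my assumptions; the registry:2 image and the REGISTRY_VALIDATION_DISABLED variable are the pieces from the error hunt above. The command is only assembled and printed here so the sketch is safe to run anywhere:

```shell
# Assemble (but only print) a docker run invocation for a private
# Docker Registry with layer validation disabled so Windows images push.
registry_cmd="docker run -d --name registry \
  -p 5000:5000 \
  -e REGISTRY_VALIDATION_DISABLED=true \
  -v /var/registry:/var/lib/registry \
  registry:2"
echo "$registry_cmd"
```
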

    Next Steps

    With that done, I can now use Octopus to deploy and promote my docker images from development to staging to production. As seen in the image, my current network topology has matching container servers for both Windows and Linux, along with servers for internal systems (including running my TeamCity Server and Build Agents as well as the Docker Registry).

    My current network topology

    My next goal is to have the Octopus Deployment projects for my APIs create and deploy APIs in my Gravitee.io instance. This will allow me to use an API management tool through all of my environments and provide a single source of documentation for these APIs. It will even allow me to expose my APIs for public usage, but with appropriate controls such as rate throttling and authentication.

    I will be maintaining my infrastructure files and scripts in a public GitHub repository.

  • Polyglot v2 and Docker

    Update: I was able to get this working on my Docker server. Check out the details here.


    Before you read through all of this:  No, I have not been able to get the Polyglot v2 server working in Docker on my docker host (Ubuntu 18.04).  I ended up following the instructions for installing on a Raspberry Pi using Raspbian.  

    Goal

    My goal was to get a Polyglot v2 server up and running so that I could run the MyQ node server and connect my ISY Home Automation controller to my garage door.  And since the ISY is connected through the ISY Portal, I could then configure my Amazon Echo to open and close the garage door.

    Approach #1 – Docker

    Since I already have an Ubuntu server running several docker containers, the most logical solution was to run the Polyglot server as a docker container.  And, according to this forum post, the author had already created a docker compose file for the Polyglot server and a MongoDB instance.  It would seem I should be running about as quickly as I could download the file and run the docker-compose command.  But it wasn’t that easy.

    The MongoDB server started, and as best I can tell, the Polyglot server started and got stuck looking for the ISY.  The compose file had the appropriate credentials and address for the ISY, so I am not sure why it was getting stuck there.  Additionally, I could not locate any logs with more detail than what I was getting from the application output, so it seemed as though I was stuck.  I tried a few things around modifying port numbers and the like, but to no avail.  In the interest of just wanting to get this to work, I attempted something else.

    Approach #2 – Small VM

    I wanted to see if I could get Polyglot running on a VM.  No, it would not be as flashy as Docker, but it should get the job done.  And it would still let me virtualize the machine on my server so I didn’t need to have a Raspberry Pi just hanging around to open and close the garage door.

    I ran into even more trouble here:  I had libcurl4 installed on the Ubuntu Server, but MongoDB wanted libcurl3.  Apparently this is something of a known issue, and there were some workarounds that I found.  But, quite frankly, I did not feel the need to dive into it.  I had a Raspberry Pi so I did the logical thing…

    Giving up…

    I gave up, so to speak. The Polyglot v2 instructions are written for installation on a Raspberry Pi. I have two at home, neither in much use, so I followed the installation instructions from the repository on one of my Raspberry Pis and a fresh SD card. It took about 30 minutes to get everything running, MyQ installed and configured, and the ISY Portal updated to let me open the garage door with my Echo. No special steps: just read the instructions and you are good.

    What’s next?

    Well, I would really like to get that server running in Docker.  It would make it a lot easier to manage and free up that Raspberry Pi for more hardware related things (like my home bar project!).  I posted a question to the ISY forums in the hopes that someone has seen a similar problem and might have some solutions for me.  But for now, I have a Raspberry Pi running that lets me open my garage door with the Amazon Echo.