  • Polyglot-on for Punishment

    I apologize for the misplaced pun, but Polyglot has been something of a nemesis of mine over the last year or so. The journey started with my Chamberlain MyQ-compatible garage door and a desire to control it through my ISY. Long story short (see here and here for more details), running Polyglot in a Docker container on my VM docker hosts is more of a challenge than I’m willing to tackle. Either it’s a networking issue between the container and the ISY or I’m doing something wrong, but in any case, I have resorted to running Polyglot on an extra Raspberry Pi that I have at home.

    Why the restart? Well, I recently renovated my master bathroom, and as part of that renovation I installed RGBW LED strips controlled by a MiLight controller. There is a node server for the MiLight WiFi box I purchased, so, in addition to the Insteon switches for the lights in that bathroom, I can now control the electronics in the bathroom through my ISY.

    While it would be incredibly nice to have all this running in a docker container, I’m not at all upset that I got it running on the Pi. My next course of action will be to restart the development of my HA Kiosk app…. Yes, I know there are options for apps to control the ISY, but I want a truly customized HA experience, and for that, I’d like to write my own app.

  • Supporting Teamcity Domain Authentication

    TLDR: TeamCity in Linux (or in a Linux Docker container) only supports SMBv1. Make sure you enable the SMB 1.0/CIFS File Sharing Support feature on your domain controllers.

    A few weeks ago, I decided it was time to upgrade my domain controllers. On a hypervisor with free space, it seemed a pretty easy task. My abbreviated steps were something like this:

    1. Remove the backup DC and shut the machine down.
    2. Create a new DC, add it to the domain, and replicate.
    3. Give the new DC the master roles.
    4. Remove the old primary DC from the domain and shut it down.
    5. Create a new backup DC and add it to the domain.

    Seems easy, right? Except that, during step 4, the old primary DC essentially gave up. I was forced to remove it from the domain manually.

    Also, while I was able to change the DHCP settings to reassign DNS servers for the clients that get their IP via DHCP, the machines with static IP addresses required more work to reset the DNS settings. But, after a bit of a struggle, I got it working.

    Except that I couldn’t log in to TeamCity using my domain credentials any more. I did some research and found that, on Linux, TeamCity only supports SMBv1, not SMBv2. So I installed the SMB 1.0/CIFS File Sharing Support feature on both domain controllers, and that fixed my authentication issues.
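    For reference, here is roughly how that feature can be enabled from an elevated PowerShell prompt on each domain controller. This is a sketch, not my exact commands: it assumes Windows Server with the ServerManager module available, where the feature name is FS-SMB1.

    ```powershell
    # Enable the SMB 1.0/CIFS File Sharing Support feature
    # (feature name assumes Windows Server; confirm with Get-WindowsFeature on your version)
    Install-WindowsFeature -Name FS-SMB1

    # Check the install state afterwards
    Get-WindowsFeature -Name FS-SMB1
    ```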

  • MS Teams Notifications Plugin

    I have spent the better part of the last 20 years working on software in one form or another. During that time, it’s been impossible to avoid open source software components.

    I have not, until today, contributed back to that community in a large way. Perhaps I’ve suggested a change here or there, but never really took the opportunity to get involved. I suppose my best excuse is that I have a difficult time finding a spot to jump in and get to work.

    About two years ago, I ported a TeamCity plugin for Slack notifications to work with Microsoft Teams. It’s been in use at my current company since then, and it has a few users who found it on GitHub. I took the step today of publishing this plugin to JetBrains’ plugin library.

    So, here’s to my inaugural open source publication!

  • Windows docker containers for TeamCity Build Agents

    I have been using TeamCity for a few years now, primarily as a build tool for some of our platforms at work.  However, because I like to have a sandbox in which to play, I have been hosting an instance of TeamCity at home for roughly the same amount of time. 

    At first, I went with a basic install on a local VM and put the database on my SQL Server, and spun up another VM (a GUI Windows Server instance) which acted as my build agent.  I installed a full-blown copy of Visual Studio Community on the build agent, which provided me the ability to pretty much run any build I wanted.

    As some of my work research turned me towards containers, I realized that this setup was probably a little too heavy, and that running some of my support systems (TeamCity, Unifi, etc.) in docker would make them much easier to manage and update.

    I started small, with a Linux (Ubuntu) docker server running the Unifi Controller software and the TeamCity server containers.  Then came this blog, which is hosted on the same docker server.  And then, as quickly as I had started, I quit containerizing.  I left the VM with the build agent running, and it worked.

    Then I got some updated hardware, specifically a new hypervisor.  I was trying to consolidate VMs onto the new hypervisor, and for one reason or another, the build VM did not want to move nicely.  Whether the image was corrupt or what, I don’t know, but it was stuck.   So I took the opportunity to jump into Docker on Windows (or Windows Containers, or whatever you want to call it).

    I was able to take the base docker image that JetBrains provides and add the MSBuild tools to it. That gave me an image that I could use to run as a TeamCity Build agent. You can see my Dockerfile and docker-compose.yml files in my infrastructure repository.
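    For the curious, the Dockerfile amounts to only a few lines. This is a minimal sketch rather than my exact file: the JetBrains image tag and the Build Tools workload name are assumptions you may need to adjust for your TeamCity version.

    ```dockerfile
    # Start from the official JetBrains Windows build agent image (tag is an assumption)
    FROM jetbrains/teamcity-agent:latest-windowsservercore

    # Download the Visual Studio Build Tools bootstrapper and install the MSBuild workload
    ADD https://aka.ms/vs/16/release/vs_buildtools.exe C:/vs_buildtools.exe
    RUN C:/vs_buildtools.exe --quiet --wait --norestart --nocache --add Microsoft.VisualStudio.Workload.MSBuildTools
    ```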

  • Polyglot v2 and Docker – Success!

    I can’t believe it was 6 months ago that I first tried (and failed) to get the Polyglot v2 server from Universal Devices running on Docker. Granted, the problem had a workaround (put it on a Raspberry Pi), so I ignored the problems and just let my Pi do the work.

    But, well, I needed that Pi, so this issue reared its ugly head again. Getting back into it, I remembered that the application kept getting stuck on this error message:

    Auto Discovering ISY on local network.....

    My presumption is that something about the auto-discovery did not like the Docker network, and it promptly puked a bit. To try to get past that, I set the ISY_HOST, ISY_PORT, and ISY_HTTPS environment variables in the docker-compose file. However, the portion of the code that skips autodetection of the ISY host doesn’t look at the container’s environment variables: it looks for environment variables stored in the .env file in ~/.polyglot. In the docker environment using that particular docker-compose file, I wasn’t able to make it work, because there’s no mounted volume.

    The quick way out was to add this to the docker-compose.yml file:

    volumes:
      - /path/on/host:/root/.polyglot

    and then put a .env file on your host (/path/on/host/.env) and set your ISY_HOST, ISY_PORT, and ISY_HTTPS values in it. Polyglot should then skip the auto-discovery.

    Also, by default, the container wants to run in HTTPS (https://localhost:3000). Since I use SSL offloading, I turned that off (USE_HTTPS=false in the .env file).

    Here are my new Dockerfile and docker-compose.yml files. Note a few differences:

    1. Based the build off of ubuntu/trusty instead of debian/stretch. You can use whatever you like there, although if you are using a different architecture, the link to download the binaries will have to change.
    2. Created a new directory on the host (/var/polyglot) and created a symbolic link from /root/.polyglot to /var/polyglot.
    3. Added a volume at /var/polyglot in the Dockerfile
    4. Mapped volumes on both the MongoDB service and the Polyglot service (/var/polyglot) to preserve data across compose up/down.
    5. My .env file looks like this:

       ISY_HOST=my.isy.address
       ISY_PORT=80
       ISY_HTTPS=false
       USE_HTTPS=false
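    Putting items 4 and 5 together, the relevant portion of my docker-compose.yml looks roughly like this. It is a sketch: the service names and the mongo data path are assumptions, and the rest of each service definition is omitted.

    ```yaml
    services:
      mongo:
        image: mongo
        volumes:
          # preserve the database across compose up/down
          - /var/polyglot/db:/data/db
      polyglot:
        build: .
        depends_on:
          - mongo
        volumes:
          # /root/.polyglot is symlinked to /var/polyglot in the Dockerfile,
          # so the .env file and node server data live on the host
          - /var/polyglot:/var/polyglot
    ```
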
  • At long last… new input devices

    I have spent the better part of the last ten years using the Microsoft Wireless Natural 7000 keyboard and mouse… The only link I can find is to this Amazon listing, which is clearly inflating the price because it is no longer available.

    I broke the keyboard about a year ago, so I substituted the most basic Logitech keyboard I could find. At the time, I was watching the Das Keyboard 5Q, and cannot count the number of times I clicked the pre-order button but didn’t go through with it. There was something about having flashy keys that I just could not justify.

    So, I soldiered on with my Logitech keyboard and the Wireless Natural 7000 mouse. It has been showing signs of wear….

    My old friend…

    Notice the literal signs of wear on the mouse buttons and palm. I won’t even upload the picture of the bottom: the battery cover is broken and barely stays in place, and the feet are all but gone.

    After some debate, I invested in two new input devices:

    I have always liked the Das Keyboard models, but couldn’t justify the 5Q, especially since I realized that I don’t look at the keyboard enough to have keyboard notifications be useful. And while it still carries a high price point, I like the over-sized volume knob and USB 3.0 hub: it means I do not have to pull my laptop dock forward to find my side USB ports anymore.

    As for the mouse, well, it is well worth the cost. I have only been using it for a few hours, but I already notice the difference in ease of manipulation. The horizontal scroll button and the gesture button are great for programming additional tasks, and the Options software from Logitech is easy to use.

    So, my first few hours with the products have been great. We’ll see how the next few years go.

  • Building and deploying container applications

    On and off over the last few months, I have spent a considerable amount of time working to create a miniature version of what could be a production system at home. The goal, at least in this first phase, is to create an API which supports containerized deployment and a build and deploy pipeline to move this application through the various states of development (in my case, development, staging, and production).

    The Tools

    I chose tools that are currently being used (or may be used) at work so that my research could be applied in multiple areas, and the exercise has allowed me to dive into a lot of new technologies and understand how everything works together.

    • TeamCity – I utilize TeamCity for building the API code and producing the necessary artifacts. In this case, the artifacts are docker images. Certainly, Jenkins or another build pipeline could be used here.
    • Octopus Deploy – This is the go-to deployment application for the Windows Server based applications at my current company, so I decided to utilize it to roll out the images to their respective container servers. It supports both Windows and Linux servers.
    • ProGet/Docker Registry – I have an instance of ProGet which houses my internal NuGet packages, and I was hoping to use a repository feed to house my docker images. Alas, it doesn’t support Docker registries properly, so I ended up standing up an instance of the Docker Registry for my private images.

    TeamCity and Docker

    The easiest of all these steps was adding Docker support to my ASP.NET Core 2.2 Web API. I followed the examples on Docker’s web site and had a Dockerfile in my repository in a few minutes.
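    The resulting Dockerfile follows the standard multi-stage pattern from those examples. Here is a sketch assuming a project named MyApi; the project name is a placeholder, and the version tags should match your API.

    ```dockerfile
    # Build stage: restore and publish with the .NET Core 2.2 SDK
    FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
    WORKDIR /src
    COPY MyApi.csproj ./
    RUN dotnet restore
    COPY . .
    RUN dotnet publish -c Release -o /app/publish

    # Runtime stage: only the ASP.NET Core runtime, no SDK
    FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
    WORKDIR /app
    COPY --from=build /app/publish .
    ENTRYPOINT ["dotnet", "MyApi.dll"]
    ```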

    From there, it was a matter of installing a TeamCity build agent on my internal Windows Docker Container server (windocker). Once this was done, I could use the Docker Runner plugin in TeamCity to build my docker image on windocker and then push it to my private repository.

    Proget Registry and Octopus Deployment

    This is where I ran into a snag. Originally, I created a repository on my ProGet server to house these images, and TeamCity had no problem pushing the images to the private repository. The ProGet registry feeds, however, don’t fully support the Docker API. Specifically, Octopus Deploy calls out the missing _catalog endpoint, which is required for selection of the images during release. I tried manually entering the values (which involved some guessing), but with the errors I ran into, I did not want to continue.

    So I started following the instructions for deploying an instance of Docker Registry on my internal Linux Docker container server (docker). The documentation is very good, and I did not have any trouble until I tried to push a Windows image… I kept getting a 500 Internal Server Error with a blob unknown to registry error in the log. I came across this open issue regarding pushing Windows images. As it turns out, I had to disable validation to get that to work. Once I figured out that the proper environment variable was REGISTRY_VALIDATION_DISABLED=true (REGISTRY_VALIDATION_ENABLED=false didn’t work), I was able to push my Windows images to the registry.
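    In docker-compose form, the registry service that finally worked looked something like this (a sketch: the port and host path are placeholders from my setup):

    ```yaml
    registry:
      image: registry:2
      ports:
        - "5000:5000"
      environment:
        # required to accept Windows images with foreign layers; see the linked issue
        - REGISTRY_VALIDATION_DISABLED=true
      volumes:
        # keep pushed images on the host across container restarts
        - /var/registry:/var/lib/registry
    ```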

    Next Steps

    With that done, I can now use Octopus to deploy and promote my docker images from development to staging to production. As seen in the image, my current network topology has matching container servers for both Windows and Linux, along with servers for internal systems (including running my TeamCity Server and Build Agents as well as the Docker Registry).

    My current network topology

    My next goal is to have the Octopus Deployment projects for my APIs create and deploy APIs in my Gravitee.io instance. This will allow me to use an API management tool through all of my environments and provide a single source of documentation for these APIs. It will even allow me to expose my APIs for public usage, but with appropriate controls such as rate throttling and authentication.

    I will be maintaining my infrastructure files and scripts in a public GitHub repository.

  • Polyglot v2 and Docker

    Update: I was able to get this working on my Docker server. Check out the details here.


    Before you read through all of this:  No, I have not been able to get the Polyglot v2 server working in Docker on my docker host (Ubuntu 18.04).  I ended up following the instructions for installing on a Raspberry Pi using Raspbian.  

    Goal

    My goal was to get a Polyglot v2 server up and running so that I could run the MyQ node server and connect my ISY Home Automation controller to my garage door.  And since the ISY is connected through the ISY Portal, I could then configure my Amazon Echo to open and close the garage door.

    Approach #1 – Docker

    Since I already have an Ubuntu server running several docker containers, the most logical solution was to run the Polyglot server as a docker container.  And, according to this forum post, the author had already created a docker compose file for the Polyglot server and a MongoDB instance.  It would seem I should be running about as quickly as I could download the file and run the docker-compose command.  But it wasn’t that easy.

    The MongoDB server started, and as best I can tell, the Polyglot server started and got stuck looking for the ISY.  The compose file had the appropriate credentials and address for the ISY, so I am not sure why it was getting stuck there.  Additionally, I could not locate any logs with more detail than what I was getting from the application output, so it seemed as though I was stuck.  I tried a few things around modifying port numbers and the like, but to no avail.  In the interest of just wanting to get this to work, I attempted something else.

    Approach #2 – Small VM

    I wanted to see if I could get Polyglot running on a VM.  No, it would not be as flashy as Docker, but it should get the job done.  And it would still let me virtualize the machine on my server so I didn’t need to have a Raspberry Pi just hanging around to open and close the garage door.

    I ran into even more trouble here:  I had libcurl4 installed on the Ubuntu Server, but MongoDB wanted libcurl3.  Apparently this is something of a known issue, and there were some workarounds that I found.  But, quite frankly, I did not feel the need to dive into it.  I had a Raspberry Pi so I did the logical thing…

    Giving up…

    I gave up, so to speak.  The Polyglot v2 instructions are written for installation on a Raspberry Pi.  I have two at home, neither in much use, so I followed the installation instructions from the repository on one of my Raspberry Pis and a fresh SD card.  It took about 30 minutes to get everything running, MyQ installed and configured, and the ISY Portal updated to let me open the garage door with my Echo.  No special steps; just read the instructions and you are good.

    What’s next?

    Well, I would really like to get that server running in Docker.  It would make it a lot easier to manage and free up that Raspberry Pi for more hardware related things (like my home bar project!).  I posted a question to the ISY forums in the hopes that someone has seen a similar problem and might have some solutions for me.  But for now, I have a Raspberry Pi running that lets me open my garage door with the Amazon Echo.

  • Why I am deleting Facebook

    This is extraordinarily random, but I thought it worth mentioning why I decided to finally request a full delete of my Facebook account.  The short answer:  I feel less connected when I am on it.

    The Why

    This was a fairly lengthy decision making process on my part, and there were a few big questions that I had to answer before I could commit to it.

    What about (insert friend’s name here)?

    As I perused my list of Facebook friends, it occurred to me that I already have phone numbers for those with whom I want to keep in touch.  There were a few notable exceptions, and I took some steps to remedy those cases: I just asked for a cell phone number and gave mine in return.  

    So my “real” friends, those with whom I want to cultivate a lifelong relationship, will probably be annoyed that I will be sending more text messages simply asking how things are going.  But again, it is active cultivation on my part, not the passive knowledge acquisition that Facebook promotes.

    How will I know what’s going on?

    This was probably the hardest question for me to answer.  Facebook is a drug of sorts, triggering a dopamine high.  So what am I to do when my sense of belonging and concept of self can no longer be easily satisfied by logging in to Facebook and checking the likes on my posts?

    Much like becoming physically fit, the answer is simple but not easy.  Consistent improvement through introspection and cultivation of relationships.  

    What about the gym!?!?

    My current gym uses social media (Facebook and Instagram) to communicate information about the gym and the community.  Our social media director outdoes herself when it comes to keeping folks up to date on the latest at the gym and doing her best to keep people engaged in the community.  

    However, if I am being honest, the sense of community does not come from Facebook, but from the people.  It is an amazing group that has helped me turn my physical well-being around.  Sure, I will have to be a little more inquisitive about people’s activities outside of the gym, but is it really so bad to be forced to talk to people?

    The How

    Building real relationships

    Why do we use the term “cultivate” when describing building friendships or relationships?  Because it is a continuous, difficult process that yields amazing results.  Farming is one of the most arduous and necessary tasks in the human world, and cultivation is the act of preparing the ground (i.e., digging, tilling, etc.).  Sure, mechanization has made things easier, but for thousands of years, farming was difficult, back-breaking work.

    So is maintaining friendships.  Facebook makes it easy:  we accept a friend request, skim our news feeds throughout the day, and we presume we are friends.  But that friendship is not real friendship: real friendship has an active component.  You listen to a friend when they have problems, you console them through a difficult time, and you celebrate with them in times of triumph.  When I re-evaluated my friend list on Facebook, only a fraction really met those criteria.

    Cutting the cord

    Why not just use it less?  I could remove it from my phone or monitor my own usage to ensure I am not spending too much time on Facebook.  Or I could deactivate my account for a time.

    For myself, deletion is the only option.  I am terrible at moderation.  When I find something that makes me feel good, I will keep going back to it.  For a few years I deactivated my account, but the draw was too great:  it was too easy to log back in and see who was up to what.

    The last thing I want is to make this some sort of rallying cry to drop Facebook.  Many people can use it responsibly and it provides connections that they would not get otherwise.  It has become, for better or worse, the center of the internet for most people.  But for me, it is all or nothing, and at this moment I prefer nothing.

    What about Instagram?

    I will be keeping my Instagram account, primarily to monitor my teenage son’s activity, but also because I feel like it is less intrusive.  I do not get the same dopamine hit on Instagram; for me, it acts mostly as a source of entertainment.

    Perhaps it is foolish of me to think that I can drop one and keep the other.  But I will be tracking my time on Instagram more closely in the coming months to ensure I am not simply replacing one vice with another.  The ultimate goal of my switch is to free up time for more interpersonal relationships.  More beers with friends, as it were.

  • TeamCity Plugin for Microsoft Teams Build Notifications

    TLDR;

    I wrote a TeamCity plugin that allows you to post notifications to Microsoft Teams.  Open source, based on tcSlackBuildNotifier.  Please use and contribute!

    GitHub Repository   Build Status

    If you really care…

    IRC is back, baby!  Well, not exactly.  Corporate messaging applications like Slack and Microsoft Teams are becoming fairly commonplace in companies big and small.  They allow for your standard one-on-one messaging, but their draw comes in the form of channels for hosting and storing long-running chatter (IRC!), forming one-off group chats for large and small projects or incidents, hosting video calls in both individual and group context, and generally being the “go to” application for communication within the company.  Additionally, most of these applications have a fairly robust API layer and plugin framework to allow those of us with the desire to do it a chance to integrate with the platform.

    A few years ago, my current company started with the free version of Slack.  As part of that, we used the tcSlackBuildNotifier plugin to allow TeamCity to post to Slack channels as a notification avenue.  The Slack implementation, however, was done through what could affectionately be referred to as “back channels”: a group signed up for it to use for their own purposes, and it exploded from there.  It was not long before we had outgrown the free version and were looking to purchase licenses.

    We ran into two problems: Slack is expensive for our company size, and as part of our agreements with Microsoft (specifically, Office 365), we already had access to Microsoft Teams.  After a lot of futile arguments from some of those using Slack, we went with Teams as our official company communication platform.

    My problem was that I couldn’t find a notification tool for Teams similar to the one we had for Slack, and my team and I really liked having the build notifications posted to the channels.  It allowed us a quick notification when something failed or succeeded after failure.

    So, I took the tcSlackBuildNotifier and modified it to allow posting to Microsoft Teams.  It is my first foray into Maven and what amounts to a re-introduction to Java, but it gets the job done.  I will not be terribly active in maintaining it:  if something breaks I will fix it, but I don’t anticipate new features coming out.  So, if you use TeamCity and Microsoft Teams, try it out!