Author: Matt

  • Windows docker containers for TeamCity Build Agents

    I have been using TeamCity for a few years now, primarily as a build tool for some of our platforms at work.  However, because I like to have a sandbox in which to play, I have been hosting an instance of TeamCity at home for roughly the same amount of time. 

    At first, I went with a basic install on a local VM and put the database on my SQL Server, and spun up another VM (a GUI Windows Server instance) which acted as my build agent.  I installed a full-blown copy of Visual Studio Community on the build agent, which provided me the ability to pretty much run any build I wanted.

    As some of my work research turned me towards containers, I realized that this setup was probably a little too heavy, and that running some of my support systems (TeamCity, Unifi, etc.) in Docker would make them much easier to manage and update.

    I started small, with a Linux (Ubuntu) docker server running the Unifi Controller software and the TeamCity server containers.  Then this blog, which is hosted on the same docker server.  And then, as quickly as I had started, I quit containerizing.  I left the VM with the build agent running, and it worked. 

    Then I got some updated hardware, specifically a new hypervisor.  I was trying to consolidate VMs onto the new hypervisor, and for one reason or another, the build VM did not want to move nicely.  Whether the image was corrupt or what, I don’t know, but it was stuck.   So I took the opportunity to jump into Docker on Windows (or Windows Containers, or whatever you want to call it).

    I was able to take the base Docker image that JetBrains provides and add the MSBuild tools to it. That gave me an image I could run as a TeamCity build agent. You can see my Dockerfile and docker-compose.yml files in my infrastructure repository.
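
    For reference, here is a minimal sketch of what such a Dockerfile can look like. Treat it as an outline built on assumptions rather than my exact file: the base image tag, the Build Tools bootstrapper URL, and the workload name may all need adjusting for your environment.

    # Base image tag is an assumption; JetBrains publishes Windows Server Core variants
    FROM jetbrains/teamcity-agent:latest-windowsservercore

    SHELL ["powershell", "-Command"]

    # Download the Visual Studio Build Tools bootstrapper and install only the
    # MSBuild workload (quiet install, no restart)
    ADD https://aka.ms/vs/16/release/vs_buildtools.exe C:/TEMP/vs_buildtools.exe
    RUN Start-Process C:/TEMP/vs_buildtools.exe -Wait -ArgumentList \
        '--quiet --wait --norestart --nocache --add Microsoft.VisualStudio.Workload.MSBuildTools'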

  • Polyglot v2 and Docker – Success!

    I can’t believe it was 6 months ago that I first tried (and failed) to get the Polyglot v2 server from Universal Devices running on Docker. Granted, the problem had a workaround (put it on a Raspberry Pi), so I ignored the problems and just let my Pi do the work.

    But, well, I needed that Pi, so this issue reared its ugly head again. Getting back into it, I remembered that the application kept getting stuck on this error message:

    Auto Discovering ISY on local network.....

    My presumption is that something about the auto-discovery did not like the Docker network and promptly puked a bit. To try to get past that, I set the ISY_HOST, ISY_PORT, and ISY_HTTPS environment variables in the docker-compose file. However, the portion of the code that skips the auto-detection of the ISY host doesn’t look at the container’s environment variables: it looks for variables stored in the .env file in ~/.polyglot. With that particular docker-compose file, I wasn’t able to make it work, because there’s no mounted volume.

    The quick way out was to add this to the docker-compose.yml file:

    volumes:
      - /path/on/host:/root/.polyglot

    and then put a .env file on your host (/path/on/host/.env) with your ISY_HOST, ISY_PORT, and ISY_HTTPS values set in it. Polyglot should then skip the auto-discovery.

    Also, by default, the container wants to run in HTTPS (https://localhost:3000). Since I use SSL offloading, I turned that off (USE_HTTPS=false in the .env file).

    Here are my new Dockerfile and docker-compose.yml files. Note a few differences:

    1. Based the build off of ubuntu/trusty instead of debian/stretch. You can use whatever you like there, although if you are using a different architecture, the link to download the binaries will have to change.
    2. Created a new directory on the host (/var/polyglot) and created a symbolic link from /root/.polyglot to /var/polyglot.
    3. Added a volume at /var/polyglot in the Dockerfile.
    4. Mapped volumes on both the MongoDB service and the Polyglot service (/var/polyglot) to preserve data across compose up/down (see the sketch after this list).
    5. My .env file looks like this:
    ISY_HOST=my.isy.address
    ISY_PORT=80
    ISY_HTTPS=false 
    USE_HTTPS=false
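
    Putting those pieces together, the docker-compose.yml ends up looking roughly like the sketch below. This is an illustration built on assumptions, not a verbatim copy of my file: the image name, build context, and host paths may differ in your setup.

    version: "3"
    services:
      mongo:
        image: mongo                      # MongoDB backing store for Polyglot
        volumes:
          - /var/polyglot/db:/data/db     # persist the database across compose up/down
      polyglot:
        build: .                          # built from the Dockerfile described above
        depends_on:
          - mongo
        ports:
          - "3000:3000"
        volumes:
          - /var/polyglot:/var/polyglot   # the .env file lives here; /root/.polyglot is a symlink to it
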
  • At long last… new input devices

    I have spent the better part of the last ten years using the Microsoft Wireless Natural 7000 keyboard and mouse… The only link I can find is to this Amazon listing, which is clearly inflating the price because it is no longer available.

    I broke the keyboard about a year ago, so I substituted the most basic Logitech keyboard I could find. At the time, I was watching the Das Keyboard 5Q, and cannot count the number of times I clicked the pre-order button but didn’t go through with it. There was something about having flashy keys that I just could not justify.

    So, I soldiered on with my Logitech keyboard and the Wireless Natural 7000 mouse. It has been showing signs of wear…

    My old friend…

    Notice the literal signs of wear on the mouse buttons and palm. I won’t even upload the picture of the bottom: the battery cover is broken and barely stays in place, and the feet are all but gone.

    After some debate, I invested in two new input devices:

    I have always liked the Das Keyboard models, but couldn’t justify the 5Q, especially since I realized that I don’t look at the keyboard enough to have keyboard notifications be useful. And while it still carries a high price point, I like the over-sized volume knob and USB 3.0 hub: it means I do not have to pull my laptop dock forward to find my side USB ports anymore.

    As for the mouse, well, it is well worth the cost. I have only been using it for a few hours, but I already notice the difference in ease of manipulation. The horizontal scroll button and the gesture button are great for programming additional tasks, and the Options software from Logitech is easy to use.

    So, my first few hours with the products have been great. We’ll see how the next few years go.

  • Building and deploying container applications

    On and off over the last few months, I have spent a considerable amount of time working to create a miniature version of what could be a production system at home. The goal, at least in this first phase, is to create an API which supports containerized deployment, and a build-and-deploy pipeline to move this application through the various environments (in my case, development, staging, and production).

    The Tools

    I chose tools that are currently being used (or may be used) at work so that my research could be applied in multiple areas, and the exercise has let me dive into a lot of new technologies and understand how everything works together.

    • TeamCity – I utilize TeamCity for building the API code and producing the necessary artifacts; in this case, the artifacts are Docker images. Certainly, Jenkins or another build pipeline could be used here.
    • Octopus Deploy – This is the go-to deployment application for the Windows Server based applications at my current company, so I decided to utilize it to roll out the images to their respective container servers. It supports both Windows and Linux servers.
    • ProGet/Docker Registry – I have an instance of ProGet which houses my internal NuGet packages, and I was hoping to use a repository feed to house my Docker images. Alas, it doesn’t support the Docker Registry API properly, so I ended up standing up an instance of Docker Registry for my private images.

    TeamCity and Docker

    The easiest of all these steps was adding Docker support to my ASP.NET Core 2.2 Web API. I followed the examples on Docker’s web site and had a Dockerfile in my repository in a few minutes.
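
    For flavor, here is a sketch of the kind of multi-stage Dockerfile those examples walk you through. The project layout and assembly name are placeholders, not my actual API:

    # Build stage: restore and publish the project with the .NET Core 2.2 SDK
    FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
    WORKDIR /src
    COPY . .
    RUN dotnet publish -c Release -o /app

    # Runtime stage: copy the published output into the slimmer ASP.NET Core runtime image
    FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
    WORKDIR /app
    COPY --from=build /app .
    ENTRYPOINT ["dotnet", "MyApi.dll"]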

    From there, it was a matter of installing a TeamCity build agent on my internal Windows Docker Container server (windocker). Once this was done, I could use the Docker Runner plugin in TeamCity to build my docker image on windocker and then push it to my private repository.

    Proget Registry and Octopus Deployment

    This is where I ran into a snag. Originally, I created a repository on my ProGet server to house these images, and TeamCity had no problem pushing the images to the private repository. The ProGet registry feeds, however, don’t fully implement the Docker Registry API. Specifically, Octopus Deploy calls out the missing _catalog endpoint, which it requires for selecting images during a release. I tried manually entering the values (which involved some guessing), but with the errors I ran into, I did not want to continue.

    So I started following the instructions for deploying an instance of Docker Registry on my internal Linux Docker Container Server (docker). The documentation is very good, and I did not have any trouble until I tried to push a Windows image… I kept getting a 500 Internal Server Error with a “blob unknown to registry” error in the log. I came across this open issue regarding pushing Windows images. As it turns out, I had to disable validation in order to get that to work. Once I figured out that the proper environment variable was REGISTRY_VALIDATION_DISABLED=true (REGISTRY_VALIDATION_ENABLED=false didn’t work), I was able to push my Windows images to the registry.
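
    For anyone hitting the same wall, the relevant piece of the registry deployment looks roughly like the compose sketch below. The port and host path are illustrative, not prescriptive:

    services:
      registry:
        image: registry:2
        ports:
          - "5000:5000"
        environment:
          # disabling validation allows Windows images (with foreign layers) to be pushed
          - REGISTRY_VALIDATION_DISABLED=true
        volumes:
          - /var/registry:/var/lib/registry   # persist pushed images on the host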

    Next Steps

    With that done, I can now use Octopus to deploy and promote my docker images from development to staging to production. As seen in the image, my current network topology has matching container servers for both Windows and Linux, along with servers for internal systems (including running my TeamCity Server and Build Agents as well as the Docker Registry).

    My current network topology

    My next goal is to have the Octopus Deployment projects for my APIs create and deploy APIs in my Gravitee.io instance. This will allow me to use an API management tool through all of my environments and provide a single source of documentation for these APIs. It will even allow me to expose my APIs for public usage, but with appropriate controls such as rate throttling and authentication.

    I will be maintaining my infrastructure files and scripts in a public GitHub repository.

  • Polyglot v2 and Docker

    Update: I was able to get this working on my Docker server. Check out the details here.


    Before you read through all of this:  No, I have not been able to get the Polyglot v2 server working in Docker on my docker host (Ubuntu 18.04).  I ended up following the instructions for installing on a Raspberry Pi using Raspbian.  

    Goal

    My goal was to get a Polyglot v2 server up and running so that I could run the MyQ node server and connect my ISY Home Automation controller to my garage door.  And since the ISY is connected through the ISY Portal, I could then configure my Amazon Echo to open and close the garage door.

    Approach #1 – Docker

    Since I already have an Ubuntu server running several docker containers, the most logical solution was to run the Polyglot server as a docker container.  And, according to this forum post, the author had already created a docker compose file for the Polyglot server and a MongoDB instance.  It would seem I should have been up and running about as quickly as I could download the file and run the docker-compose command.  But it wasn’t that easy.

    The MongoDB server started, and as best I can tell, the Polyglot server started and got stuck looking for the ISY.  The compose file had the appropriate credentials and address for the ISY, so I am not sure why it was getting stuck there.  Additionally, I could not locate any logs with more detail than what I was getting from the application output, so it seemed as though I was stuck.  I tried a few things around modifying port numbers and the like, but to no avail.  In the interest of just wanting to get this to work, I attempted something else.

    Approach #2 – Small VM

    I wanted to see if I could get Polyglot running on a VM.  No, it would not be as flashy as Docker, but it should get the job done.  And it would still let me virtualize the machine on my server so I didn’t need to have a Raspberry Pi just hanging around to open and close the garage door.

    I ran into even more trouble here:  I had libcurl4 installed on the Ubuntu Server, but MongoDB wanted libcurl3.  Apparently this is something of a known issue, and there were some workarounds that I found.  But, quite frankly, I did not feel the need to dive into it.  I had a Raspberry Pi so I did the logical thing…

    Giving up…

    I gave up, so to speak.  The Polyglot v2 instructions are written for installation on a Raspberry Pi.  I have two at home, neither in much use, so I followed the installation instructions from the repository on one of my Raspberry Pis with a fresh SD card.  It took about 30 minutes to get everything running, MyQ installed and configured, and the ISY Portal updated to let me open the garage door with my Echo.  No special steps:  just read the instructions and you are good.

    What’s next?

    Well, I would really like to get that server running in Docker.  It would make it a lot easier to manage and free up that Raspberry Pi for more hardware-related things (like my home bar project!).  I posted a question to the ISY forums in the hopes that someone has seen a similar problem and might have some solutions for me.  But for now, I have a Raspberry Pi running that lets me open my garage door with the Amazon Echo.

  • Why I am deleting Facebook

    This is extraordinarily random, but I thought it worth mentioning why I decided to finally request a full delete of my Facebook account.  The short answer:  I feel less connected when I am on it.

    The Why

    This was a fairly lengthy decision-making process on my part, and there were a few big questions that I had to answer before I could commit to it.

    What about (insert friend’s name here)?

    As I perused my list of Facebook friends, it occurred to me that I already have phone numbers for those with whom I want to keep in touch.  There were a few notable exceptions, and I took some steps to remedy those cases: I just asked for a cell phone number and gave mine in return.  

    So my “real” friends, those with whom I want to cultivate a lifelong relationship, will probably be annoyed that I will be sending more text messages simply asking how things are going.  But again, it is active cultivation on my part, not the passive knowledge acquisition that Facebook promotes.

    How will I know what’s going on?

    This was probably the hardest question for me to answer.  Facebook is a drug of sorts, triggering a dopamine high.  So what am I to do when my sense of belonging and concept of self can no longer be easily satisfied by logging in to Facebook and checking the likes on my posts?

    Much like becoming physically fit, the answer is simple but not easy.  Consistent improvement through introspection and cultivation of relationships.  

    What about the gym!?!?

    My current gym uses social media (Facebook and Instagram) to communicate information about the gym and the community.  Our social media director outdoes herself when it comes to keeping folks up to date on the latest at the gym and doing her best to keep people engaged in the community.  

    However, if I am being honest, the sense of community does not come from Facebook, but from the people.  It is an amazing group that has helped me turn my physical well-being around.  Sure, I will have to be a little more inquisitive about people’s activities outside of the gym, but is it really so bad to be forced to talk to people?

    The How

    Building real relationships

    Why do we use the term “cultivate” when describing building friendships or relationships?  Because it is a continuous, difficult process that yields amazing results.  Farming is one of the most arduous and necessary tasks in the human world, and cultivation is the act of preparing the ground (digging, tilling, and so on).  Sure, mechanization has made things easier, but for thousands of years, farming was difficult, back-breaking work.

    So is maintaining friendships.  Facebook makes it easy:  we accept a friend request, skim our news feeds throughout the day, and we presume we are friends.  But that friendship is not real friendship: real friendship has an active component.  You listen to a friend when they have problems, you console them through a difficult time, and you celebrate with them in times of triumph.  When I re-evaluated my friend list on Facebook, only a fraction really met those criteria.

    Cutting the cord

    Why not just use it less?  I could remove it from my phone or monitor my own usage to ensure I am not spending too much time on Facebook.  Or I could deactivate my account for a time.

    For myself, deletion is the only option.  I am terrible at moderation.  When I find something that makes me feel good, I will keep going back to it.  For a few years I deactivated my account, but the draw was too great:  it was too easy to log back in and see who was up to what.

    The last thing I want is to make this some sort of rallying cry to drop Facebook.  Many people can use it responsibly and it provides connections that they would not get otherwise.  It has become, for better or worse, the center of the internet for most people.  But for me, it is all or nothing, and at this moment I prefer nothing.

    What about Instagram?

    I will be keeping my Instagram account, primarily to monitor my teenage son’s activity, but also because I feel like it is less intrusive.  I do not get the same dopamine hit on Instagram; for me, it acts mostly as a source of entertainment.

    Perhaps it is foolish of me to think that I can drop one and keep the other.  But I will be tracking my time on Instagram more closely in the coming months to ensure I am not simply replacing one vice with another.  The ultimate goal of my switch is to free up time for more interpersonal relationships.  More beers with friends, as it were.

  • TeamCity Plugin for Microsoft Teams Build Notifications

    TL;DR

    I wrote a TeamCity plugin that allows you to post notifications to Microsoft Teams.  Open source, based on tcSlackBuildNotifier.  Please use and contribute!

    GitHub Repository   Build Status

    If you really care…

    IRC is back, baby!  Well, not exactly.  Corporate messaging applications like Slack and Microsoft Teams are becoming fairly commonplace in companies big and small.  They allow for your standard one-on-one messaging, but their draw comes in the form of channels for hosting and storing long-running chatter (IRC!), forming one-off group chats for large and small projects or incidents, hosting video calls in both individual and group contexts, and generally being the “go to” application for communication within the company.  Additionally, most of these applications have a fairly robust API layer and plugin framework that gives those of us so inclined a chance to integrate with the platform.

    A few years ago, my current company started with the free version of Slack.  As part of that, we used the tcSlackBuildNotifier plugin to allow TeamCity to post to Slack channels as a notification avenue.  The Slack implementation, however, was done through what could affectionately be referred to as “back channels”: a group signed up for it to use for their own purposes, and it exploded from there.  It was not long before we had outgrown the free version and were looking to purchase licenses.

    We ran into two problems: Slack is expensive for our company size, and as part of our agreements with Microsoft (specifically, Office 365), we already had access to Microsoft Teams.  After a lot of futile arguments from some of those using Slack, we went with Teams as our official company communication platform.

    My problem was that I couldn’t find a notification tool for Teams similar to what we had for Slack, and my team and I really liked having the build notifications posted to the channels.  They gave us a quick heads-up when something failed or succeeded after a failure.

    So, I took the tcSlackBuildNotifier and modified it to allow for posting to Microsoft Teams.  It is my first foray into Maven and what amounts to a re-introduction to Java, but it gets the job done.  I will not be terribly active in maintaining it:  if something breaks I will fix it, but I don’t anticipate new features coming out.  So, if you use TeamCity and Microsoft Teams, try it out!


  • First Reactions – ISY994i ZW/IR PRO Controller – Insteon, Z-Wave & IR + PLM

    As I posted last week, I started a journey down the home automation path with the purchase of a controller and some Insteon switches and sensors.  Given my current schedule with work and kids, it’s hard to find time to do much, but I was able to install the switches and one of the sensors and get a few scenes and programs set up, so I’ll run through a quick review.

    ISY994i ZW/IR PRO Controller – Insteon, Z-Wave & IR + PLM

    The controller, purchased from SmartHome.com, was extremely easy to install:  I plugged in the PLM, connected the PLM to the controller and the controller to my switch, and then plugged in the power cable for the controller.  Configuration required downloading and running a Java applet.  As a developer, I find the interface fine; however, it lacks the ease of use that we have come to expect in our increasingly mobile-centric world.  With some poking around, though, I was able to set up some scenes and a few programs.

    Of note:  I had to install Universal Devices’ “alpha” version of the 5.0 software in order to get compatibility with the Siren module, which isn’t exactly a new module.  From what I could tell, version 5 of the ISY Portal has been in development for at least two years now, so I’m not sure what the delay is about, but the installation of version 5 was pretty easy and I haven’t run into any major problems yet.

    The controller has a pretty substantial REST API which I believe I could use as the backend for a better interface, either a mobile app or a responsive web app.  That being said, that’s a lot of work.  If I were sure I would only be using Insteon products, I might have opted for an Insteon Hub, but I already know I want to interface with some other systems, so I’ll take my chances with this controller.

    Insteon Remote Toggle Switch

    Let me preface this review with the following disclaimer:  I am reasonably comfortable with residential electrical wiring, to the degree that I planned and installed all the wiring for lighting and switches when we finished our basement.  That said, the Insteon switches were pretty easy to install, but I do have a word of warning:  they are bulky.  Before you go investing a ton into them, it might be wise to buy one or two of the ones you want and test-fit them in your home’s electrical work boxes.  I know for a fact that in a few of my switch boxes I may not have room for them without trimming existing wiring to make some room.

    Once installed, the controller did a great job of picking up the new hardware through a “device add” wizard.  From there, adding it to scenes and programs was pretty easy.  I was able to add a program to run my pool pump for 12 hours during the day and create a scene which turns on both the pool pump and the lights.

    A word of caution, though:  if you plan on doing a large install and then having the controller pick up changes, make sure you write down the network address for each switch and where you installed it.  This will allow you to quickly name your switches for identification without doing what I did, which was to turn each switch on, one at a time, and then check the controller to see which switch showed the “On” state.

    Overall, I am impressed with the system.  The controller provides a good balance of features and usability, but as I mentioned, it leans more towards features and less towards a slick user interface.  It’s perfect for someone who wants all the features of home automation at a reasonable cost.  And with the APIs and a few third party software solutions, the UI issues can be addressed.

  • Go big or go home automation….

    First and foremost:  I know, the title really is terrible, but my wit is adjusting to the 30-degree temperature swing our little corner of the world experienced over the last 24 hours.

    Last fall, we put in a pool.  The crazy weather over the last six months, though, means that we still have a few more odds and ends to complete before we can officially call it done.  Two things in this project have led me to put a little investment into home automation over the last few days.

    There are some pretty fancy (and expensive) systems for controlling your pool.  Most of the Pentair systems start at around $600 for just the controllers and go way up from there.  While I would have loved to control all of that through the Pentair systems, I did not want to add another two thousand dollars to the cost of the pool, so we opted out of that one.

    While the electricians were here last week, I mentioned those systems, and the electrician said he would add switches for the pool lights and pump.  I asked him if I could replace the switches he installed with a few home automation-compatible switches, and he said sure.  So, with the right controller, I can control my pool lights and pump from a standard home automation system.  And so I started thinking, which is always an expensive proposition.

    In addition to that, per our building code, we need to put an alarm on the back door, since our pool layout is such that the house forms one side of the barrier around the pool (our new fence is the other three sides).  Of course, I could have just purchased a $20 door alarm, but where is the fun in that?  So I went shopping and found a door sensor and alarm unit that will meet our building codes and let me add some additional monitoring to the house.

    As for the technical specs, I went with the Universal Devices controller with the PLM module from SmartHome.com.  The overall completeness of this package was a huge draw, not to mention a fairly substantial REST API which should let me tinker with integrating the controller with some other items in my home.

    I bought a few of the simple Insteon toggle switches and a four pack of the Insteon open/close sensors for the starter pack, as well as the Insteon Siren for the alarm sound.  My hope, however, is to tie all of this in to my Amazon Echo unit so that the Echo can generate the audio alerts for certain actions.

    So yea, I may have spent a little more than $20, but this little adventure into home automation is something that I have wanted to investigate for a few years now, and it seems like the right time to do so.  That, coupled with a convenient sale on Insteon products from SmartHome, led me to jump into this a bit more than I initially planned.

    Stay tuned for more updates as I get the system up and running.

  • Getting organized with myAgile

    Organization, prioritization, and execution are the keys to success.  Many of us work in positions where projects can run over extended periods, interruptions and distractions are frequent, and we are asked to juggle a variety of responsibilities and tasks.  However, no matter where you work, if you can get things done, you will be viewed as successful.

    There are more books about organization and task management than I care to list here, and I am pretty sure that if you have spent any time in a company which puts some effort into employee training, you have been asked to read one or more of these books in order to improve your efficiency and throughput.  Each author’s method has its own strengths and weaknesses, but reading any of them and implementing at least one aspect of the method can often put you in a better place than you are now.

    The process you use to organize yourself, however, works best if it is personalized.  Yes, it can be based on other people’s ideas, but the most organized people I know have taken parts of other people’s processes and forged them into a unique process that works with their style and situation.  Additionally, their processes evolve over time.  Nothing is stagnant, and new methodologies or technologies will certainly appear that will add a new dimension to your process.

    Recently, I began a bit of a personal journey to get organized again.  The stress of work and family has taken its toll on my desire to organize, and things just were not getting done.  With some prodding from a former colleague, I melded some of the organizational techniques I have picked up over the years with the concepts of Agile software development to come up with an Agile process for organizing my life.  I have dubbed it myAgile.

    Like all good Agile processes, the idea is not just to get things done, but to also identify ways to get better.  So if you are looking for a new way to get organized, give it a shot:  the worst that can happen is you put to paper all the things that you want to do in the next few months.