Tag: docker

  • Migrating to Github Packages

    I have been running a free version of Proget locally for years now. It served as a home for Nuget packages, Docker images, and Helm charts for my home lab projects. But, in an effort to slim down the apps that are running in my home lab, I took a look at some alternatives.

    Where can I put my stuff?

    When I logged in to my Proget instance and looked around, it occurred to me that I only had 3 types of feeds: Nuget packages, Docker images, and Helm charts. So to move off of Proget, I need to find replacements for all of these.

    Helm Charts

    Back in the heady days of using Octopus Deploy for my home lab, I used published Helm charts to deploy my applications. However, since I switched to a Gitops workflow with ArgoCD, I haven’t published a Helm chart in a few years. I deleted that feed in Proget. One down, two to go.

    Nuget Packages

    I have made a few different attempts to create Nuget packages for public consumption. A number of years ago, I tried publishing a data layer that was designed to be used across platforms (think APIs and mobile applications), but even I stopped using that in favor of Entity Framework Core and good old-fashioned data models. More recently, I created some “platform” libraries to encapsulate some of the common code that I use in my APIs and other projects. They serve as utility libraries as well as a reference architecture for my professional work.

    There are a number of options for hosting Nuget feeds, with varying costs depending on structure. I considered the following options:

    • Azure DevOps Artifacts
    • Github Packages
    • Nuget.org

    I use Azure DevOps for my builds, and briefly considered using the artifacts feeds. However, none of my libraries are private. Everything I am writing is a public repository in Github. With that in mind, it seemed that the free offerings from Github and Nuget were more appropriate.

    I published the data layer packages to Nuget previously, so I have some experience with that. However, with these platform libraries, while they are public, I do not expect them to be heavily used. For that reason, I decided that publishing the packages to Github Packages made a little more sense. If these platform libraries get to the point where they are heavily used, I can always publish stable packages to Nuget.org.

    Container Images

    Container images account for the bulk of my Proget storage. I only have 5 container images, but I never clean anything up, so those 5 images are taking up about 7 GB. When I was investigating alternatives, I wanted to make sure I had some way to clean up old pre-release tags and manifests to keep my usage down.

    I considered two alternatives:

    • Azure Container Registry
    • Github Container Registry

    An Azure Container Registry instance would cost me about $5 a month and provide me with 10 GB of storage. Github Container Registry provides 500 MB of storage and 1 GB of data transfer per month, but those free limits only apply to private packages.

    As with my Nuget packages, nothing that I have is private, and Github Packages is free for public packages. Additionally, I found a Github task that will clean up the old images. As this was one of my “new” requirements, I decided to take a run at Github Packages.

    Making the switch

    With my current setup, the switch was fairly simple. Nuget publishing is controlled by my Azure DevOps service connections, so I created a new service connection for my Github feed. The biggest change was some housekeeping to add appropriate information to the Nuget package itself. This included adding the RepositoryUrl property to the .csproj files, which tells Github which repository to associate the package with.
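
    For anyone following along, here is a rough sketch of what the package side looks like from the command line (OWNER, REPO, and the token are placeholders; my actual push happens through the Azure DevOps service connection rather than a terminal):

      # The package metadata in the .csproj ties the package to a repository, e.g.:
      #   <RepositoryUrl>https://github.com/OWNER/REPO</RepositoryUrl>

      # Register the Github Packages Nuget feed (a PAT with write:packages is assumed)
      dotnet nuget add source "https://nuget.pkg.github.com/OWNER/index.json" \
        --name github --username OWNER --password "$GITHUB_TOKEN" --store-password-in-clear-text

      # Pack and push
      dotnet pack -c Release
      dotnet nuget push "bin/Release/*.nupkg" --source github --api-key "$GITHUB_TOKEN"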

    The container registry wasn’t much different: again, some housekeeping to add the appropriate labels to the images. From there, a few template changes and the images were in the Github container registry.
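
    As a rough sketch of that housekeeping (OWNER, REPO, and the image name are placeholders; my builds do this through Azure DevOps templates rather than by hand), the OCI source label is what links a pushed image to its repository:

      # In the Dockerfile, link the image to its repository:
      #   LABEL org.opencontainers.image.source="https://github.com/OWNER/REPO"

      # Log in to the Github container registry and push (a PAT with write:packages is assumed)
      echo "$GITHUB_TOKEN" | docker login ghcr.io -u OWNER --password-stdin
      docker build -t ghcr.io/OWNER/my-api:1.0.0 .
      docker push ghcr.io/OWNER/my-api:1.0.0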

    Overall, the changes were pretty minimal. I have a few projects left to convert, and once that is done, I can decommission my Proget instance.

    Next on the chopping block…

    I am in the beginning stages of evaluating Azure Key Vault as a replacement for my Hashicorp Vault instance. Although it comes at a cost, for my usage it is most likely under $3 a month, and getting away from self-hosted secrets management would make me a whole lot happier.

  • Tech Tip – Chiseled Images from Microsoft

    I have been spending a considerable amount of time in .Net 8 lately. In addition to some POC work, I have been transitioning some of my personal projects to .Net 8. While the details of that work will be the topic of a future post (or posts), Microsoft’s chiseled containers are worth a quick note.

    In November, Microsoft released .NET Chiseled Containers into GA. These are slimmed-down versions of the .NET Linux container images, focused on providing a “bare bones” base image that can be used for a variety of containers.

    If you are building containers from Microsoft’s .NET container images, chiseled containers are worth a look!
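
    As a minimal sketch of what the switch looks like in a Dockerfile (the project and image names here are illustrative, not from my actual projects):

      cat > Dockerfile <<'EOF'
      FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
      WORKDIR /src
      COPY . .
      RUN dotnet publish MyApi.csproj -c Release -o /app/publish

      # Swap aspnet:8.0-jammy for the chiseled variant to slim down the final image.
      # Note that the .NET 8 images listen on port 8080 and run as a non-root user by default.
      FROM mcr.microsoft.com/dotnet/aspnet:8.0-jammy-chiseled
      WORKDIR /app
      COPY --from=build /app/publish .
      ENTRYPOINT ["dotnet", "MyApi.dll"]
      EOF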

    A Quick Note on Globalization

    I tried moving two of my containers to the 8.0-jammy-chiseled base image. The frontend, with no database connection, worked fine. However, the API with the database connection ran into a globalization issue.

    Apparently, Microsoft.Data.SqlClient requires a few OS libraries that are not part of chiseled. Specifically, the International Components for Unicode (ICU) libraries are not included by default in the chiseled image. Ubuntu-rocks demonstrates how they can be added, but, for now, I am leaving that image on the standard 8.0-jammy image.

  • Installing Minio on a Synology Diskstation with Nginx SSL

    In an effort to get rid of a virtual machine on my hypervisor, I wanted to move my Minio instance to my Synology. Keeping the storage interface close to the storage itself helps with latency and is, well, one less thing I have to worry about in my home lab.

    There are a few guides out there for installing Minio on a Synology. Jaroensak Yodkantha walks you through the full process of setting up the Synology and Minio using a docker command line. The folks over at BackupAssist show you how to configure Minio through the Diskstation Manager web portal. I used the BackupAssist article to get myself started, but found myself tweaking the setup because I want to have SSL communication available through my Nginx reverse proxy.

    The Basics

    Prep Work

    I went into the Shared Folder section of the DSM control panel and created a new shared folder called minio. The settings on this share are pretty much up to you, but I did this so that all of my Minio data was in a known location.

    Within the minio folder, I created a data folder and a blank text file called minio. Inside the minio file, I set up my Minio configuration:

    # MINIO_ROOT_USER and MINIO_ROOT_PASSWORD sets the root account for the MinIO server.
    # This user has unrestricted permissions to perform S3 and administrative API operations on any resource in the deployment.
    # Omit to use the default values 'minioadmin:minioadmin'.
    # MinIO recommends setting non-default values as a best practice, regardless of environment
    
    MINIO_ROOT_USER=myadmin
    MINIO_ROOT_PASSWORD=myadminpassword
    
    # MINIO_VOLUMES sets the storage volume or path to use for the MinIO server.
    
    MINIO_VOLUMES="/mnt/data"
    
    # MINIO_SERVER_URL sets the hostname of the local machine for use with the MinIO Server
    # MinIO assumes your network control plane can correctly resolve this hostname to the local machine
    
    # Uncomment the following line and replace the value with the correct hostname for the local machine.
    
    MINIO_SERVER_URL="https://s3.mattsdatacenter.net"
    MINIO_BROWSER_REDIRECT_URL="https://storage.mattsdatacenter.net"

    It is worth noting the URLs: I want to put this system behind my Nginx reverse proxy and let it do SSL termination, and in order to do that, I found it easiest to use two domains: one for the API and one for the Console. I will get into more details on that later.

    Also, as always, change your admin username and password!

    Setup the Container

    Following the BackupAssist article, I installed the Docker package onto my Synology and opened it up. From the Registry menu, I searched for minio and found the minio/minio image:

    Click on the row to highlight it, and click on the Download button. You will be prompted for the label to download; I chose latest. Once the image is downloaded (you can check the Image tab for progress), go to the Container tab and click Create. This will open the Create Wizard and get you started.

    • On the Image screen, select the minio/minio:latest image.
    • On the Network screen, select the bridge network that is defaulted. If you have a custom network configuration, you may have some work here.
    • On the General Settings screen, you can name the container whatever you like. I enabled the auto-restart option to keep it running. On this screen, click on the Advanced Settings button:
      • In the Environment tab, change MINIO_CONFIG_ENV_FILE to /etc/config.env
      • In the Execution Command tab, change the execution command to minio server --console-address :9090
      • Click Save to close Advanced Settings
    • On the Port Settings screen, add the following mappings:
      • Local Port 39000 -> Container Port 9000 – Type TCP
      • Local Port 39090 -> Container Port 9090 – Type TCP
    • On the Volume Settings Screen, add the following mappings:
      • Click Add File, select the minio file created above, and set the mount path to /etc/config.env
      • Click Add Folder, select the data folder created above, and set the mount path to /mnt/data

    At that point, you can view the Summary and then create the container. Once the container starts, you can access your Minio instance at http://<synology_ip_or_hostname>:39090 and log in with the credentials saved in your config file.

    What Just Happened?

    The above steps should have left you with a Docker container running Minio on your Synology. Minio uses two separate ports: one for the API, and one for the Console. Per Minio’s documentation, the --console-address parameter is now required in the container’s execution command, and it sets the container port for the Console. In our case, we set it to 9090. The API port defaults to 9000.

    However, I wanted to run on non-standard ports, so I mapped host ports 39090 and 39000 to container ports 9090 and 9000, respectively. That means traffic coming in on 39090 and 39000 gets routed to my Minio container on ports 9090 and 9000.
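
    For reference, here is roughly the same container expressed as a single docker run command (the /volume1/minio paths are an assumption about where DSM placed the shared folder; adjust to your volume):

      docker run -d --name minio --restart unless-stopped \
        -p 39000:9000 -p 39090:9090 \
        -v /volume1/minio/minio:/etc/config.env \
        -v /volume1/minio/data:/mnt/data \
        -e MINIO_CONFIG_ENV_FILE=/etc/config.env \
        minio/minio:latest minio server --console-address ":9090"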

    Securing traffic with Nginx

    I like the ability to have SSL communication whenever possible, even if it is just within my home network. Most systems today default to expecting SSL, and sometimes it can be hard to find that switch to let them work with insecure connections.

    I was hoping to get the console and the API behind the same domain, but with SSL, that just isn’t in the cards. So, I chose s3.mattsdatacenter.net as the domain for the API, and storage.mattsdatacenter.net as the domain for the Console. No, those aren’t the real domain names.

    With that, I added the following sites to my Nginx configuration:

    storage.mattsdatacenter.net
      map $http_upgrade $connection_upgrade {
          default Upgrade;
          ''      close;
      }
    
      server {
          server_name storage.mattsdatacenter.net;
          client_max_body_size 0;
          ignore_invalid_headers off;
          location / {
              proxy_pass http://10.0.0.23:39090;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-proto $scheme;
              proxy_set_header X-Forwarded-port $server_port;
              proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
    
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection $connection_upgrade;
    
              proxy_http_version 1.1;
              proxy_read_timeout 900s;
              proxy_buffering off;
          }
    
        listen 443 ssl; # managed by Certbot
        allow 10.0.0.0/24;
        deny all;
    
        ssl_certificate /etc/letsencrypt/live/mattsdatacenter.net/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/mattsdatacenter.net/privkey.pem; # managed by Certbot
    }
    s3.mattsdatacenter.net
      map $http_upgrade $connection_upgrade {
          default Upgrade;
          ''      close;
      }
    
      server {
          server_name s3.mattsdatacenter.net;
          client_max_body_size 0;
          ignore_invalid_headers off;
          location / {
              proxy_pass http://10.0.0.23:39000;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-proto $scheme;
              proxy_set_header X-Forwarded-port $server_port;
              proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
    
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection $connection_upgrade;
    
              proxy_http_version 1.1;
              proxy_read_timeout 900s;
              proxy_buffering off;
          }
    
        listen 443 ssl; # managed by Certbot
        allow 10.0.0.0/24;
        deny all;
    
        ssl_certificate /etc/letsencrypt/live/mattsdatacenter.net/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/mattsdatacenter.net/privkey.pem; # managed by Certbot
    }

    This configuration allows me to access the API and Console via their own domains, with SSL terminated on the proxy. The Minio side is pretty easy: set MINIO_BROWSER_REDIRECT_URL to the external URL for your Console (in my case, the domain Nginx proxies to port 39090) and MINIO_SERVER_URL to the external URL for your API (the domain proxied to port 39000).

    This configuration allows me to address Minio for S3 in two ways:

    1. Use https://s3.mattsdatacenter.net for secure connectivity through the reverse proxy.
    2. Use http://<synology_ip_or_hostname>:39000 for insecure connectivity directly to the instance.

    I have not had the opportunity to test the performance difference between option 1 and option 2, but it is nice to have both available. For now, I will most likely lean towards the SSL path until I notice degradation in connection quality or speed.
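
    If you have the MinIO client (mc) handy, a quick sanity check of both paths looks something like this (the alias names are arbitrary, and the credentials come from the config file above):

      mc alias set synology-ssl https://s3.mattsdatacenter.net myadmin myadminpassword
      mc alias set synology-direct http://<synology_ip_or_hostname>:39000 myadmin myadminpassword
      mc ls synology-ssl
      mc ls synology-direct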

    And, with that, my Minio instance is now running on my Diskstation, which means fewer VMs to manage and back up on my hypervisor.

  • Ubuntu and Docker…. Oh Snap!

    A few months ago, I made the decision to start building my .NET Core side projects as Linux-based containers instead of Windows-based containers.  These projects are mostly CRUD APIs, meaning none of them require Windows-based containers.  And, quite frankly, Linux is cheaper….

    Now, I had previously built out a few Ubuntu servers with the express purpose of hosting my Linux containers, and changing the Dockerfile was easy enough.  But I ran into a HUGE roadblock in trying to get my Octopus deployments to work.

    I was able to install Octopus Tentacles just fine, but I could NOT get the tentacle to authenticate to my private docker repository.  There were a number of warnings and errors around the docker-credential-helper and pass, and, in general, I was supremely frustrated.  I got to a point where I uninstalled everything but docker on the Ubuntu box, and that still didn’t work.  So, since it was a development box, I figured there would be no harm in uninstalling Docker…  And that is where things got interesting.

    When I provision my Ubuntu VMs, I typically let the Ubuntu setup install docker.  It does this through snaps, which, as I have seen, have some weird consequences.  One of those consequences seemed to be a strange interaction between docker login and non-interactive users.  The long and short of it was, after several hours of trying to figure out what combination of docker-credential-helper packages and configurations was required, removing EVERYTHING and installing Docker via apt (and docker-compose via a straight download from Github) made everything work quite magically.
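
    For anyone in the same boat, the swap looked roughly like this (the docker-compose version below is just an example; grab whatever release is current):

      # Remove the snap-installed Docker
      sudo snap remove docker

      # Install Docker from apt (Ubuntu's docker.io package; Docker's own apt repository also works)
      sudo apt-get update && sudo apt-get install -y docker.io

      # Grab docker-compose straight from the Github releases page
      sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
        -o /usr/local/bin/docker-compose
      sudo chmod +x /usr/local/bin/docker-compose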

    While I would love nothing more than to recreate the issue and figure out why it was occurring, frankly, I do not have the time.  It was easier for me to swap my snap-installed Docker packages for apt-installed ones and move forward with my project.

  • Windows docker containers for TeamCity Build Agents

    I have been using TeamCity for a few years now, primarily as a build tool for some of our platforms at work.  However, because I like to have a sandbox in which to play, I have been hosting an instance of TeamCity at home for roughly the same amount of time. 

    At first, I went with a basic install on a local VM and put the database on my SQL Server, and spun up another VM (a GUI Windows Server instance) which acted as my build agent.  I installed a full-blown copy of Visual Studio Community on the build agent, which provided me the ability to pretty much run any build I wanted.

    As some of my work research turned me towards containers, I realized that this setup was probably a little too heavy, and running some of my support systems (TeamCity, Unifi, etc.) in docker makes them much easier to manage and update.

    I started small, with a Linux (Ubuntu) docker server running the Unifi Controller software and the TeamCity server containers.  Then came this blog, which is hosted on the same docker server.  And then, as quickly as I had started, I quit containerizing.  I left the VM with the build agent running, and it worked.

    Then I got some updated hardware, specifically a new hypervisor.  I was trying to consolidate VMs onto the new hypervisor, and for one reason or another, the build VM did not want to move nicely.  Whether the image was corrupt or what, I don’t know, but it was stuck.   So I took the opportunity to jump into Docker on Windows (or Windows Containers, or whatever you want to call it).

    I was able to take the base docker image that JetBrains provides and add the MSBuild tools to it. That gave me an image that I could use to run as a TeamCity Build agent. You can see my Dockerfile and docker-compose.yml files in my infrastructure repository.
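
    The files in that repository are the source of truth, but the shape of it is roughly this (image and server names here are placeholders): build a custom image on top of the JetBrains agent image, then point the agent at the TeamCity server via its SERVER_URL environment variable.

      # Build the custom agent image (the Dockerfile layers the MSBuild tools on top of jetbrains/teamcity-agent)
      docker build -t my-teamcity-win-agent .

      # Run the agent and point it at the TeamCity server
      docker run -d --name win-agent --restart unless-stopped \
        -e SERVER_URL="http://teamcity.local:8111" \
        my-teamcity-win-agent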

  • Building and deploying container applications

    On and off over the last few months, I have spent a considerable amount of time working to create a miniature version of what could be a production system at home. The goal, at least in this first phase, is to create an API which supports containerized deployment and a build and deploy pipeline to move this application through the various states of development (in my case, development, staging, and production).

    The Tools

    I chose tools that are currently being used (or may be used) at work so that my research could be applied in multiple areas, and the process has allowed me to dive into a lot of new technologies and understand how everything works together.

    • TeamCity – I utilize TeamCity for building the API code and producing the necessary artifacts. In this case, the artifacts are docker images. Certainly, Jenkins or another build pipeline could be used here.
    • Octopus Deploy – This is the go-to deployment application for the Windows Server based applications at my current company, so I decided to utilize it to roll out the images to their respective container servers. It supports both Windows and Linux servers.
    • Proget/Docker Registry – I have an instance of ProGet which houses my internal nuget packages, and I was hoping to use a repository feed to house my docker images. Alas, it doesn’t support Docker Registries properly, so I ended up standing up an instance of the Docker Registry for my private images.

    TeamCity and Docker

    The easiest of all these steps was adding Docker support to my ASP.NET Core 2.2 Web API. I followed the examples on Docker’s web site, and had a Dockerfile in my repository in a few minutes.

    From there, it was a matter of installing a TeamCity build agent on my internal Windows Docker Container server (windocker). Once this was done, I could use the Docker Runner plugin in TeamCity to build my docker image on windocker and then push it to my private repository.

    Proget Registry and Octopus Deployment

    This is where I ran into a snag. Originally, I created a repository on my ProGet server to house these images, and TeamCity had no problem pushing the images to the private repository. The ProGet registry feeds, however, don’t fully support the Docker API. Specifically, Octopus Deploy calls out the missing _catalog endpoint, which is required for selection of the images during release. I tried manually entering the values (which involved some guessing), but with the errors I ran into, I did not want to continue.

    So I started following the instructions for deploying an instance of Docker Registry on my internal Linux Docker Container Server (docker). The documentation is very good, and I did not have any trouble until I tried to push a Windows image… I kept getting a 500 Internal Server Error with a blob unknown to registry error in the log. I came across this open issue regarding pushing windows images. As it turns out, I had to disable validation in order to get that to work. Once I figured out that the proper environmental variable was REGISTRY_VALIDATION_DISABLED=true (REGISTRY_VALIDATION_ENABLED=false didn’t work), I was able to push my Windows images to the registry.
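
    For illustration, running the registry with that variable set looks something like this (the port and volume path are the stock values from the registry documentation, not necessarily my exact setup):

      docker run -d --name registry --restart unless-stopped \
        -p 5000:5000 \
        -v /opt/registry/data:/var/lib/registry \
        -e REGISTRY_VALIDATION_DISABLED=true \
        registry:2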

    Next Steps

    With that done, I can now use Octopus to deploy and promote my docker images from development to staging to production. As seen in the image, my current network topology has matching container servers for both Windows and Linux, along with servers for internal systems (including running my TeamCity Server and Build Agents as well as the Docker Registry).

    My current network topology

    My next goal is to have the Octopus Deployment projects for my APIs create and deploy APIs in my Gravitee.io instance. This will allow me to use an API management tool through all of my environments and provide a single source of documentation for these APIs. It will even allow me to expose my APIs for public usage, but with appropriate controls such as rate throttling and authentication.

    I will be maintaining my infrastructure files and scripts in a public GitHub repository.

  • Polyglot v2 and Docker

    Update: I was able to get this working on my Docker server. Check out the details here.


    Before you read through all of this:  No, I have not been able to get the Polyglot v2 server working in Docker on my docker host (Ubuntu 18.04).  I ended up following the instructions for installing on a Raspberry Pi using Raspbian.  

    Goal

    My goal was to get a Polyglot v2 server up and running so that I could run the MyQ node server and connect my ISY Home Automation controller to my garage door.  And since the ISY is connected through the ISY Portal, I could then configure my Amazon Echo to open and close the garage door.

    Approach #1 – Docker

    Since I already have an Ubuntu server running several docker containers, the most logical solution was to run the Polyglot server as a docker container.  And, according to this forum post, the author had already created a docker compose file for the Polyglot server and a MongoDB instance.  It seemed I would be up and running about as quickly as I could download the file and run the docker-compose command.  But it wasn’t that easy.

    The MongoDB server started, and as best I can tell, the Polyglot server started and got stuck looking for the ISY.  The compose file had the appropriate credentials and address for the ISY, so I am not sure why it was getting stuck there.  Additionally, I could not locate any logs with more detail than what I was getting from the application output, so it seemed as though I was stuck.  I tried a few things around modifying port numbers and the like, but to no avail.  In the interest of just wanting to get this to work, I attempted something else.

    Approach #2 – Small VM

    I wanted to see if I could get Polyglot running on a VM.  No, it would not be as flashy as Docker, but it should get the job done.  And it would still let me virtualize the machine on my server so I didn’t need to have a Raspberry Pi just hanging around to open and close the garage door.

    I ran into even more trouble here:  I had libcurl4 installed on the Ubuntu Server, but MongoDB wanted libcurl3.  Apparently this is something of a known issue, and there were some workarounds that I found.  But, quite frankly, I did not feel the need to dive into it.  I had a Raspberry Pi so I did the logical thing…

    Giving up…

    I gave up, so to speak.  The Polyglot v2 instructions are written for installation on a Raspberry Pi.  I have two at home, neither in much use, so I followed the installation instructions from the repository on one of my Raspberry Pis and a fresh SD card.  It took about 30 minutes to get everything running, MyQ installed and configured, and the ISY Portal updated to let me open the garage door with my Echo.  No special steps, just read the instructions and you are good.

    What’s next?

    Well, I would really like to get that server running in Docker.  It would make it a lot easier to manage and free up that Raspberry Pi for more hardware-related things (like my home bar project!).  I posted a question to the ISY forums in the hopes that someone has seen a similar problem and might have some solutions for me.  But for now, I have a Raspberry Pi running that lets me open my garage door with the Amazon Echo.

  • Let’s try this again.

    Throughout my professional career and personal life, I have made several attempts at “resolutions” to blog more.  I’ve gone through several iterations of software, including some home-grown solutions, then wordpress, then blogger/blogspot.  I’m back to WordPress hosted on a small server here at home.

    The goal of this whole endeavor is to document some of the things that I do so that I can remember them, but also so that anyone who may stumble upon this site can use some of these posts as a reference.  So, for this post, a quick note on the setup.

    I have a small home server with Docker installed.  Using Docker Compose (specifically, this quickstart), I set up docker containers for WordPress and a MySql server.  All in all, it only took about 30 minutes to get this up and running.
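
    For reference, the quickstart boils down to a docker-compose.yml along these lines (the passwords and published port here are placeholders):

      cat > docker-compose.yml <<'EOF'
      version: '3.3'
      services:
        db:
          image: mysql:5.7
          restart: always
          volumes:
            - db_data:/var/lib/mysql
          environment:
            MYSQL_ROOT_PASSWORD: changeme-root
            MYSQL_DATABASE: wordpress
            MYSQL_USER: wordpress
            MYSQL_PASSWORD: changeme
        wordpress:
          image: wordpress:latest
          restart: always
          depends_on:
            - db
          ports:
            - "8000:80"
          environment:
            WORDPRESS_DB_HOST: db:3306
            WORDPRESS_DB_USER: wordpress
            WORDPRESS_DB_PASSWORD: changeme
            WORDPRESS_DB_NAME: wordpress
      volumes:
        db_data: {}
      EOF

      docker-compose up -d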

    So, in the immortal words of Randy Quaid:

    Hello boys, I’m back!