Well, not really, but I find it a bit cheeky that ArgoCD’s icon is, in fact, an orange octopus.
There are so many different ways to provision and run Kubernetes clusters that, without some sense of standardization across the organization, Kubernetes can become an operations nightmare. And while a well-run operations environment can allow application developers to focus on their applications, a lack of standards and best practices for container orchestration can divert resources from application development to figuring out how a particular cluster was provisioned or deployed.
I have spent a good bit of time at work trying to consolidate our company approach to Kubernetes. Part of this is working with our infrastructure teams to level set on what a “bare” cluster looks like. The other part is standardizing how the clusters are configured and managed. For this, a colleague turned me onto GitOps.
GitOps – Declarative Management
I spent a LOT of time just trying to understand how all of this works. For me, the thought of “well, in order to deploy I have to change a Git repo” seemed, well, icky. I like the idea that code changes, things are built, tested, and deployed. I had a difficult time separating application code from declarative state. Once I removed that mental block, however, I was bound and determined to move my home environments from Octopus Deploy pipelines to ArgoCD.
I did not go on this journey alone. My colleague Oleg introduced me to ArgoCD and the External Secrets operator. I found a very detailed article with flow details by Florian Dambrine on Medium.com. Likewise, Burak Kurt showed me how to have Argo manage itself. Armed with all of this information, I started my journey.
The “short short” version
In an effort to retain the few readers I have, I will do my best to consolidate all of this into a few short paragraphs. I have two types of applications in my clusters: external tools and home-grown apps.
External Tools
Prior to all of this, my external tools were managed by running Helm upgrades periodically with values files saved to my PC. These tools included “cluster tools” that are installed on every cluster, as well as things like WordPress and Proget that run the components of this site. Needless to say, this was highly dangerous: had I lost my PC, all those values files would have been gone.
I now have tool definitions stored either in the cluster-specific Git repo or in a special “cluster-tools” repository that lets me define applications I want installed on all (or some) of my clusters, based on labels in the cluster’s Argo secret. This lets me update a tool simply by bumping its version in my Git repository and committing the change.
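To make the label-based targeting concrete, here is a minimal sketch of how this can be expressed with an ArgoCD ApplicationSet using the cluster generator. The tool (cert-manager), the label key, and the chart version are hypothetical stand-ins, not my actual definitions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cert-manager
  namespace: argocd
spec:
  generators:
    # Match against labels on each cluster's Argo secret
    - clusters:
        selector:
          matchLabels:
            cluster-tools/cert-manager: "true"   # hypothetical label
  template:
    metadata:
      name: '{{name}}-cert-manager'
    spec:
      project: default
      source:
        repoURL: https://charts.jetstack.io
        chart: cert-manager
        targetRevision: v1.14.4   # bump this in Git to upgrade everywhere
        helm:
          values: |
            installCRDs: true
      destination:
        server: '{{server}}'
        namespace: cert-manager
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
```

Adding the label to a cluster’s secret opts it in; removing the label (with pruning enabled) opts it out.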
It should be noted that, for these tools, Helm is still used to install/upgrade. More on this later.
Home Grown Applications
The home-grown apps had more of a development feel: feature branches push to a test environment, while builds from main get pushed to staging and then, upon my approval in Azure DevOps, pushed to production.
Prior to the conversion, every build produced a new container image and Helm chart, both of which were published to Proget. From there, Octopus Deploy deployed feature builds to the test environment only, and deployed to stage and production based on nudges from Azure DevOps tasks.
Using Florian’s described flow, I created a Helmfile repo for each of my projects, which allowed me to consolidate the application charts into a single repository. Using Helmfile and Helm, I generate manifests that are then committed to the appropriate cluster’s Git repository. Each Helmfile repository has its own pipeline for generating manifests and committing them to the cluster repository, while my individual project pipelines have gotten very simple: build a new version, then update the Helmfile repository to reflect the new image version.
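A minimal sketch of what one of these Helmfile repos looks like, assuming a hypothetical app name, Proget feed URL, and environment layout; the real files carry more releases and values:

```yaml
# helmfile.yaml -- one Helmfile repo per project (a sketch; names,
# URLs, and versions below are placeholders, not my actual config)
environments:
  test: {}
  stage: {}
  production: {}
---
repositories:
  - name: proget
    url: https://proget.example.com/helm/my-feed   # hypothetical feed

releases:
  - name: my-app
    namespace: my-app
    chart: proget/my-app
    version: 1.4.2   # the project's build pipeline bumps this line
    values:
      - values/common.yaml
      - values/{{ .Environment.Name }}.yaml   # per-environment overrides
```

The Helmfile repo’s pipeline then runs something like `helmfile -e production template` to render plain manifests and commits the output into the cluster’s Git repository.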
Once the cluster’s repository is updated, Argo takes care of syncing the resources.
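On the cluster repository side, this is just a plain Application pointed at the directory of rendered manifests; the repo URL and path below are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-production
  namespace: argocd
spec:
  project: default
  source:
    # Plain manifests committed by the Helmfile pipeline;
    # repo URL and path are illustrative, not my real repo
    repoURL: https://git.example.com/clusters/prod-cluster.git
    targetRevision: main
    path: apps/my-app
    directory:
      recurse: true
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```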
Helm versus Manifests
I noted that I’m currently using Helm for external tools versus generating manifests (albeit from Helm charts, but still generating manifests) for internal applications. As long as you never actually run a `helm install` command, Argo will manage the application using manifests rendered from the Helm chart. However, from what I have seen, if you have previously run `helm install`, that Helm release hangs around in the cluster, and its release history doesn’t change with new versions. So you can get into an odd state where `helm list` shows older versions than what are actually installed.
When using a tool like ArgoCD, you want to let it manage your applications from beginning to end. It will keep your cluster cleaner. For the time being, I am defining external tools using Helm templates, but using Helmfile to expand my home-built applications into manifest files.
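For contrast with the manifest-based Application above, an external tool defined straight from a Helm chart looks roughly like this; the chart, version, and values are illustrative, not my exact configuration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ingress-nginx
  namespace: argocd
spec:
  project: default
  source:
    # Argo renders the chart itself (effectively `helm template`);
    # no Helm release is created, so `helm list` never sees this app
    repoURL: https://kubernetes.github.io/ingress-nginx
    chart: ingress-nginx
    targetRevision: 4.10.0   # illustrative chart version
    helm:
      values: |
        controller:
          replicaCount: 2
  destination:
    server: https://kubernetes.default.svc
    namespace: ingress-nginx
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
```

Either way, Argo owns the full lifecycle, which is exactly what keeps the odd `helm list` states described above from creeping in.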