A continuation of my series on building a non-trivial reference application, this post dives into some of the details around the backend for frontend pattern. See Part 1 for a quick recap.
Extending the BFF
In my first post, I outlined the basics of setting up a backend for frontend API in ASP.NET Core. The basic project hosts the SPA as static files and provides a target for all calls coming from the SPA. This alleviates much of the frontend configuration and allows for additional security through server-issued cookies.
If we stopped there, then the BFF API would need an endpoint for every call our SPA makes, even if that endpoint did nothing but pass the call along to a backend service. That would be terribly inefficient and a lot of boilerplate code. Having used Duende’s Identity Server for a while, I knew that they have a BFF library that takes care of proxying calls to backend services, even attaching the access token along with the call.
I was looking for a way to accomplish this without the Duende library, and that is when I came across Kalle Marjokorpi’s post, which describes using YARP as an alternative to the Duende libraries. The basics were pretty easy: install YARP, configure it using the appsettings.json file, and configure the proxy. I went so far as to create an extension method to encapsulate the YARP configuration in one place. This all worked quite well… locally.
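For reference, the heart of that extension method looks roughly like the sketch below. The method and section names are my own placeholders rather than the exact code in the repository, but the YARP calls (LoadFromConfig plus a request transform that attaches the access token) are the pieces doing the work.

```csharp
// Rough sketch of the extension method; names are placeholders.
using Microsoft.AspNetCore.Authentication;
using Yarp.ReverseProxy.Transforms;

public static class ReverseProxyExtensions
{
    public static IServiceCollection AddBffReverseProxy(
        this IServiceCollection services, IConfiguration configuration)
    {
        services.AddReverseProxy()
            // Routes and clusters are defined in the "ReverseProxy" section of appsettings.json
            .LoadFromConfig(configuration.GetSection("ReverseProxy"))
            // Attach the current user's access token to every proxied request
            .AddTransforms(builderContext =>
            {
                builderContext.AddRequestTransform(async transformContext =>
                {
                    var token = await transformContext.HttpContext.GetTokenAsync("access_token");
                    if (!string.IsNullOrEmpty(token))
                    {
                        transformContext.ProxyRequest.Headers.Authorization =
                            new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", token);
                    }
                });
            });

        return services;
    }
}
```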
What’s going on in production?
The image built and deployed quite well. I was able to log in and navigate the application, getting data from the backend service.
However, at some point, the access token that was encoded into the cookie expired, and that caused all hell to break loose. The cookie was still valid, so the backend for frontend assumed the user was authenticated, but the access token had expired, so proxied calls failed. I have not put a refresh token in place, so I’m a bit stuck at the moment.
On my todo list is adding a refresh token to the cookie. This should give the backend the ability to refresh the access token before proxying a call to the backend service.
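The change itself should be small. Assuming the standard ASP.NET Core OpenID Connect handler, it is mostly a matter of requesting the offline_access scope and saving the tokens into the cookie; a sketch of the direction I’m planning (authority and client values are placeholders):

```csharp
// Sketch of the planned change, not yet in the project.
builder.Services.AddAuthentication()
    .AddOpenIdConnect("oidc", options =>
    {
        options.Authority = "https://my-identity-server";   // placeholder
        options.ClientId = "bff-client";                     // placeholder
        options.ResponseType = "code";
        options.Scope.Add("offline_access");  // ask Identity Server for a refresh token
        options.SaveTokens = true;            // keep the tokens inside the auth cookie
    });
```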
What to do now?
As I mentioned, this work is primarily to use as a reference application for future work. Right now, the application is somewhat trivial. The goal is to build out true microservices for some of the functionality in the application.
My first target is the change tracking. Right now, the application is detecting changes and storing those changes in the application database. I would like to migrate the storage of that data to a service, and utilize MassTransit and/or NServiceBus to facilitate sending change data to that service. This will help me to define some standards for messaging in the reference architecture.
Over the last week or so, I realized that while I bang the drum of infrastructure as code very loudly, I have not been practicing it at home. I took some steps to reconcile that over the weekend.
The Goal
I have a fairly meager home presence in Azure. Primarily, I use a free version of Azure Active Directory (now Entra ID) to allow for some single sign-on capabilities in external applications like Grafana, MinIO, and ArgoCD. The setup for this differs greatly among the applications, but common to all of these is the need to create applications in Azure AD.
My goal is simple: automate provisioning of this Azure AD account so that I can manage these applications in code. My stretch goal was to get any secrets created as part of this process into my Hashicorp Vault instance.
Getting Started
The plan, in one word, is Terraform. Terraform has a number of providers, including both the azuread and vault providers. Additionally, since I have some experience in Terraform, I figured it would be a quick trip.
I started by installing all the necessary tools (specifically, the Vault CLI, the Azure CLI, and the Terraform CLI) in my WSL instance of Ubuntu. Why there instead of PowerShell? Most of the tutorials lean towards bash syntax, so it was easier to roll through them without having to convert bash into PowerShell.
I used my ops-automation repository as the source for this, and started by creating a new folder structure to hold my projects. As I anticipated more Terraform projects to come up, I created a base terraform directory, and then an azuread directory under that.
Picking a Backend
Terraform relies on state storage, and it uses the term backend to describe that storage. By default, Terraform uses a local file backend. That is great for development, but knowing that I wanted to get things running in Azure DevOps immediately, I decided to configure a backend that I could use from my machine as well as from my pipelines.
As I have been using MinIO pretty heavily for storage, it made the most sense to configure MinIO as the backend, using the S3 backend to do this. It was “fairly” straightforward, as soon as I turned off all the nonsense:
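For anyone curious, my backend block ended up looking roughly like this. The bucket and key are placeholders, and the exact skip_* flag names vary a little between Terraform versions:

```hcl
terraform {
  backend "s3" {
    bucket = "terraform-state"              # placeholder bucket name
    key    = "azuread/terraform.tfstate"
    region = "main"                         # MinIO ignores this, but the backend requires a value

    # MinIO is S3-compatible, not S3, so skip the AWS-specific checks
    use_path_style              = true
    skip_credentials_validation = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
    skip_metadata_api_check     = true
    skip_s3_checksum            = true
  }
}
```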
There are some obvious things missing from that configuration: I am setting environment variables for the values I would like to treat as secret, or at least not public.
MinIO Endpoint -> AWS_ENDPOINT_URL_S3 environment variable instead of endpoints.s3
Access Key -> AWS_ACCESS_KEY_ID environment variable instead of access_key
Secret Key -> AWS_SECRET_ACCESS_KEY environment variable instead of secret_key
These settings allow me to use the same storage for both my local machine and the Azure Pipeline.
Configuring Azure AD
Likewise, I needed to configure the azuread provider. I followed the steps in the documentation, choosing the environment variable route again. I configured a service principal in Azure and gave it the necessary access to manage my directory.
Using environment variables allows me to set these from variables in Azure DevOps, meaning my secrets are stored in ADO (or Vault, or both…. more on that in another post).
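The provider block itself ends up nearly empty, which is the point; everything sensitive comes from the environment. A sketch, assuming the client-secret authentication route from the provider docs:

```hcl
provider "azuread" {
  # tenant_id, client_id, and client_secret are intentionally not set here.
  # The provider picks up ARM_TENANT_ID, ARM_CLIENT_ID, and ARM_CLIENT_SECRET
  # from the environment (local shell or an Azure DevOps variable group).
}
```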
Importing Existing Resources
I have a few resources that already exist in my Azure AD instance, enough that I didn’t want to re-create them and then re-configure everything that uses them. Luckily, most Terraform providers, including the ones I’m using, support importing existing resources.
Importing is fairly simple: you create the simplest definition of a resource that you can, and then run a terraform import variant to import that resource into your project’s state. Importing an Azure AD Application, for example, looks like this:
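A hedged example of the whole dance; the resource name is arbitrary and the GUID is a placeholder (check the provider docs, since newer provider versions expect an ID in the /applications/&lt;object-id&gt; form):

```hcl
# The simplest definition that will compile
resource "azuread_application" "grafana" {
  display_name = "Grafana"
}
```

```bash
# Import the existing app into state using its object ID (placeholder GUID)
terraform import azuread_application.grafana 00000000-0000-0000-0000-000000000000
```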
It is worth noting that the provider is looking for the object-id, not the client ID. The provider documentation has information as to which ID each resource uses for import.
More importantly, Applications and Service Principals are separate resources in Azure AD, even though they map almost one-to-one. To import a Service Principal, you run a similar command:
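Again, the resource name and GUID here are placeholders:

```bash
terraform import azuread_service_principal.grafana 11111111-1111-1111-1111-111111111111
```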
But where is the service principal’s ID? I had to go to the Azure CLI to get that info:
```bash
az ad sp list --display-name myappname
```
From this JSON, I grabbed the id value and used that to import.
From here, I ran a terraform plan to see what was going to be changed. I took a look at the differences, and even added some properties to the terraform files to maintain consistency between the app and the existing state. I ended up with a solid project full of Terraform files that reflected my current state.
Automating with Azure DevOps
There are a few extensions available to add Terraform tasks to Azure DevOps. Sadly, most rely on “standard” configurations for authentication against the backends. Since I’m using an S3 compatible backend, but not S3, I had difficulty getting those extensions to function correctly.
As the Terraform CLI is installed on my build agent, though, I only needed to run my commands from a script. I created an ADO pipeline template (planning for expansion) and extended it to create the pipeline.
All of the environment variables in the template are reflected in the variable groups defined in the extension. If a variable is not defined, it’s simply blank. That’s why you will see the AZDO_ environment variables in the template, but not in the variable groups for the Azure AD provisioning.
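To give a feel for the shape of it, here is a trimmed-down sketch of the template step; the step layout and variable names are illustrative rather than a copy of the template in my repository:

```yaml
parameters:
  - name: workingDirectory
    type: string

steps:
  - script: |
      terraform init
      terraform plan -out=tfplan
      terraform apply -auto-approve tfplan
    displayName: Terraform init / plan / apply
    workingDirectory: ${{ parameters.workingDirectory }}
    env:
      # Backend (MinIO) credentials
      AWS_ENDPOINT_URL_S3: $(AWS_ENDPOINT_URL_S3)
      AWS_ACCESS_KEY_ID: $(AWS_ACCESS_KEY_ID)
      AWS_SECRET_ACCESS_KEY: $(AWS_SECRET_ACCESS_KEY)
      # azuread provider credentials
      ARM_TENANT_ID: $(ARM_TENANT_ID)
      ARM_CLIENT_ID: $(ARM_CLIENT_ID)
      ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)
      # Only used by projects that touch Azure DevOps itself; blank otherwise
      AZDO_ORG_SERVICE_URL: $(AZDO_ORG_SERVICE_URL)
      AZDO_PERSONAL_ACCESS_TOKEN: $(AZDO_PERSONAL_ACCESS_TOKEN)
```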
Stretch: Adding Hashicorp Vault
Adding HC Vault support was somewhat trivial, but another exercise in authentication. I wanted to use AppRole authentication for this, so I followed the vault provider’s instructions and added additional configuration to my provider. Note that this setup requires additional variables that now need to be set whenever I do a plan or import.
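Roughly, the provider configuration looks like this, using the generic auth_login block against the AppRole mount. The variable names are mine; the role and secret IDs are the extra values that now have to be present for every plan:

```hcl
provider "vault" {
  address = var.vault_address

  auth_login {
    path = "auth/approle/login"

    parameters = {
      role_id   = var.vault_role_id
      secret_id = var.vault_secret_id
    }
  }
}
```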
Once that was done, I had access to read and write values in Vault. I started by storing my application passwords in a new key/value store. This allows me to have application passwords that rotate weekly, which is a nice security feature. Unfortunately, the rest of my infrastructure isn’t quite set up to handle such change. At least, not yet.
After a few data loss events, I took the time to automate my Grafana backups.
A bit of instability
It has been almost a year since I moved to a MySQL backend for Grafana. In that year, I’ve gotten a corrupted MySQL database twice now, forcing me to restore from a backup. I’m not sure if it is due to my setup or bad luck, but twice in less than a year is too much.
In my previous post, I mentioned the Grafana backup utility as a way to preserve this data. My short-sightedness prevented me from automating those backups, however, so I suffered some data loss. After the most recent event, I revisited the backup tool.
Keep your friends close…
My first thought was to simply write a quick Azure DevOps pipeline to pull the tool down, run a backup, and copy it to my SAN. I would also have had to include some scripting to clean up old backups.
As I read through the grafana-backup-tool documentation, though, I came across examples of running the tool as a Kubernetes Job via a CronJob. This presented a unique opportunity: configure the backup job as part of the Helm chart.
What would that look like? Well, I do not install any external charts directly. They are configured as dependencies for charts of my own. Now, usually, that just means a simple values file that sets the properties on the dependency. In the case of Grafana, though, I’ve already used this functionality to add two dependent charts (Grafana and MySQL) to create one larger application.
This setup also allows me to add additional templates to the Helm chart to create my own resources. I added two new resources to this chart:
grafana-backup-cron – A definition for the cronjob, using the ysde/grafana-backup-tool image.
grafana-backup-secret-es – An ExternalSecret definition to pull secrets from Hashicorp Vault and create a Secret for the job.
Since this is all built as part of the Grafana application, the secrets for Grafana were already available. I went so far as to add a section in the values file for the backup. This allowed me to enable/disable the backup and update the image tag easily.
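To make that a little more concrete, here is a condensed sketch of the two pieces; the value keys and resource names are approximations of what is in my chart rather than a copy of it:

```yaml
# values.yaml
backup:
  enabled: true
  image:
    repository: ysde/grafana-backup-tool
    tag: "latest"        # pinned to a real tag in the actual chart
  schedule: "0 2 * * *"  # daily
```

```yaml
# templates/grafana-backup-cron.yaml (abbreviated)
{{- if .Values.backup.enabled }}
apiVersion: batch/v1
kind: CronJob
metadata:
  name: grafana-backup-cron
spec:
  schedule: {{ .Values.backup.schedule | quote }}
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: grafana-backup
              image: "{{ .Values.backup.image.repository }}:{{ .Values.backup.image.tag }}"
              envFrom:
                - secretRef:
                    name: grafana-backup-secret  # created by the ExternalSecret
{{- end }}
```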
Where to store it?
The other enhancement I noticed in the backup tool was the ability to store files in S3 compatible storage. In fact, their example showed how to connect to a MinIO instance. As fate would have it, I have a MinIO instance running on my SAN already.
So I configured a new bucket in my MinIO instance, added a new access key, and configured those secrets in Vault. After committing those changes and synchronizing in ArgoCD, the new resources were there and ready.
Can I test it?
Yes I can. Google, once again, pointed me to a way to create a Job from a CronJob:
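The invocation ends up being a one-liner (the job and namespace names here are specific to my setup):

```bash
kubectl create job grafana-backup-test \
  --from=cronjob/grafana-backup-cron \
  --namespace grafana
```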
I ran the above command to create a test job. And, voilà, I have backup files in MinIO!
Cleaning up
Unfortunately, there doesn’t seem to be a retention setting in the backup tool. It looks like I’m going to have to write some code to clean up my Grafana backups bucket, especially since I have daily backups scheduled. Either that, or look at this issue and see if I can add it to the tool. Maybe I’ll brush off my Python skills…
The Bitnami Redis Helm chart has thrown me a curve ball over the last week or so, and made me look at Kubernetes NetworkPolicy resources.
Redis Chart Woes
Bitnami seems to be updating their charts to include default NetworkPolicy resources. While I don’t mind this, a jaunt through their open issues suggests that it has not been a smooth transition.
The redis chart’s initial release of NetworkPolicy objects broke the metrics container, since the default NetworkPolicy didn’t add the metrics port to allowed ingress ports.
So I sat on the old chart until the new Redis chart was available.
And now, Connection Timeouts
Once the update was released, I rolled out the new version of Redis. The containers came up, and I didn’t really think twice about it. Until, that is, I decided to do some updates to both my applications and my Kubernetes nodes.
I upgraded some of my internal applications to .NET 8. This caused all of them to restart and, in the process, get their linkerd-proxy sidecars running. I also started cycling the nodes on my internal cluster. When it came time to call my Unifi IP Manager API to delete an old assigned IP, I got an internal server error.
A quick check of the logs showed that the pod’s Redis connection was failing. Odd, I thought, since most other connections have been working fine, at least through last week.
After a few different Google searches, I came across this section in the Linkerd.io documentation. As it turns out, when you use NetworkPolicy resources with opaque ports (like Redis), you have to make sure that Linkerd’s inbound port (which defaults to 4143) is also allowed in the NetworkPolicy.
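In practice, that means the chart’s ingress rule needs one more port. A minimal sketch of the shape (selectors and the metrics port are illustrative, not the Bitnami template itself):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: redis-allow-ingress
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: redis
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - port: 6379   # Redis
        - port: 9121   # metrics exporter
        - port: 4143   # linkerd-proxy inbound port
```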
Will I start writing my own NetworkPolicy resources? Maybe. This is my first exposure to them, so I would like to understand how they operate and what the best practices are. In the meantime, I’ll be a little more wary when I see NetworkPolicy resources pop up in external charts.
No, this is not a post on global warming. As it turns out, I have been provisioning my Azure DevOps build agents somewhat incorrectly, at least for certain toolsets.
Sonar kicks it off
It started with this error in my build pipeline:
ERROR: The version of Java (11.0.21) used to run this analysis is deprecated, and SonarCloud no longer supports it. Please upgrade to Java 17 or later.
As a temporary measure, you can set the property 'sonar.scanner.force-deprecated-java-version' to 'true' to continue using Java 11.0.21.
This workaround will only be effective until January 28, 2024. After this date, all scans using the deprecated Java 11 will fail.
I provision my build agents using the Azure DevOps/GitHub Actions runner images repository, so I know it has multiple versions of Java. I logged in to the agent, and the necessary environment variables (including JAVA_HOME_17_X64) are present. However, adding the jdkVersion input made no difference.
I also tried using the JavaToolInstaller step to install Java 17 prior, and I got this error:
##[error]Java 17 is not preinstalled on this agent
Now, as I said, I KNOW it’s installed. So, what’s going on?
All about environment
The build agent has the proper environment variables set. As it turns out, however, the agent itself needs some special setup. Some research on my end led me to Microsoft’s page on the Azure DevOps Linux agents, specifically the section on environment variables.
I checked my .env file in my agent directory, and it had a scrawny 5-6 entries. As a test, I added JAVA_HOME_17_X64 with a proper path as an entry in that file and restarted the agent. Presto! No more errors, and Sonar Analysis ran fine.
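For reference, the entry is just a plain KEY=value line; the path below is an assumption based on where the runner-images scripts install the JDK on my agent:

```
JAVA_HOME_17_X64=/usr/lib/jvm/temurin-17-jdk-amd64
```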
Scripting for future agents
With this in mind, I updated the script that installs my ADO build agent to include steps to copy environment variables from the machine into the agent’s .env file, so that the agent knows what is on the machine. After a couple of tests (I forgot a necessary sudo), I have a working provisioning script.
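The relevant addition boils down to something like the snippet below; the agent path is an assumption, and the variable list would need tuning for whatever tools live on the machine:

```bash
AGENT_DIR=/opt/azagent   # assumed agent install directory

# Copy machine-level tool variables into the agent's .env file so the
# agent process (and therefore the build tasks) can see them.
sudo bash -c "grep -E '^(JAVA_HOME|ANDROID|GOROOT)' /etc/environment >> ${AGENT_DIR}/.env"

# Restart the agent service so it picks up the new .env contents
sudo "${AGENT_DIR}/svc.sh" stop
sudo "${AGENT_DIR}/svc.sh" start
```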
What I thought was going to be a small upgrade to fix a display issue turned into a few nights of coding. Sounds like par for the course.
MD-TO-CONF
I forked RittmanMead’s md-to-conf project about 6 months ago in order to update the tool for Confluence Cloud’s new API version and to move it to Python 3.11. I use the new tool to create build pipelines that publish Markdown documentation from various repositories into Confluence.
Why? Well, in the public space, I usually utilize GitHub Pages to publish HTML-based documentation for things, as I did with md-to-conf. But in the corporate space, we tend to use tools like Confluence or Sharepoint as spaces for documentation and collaboration. As it happens, both my previous company and my current one are heavy Confluence users.
But why two places? Well, generally, I have found that engineers don’t like to document things. Having to have them find (or create) the appropriate page in Confluence can be a painful affair. Keeping the documentation in the repository means it is at the engineer’s fingertips. However, for those that don’t want to (or don’t have access to) open GitHub, publishing the documents to Confluence means those team members have access to the documentation.
A Small Change…
As I built an example pipeline for this process, I noticed that nested lists were not being rendered correctly. My gut reaction was that perhaps the python-markdown library needed an update. So I updated the library, created a PR, and pushed a new release. And it broke everything.
I am no Python expert, so I am not really sure what happened, since I did not change any code. As best I can deduce, the way my module was built, with the amount of code in __init__.py, was causing running as a module to behave differently than running from the wheel-based build. In any case, as I worked to fix it, I figured, why not make it better?
So I spent a few evenings pulling code out of __init__.py and putting it into its own class. And, in doing that, SonarCloud failed most of my work because I did not have unit tests for the new code. So, yes, that took me down the rabbit hole of learning pytest and pytest-mock to start getting better coverage on my code.
But Did You Fix It?!
As it turns out, the python-markdown update did NOT fix the nested list issues. Apparently, all I really needed to do was configure python-markdown to use the sane_lists extension.
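For anyone hitting the same thing, the fix is a one-argument change when calling python-markdown; this stripped-down snippet shows the idea rather than the exact call inside md-to-conf:

```python
import markdown

md_text = """
1. First item
    1. Nested item
    2. Another nested item
2. Second item
"""

# The built-in sane_lists extension makes nested/mixed lists render predictably
html = markdown.markdown(md_text, extensions=["sane_lists"])
print(html)
```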
So after many small break-fix releases, v1.0.9 is out and working. I fixed the nested lists issue and a few other small bugs found by adding additional unit tests.
Mermaid Support
For Confluence, Mermaid support is a paid extension (of course). However, you can use the Mermaid CLI (or, in my case, the docker image) to convert any Mermaid in the MD file into an image, which is then published to Confluence. I built a small pipeline template that runs these two steps. Have a look!
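The conversion step itself is a docker run per diagram. The loop and paths below are my own sketch rather than the exact template, though the image and flags follow the mermaid-cli docs:

```bash
# Convert every .mmd file to a PNG before publishing with md-to-conf
for f in docs/*.mmd; do
  docker run --rm -v "$(pwd)":/data minlag/mermaid-cli \
    -i "/data/${f}" -o "/data/${f%.mmd}.png"
done
```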
While it would be nice to build the Mermaid-to-image conversion directly into md-to-conf, I was not able to quickly find a Python library to do that work and, well, the mermaid-cli handles the conversion nicely, so I am happy with this particular two-step. Just don’t make me dance.
This is the first in a series meant to highlight my work to build a non-trivial application that can serve as a test platform and reference. When it comes to design, it is helpful to have an application with enough complexity to properly evaluate the features and functionality of proposed solutions.
Backend For Frontend
Lately, I have been somewhat obsessed with the Backend for Frontend pattern, or BFF. There are a number of benefits to the pattern, articulated well all across the internet, so I will avoid a recap. I wanted an application that took advantage of this pattern so that I could start to demonstrate the benefits.
I had previously done some work in putting a simple backend on the Zalando tech radar. It is a pretty simple Create/Retrieve/Update/Delete (CRUD) application, but complex enough that it would work in this case.
Configuring the BFF
At first, I started looking at converting the existing project, but quickly realized that this is a good time for a clean slate. I followed the MSDN tutorial to the letter to get a working sample application. From there, I moved my existing SPA to the working sample.
With that in place, I walked through Auth0’s tutorial on implementing Backend for Frontend authentication in ASP.NET Core. In this case, I substituted my Duende Identity Server for the OAuth/Okta instance used in the tutorial. This all worked great, with the notable exception that I had to ensure all my proxies were in order.
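For a rough idea of where that lands, the authentication wiring looks something like this; the scheme names follow the tutorial, while the authority and client values are placeholders for my Identity Server setup:

```csharp
// Program.cs sketch: cookie for the browser session, OIDC code flow against
// my Duende Identity Server (URLs and client values are placeholders).
builder.Services.AddAuthentication(options =>
{
    options.DefaultScheme = "cookie";
    options.DefaultChallengeScheme = "oidc";
})
.AddCookie("cookie", options =>
{
    options.Cookie.SameSite = SameSiteMode.Strict;
})
.AddOpenIdConnect("oidc", options =>
{
    options.Authority = "https://identity.example.com";
    options.ClientId = "spa-bff";
    options.ClientSecret = builder.Configuration["Oidc:ClientSecret"];
    options.ResponseType = "code";
    options.Scope.Add("profile");
    options.SaveTokens = true;   // keep tokens available for proxied calls later
});
```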
Show Your Work!
Now, admittedly, my blogging is well behind my actual work, so if you go browsing the repository, it is a little farther ahead than this post. Next in this series, I’ll discuss configuring the BFF to proxy calls to a backend service.
While the work is ahead of the post, the documentation is WAY behind, so please ignore the README.md file for now. I’ll get proper documentation completed as soon as I can.
I am working on building a set of small reference applications to demonstrate some of the patterns and practices to help modernize cloud applications. In configuring all of this in my home lab, I spent at least 3 hours fighting a problem that turned out to be a configuration issue.
Backend-for-Frontend Pattern
I will get into more details when I post the full application, but I am trying to build out a SPA with a dedicated backend API that would host the SPA and take care of authentication. As is typically the case, I was able to get all of this working on my local machine, including the necessary proxying of calls via the SPA’s development server (again, more on this later).
At some point, I had two containers ready to go: a BFF container hosting the SPA and the dedicated backend, and an API container hosting a data service. I felt ready to deploy to the Kubernetes cluster in my lab.
Let the pain begin!
I have enough samples within Helm/Helmfile that getting the items deployed was fairly simple. After fiddling with the settings of the containers, things were running well in the non-authenticated mode.
However, when I clicked login, the following happened:
I was redirected to my oAuth 2.0/OIDC provider.
I entered my username/password
I was redirected back to my application
I got a 502 Bad Gateway screen
502! But why? I consulted Google and found any number of articles indicating that, in the authentication flow, Nginx’s default header size limits are too small for what might be coming back from the redirect. So, consulting the Nginx configuration documentation, I changed the Nginx configuration in my reverse proxy to allow for larger headers.
No luck. Weird. In the spirit of true experimentation (change one thing at a time), I backed those changes out and tried changing the configuration of my Nginx Ingress controller. No luck. So what’s going on?
Too Many Cooks
My current implementation looks like this:
```mermaid
flowchart TB
    A[UI] --UI Request--> B(Nginx Reverse Proxy)
    B --> C("Kubernetes Ingress (Nginx)")
    C --> D[UI Pod]
```
There are two Nginx instances between all of my traffic: an instance outside of the cluster that serves as my reverse proxy, and an Nginx ingress controller that serves as the reverse proxy within the cluster.
I tried changing both separately. Then I tried changing both at the same time. And I was still seeing this error. As it turns out, well, I was being passed some bad data as well.
Be careful what you read on the Internet
As it turns out, the issue was the difference in configuration between the two Nginx instances and some bad configuration values that I got from old internet articles.
Reverse Proxy Configuration
For the Nginx instance running on Ubuntu, I added the following to my nginx.conf file under the http section:
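The exact numbers matter less than having both ends agree; these are in the ballpark of what I settled on:

```nginx
proxy_buffer_size           16k;
proxy_buffers               4 16k;
proxy_busy_buffers_size     16k;
large_client_header_buffers 4 16k;
```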
I am running RKE2 clusters, so configuring Nginx involves a HelmChartConfig resource being created in the kube-system namespace. My cluster configuration looks like this:
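This is the general shape of it; the buffer values mirror the reverse proxy settings above:

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      config:
        proxy-buffer-size: "16k"
        proxy-buffers-number: "4"
        large-client-header-buffers: "4 16k"
```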
The combination of both of these settings got my redirects to work without the 502 errors.
Better living through logging
One of the things I fought with on this was finding the appropriate logs to see where the errors were occurring. I’m exporting my reverse proxy logs into Loki using a Promtail instance that listens on a syslog port. So I was “getting” the logs into Loki, but I couldn’t FIND them.
I forgot about the facility in syslog: I have the access logs sending as local5, but I had configured the error logs without pointing them to local5. I learned that, by default, they go to local7.
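The fix was a one-line change in the nginx config; the server address below is a placeholder for my Promtail syslog listener:

```nginx
# Access logs already go to local5; send the error log there too
# (without facility=, nginx defaults to local7).
error_log syslog:server=promtail.example.lan:1514,facility=local5,tag=nginx error;
```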
Once I found the logs I was able to diagnose the issue, but I spent a lot of time browsing in Loki looking for those logs.
I have been spending a considerable amount of time in .Net 8 lately. In addition to some POC work, I have been transitioning some of my personal projects to .Net 8. While the details of that work will be the topic of a future post (or posts), Microsoft’s chiseled containers are worth a quick note.
In November, Microsoft released .NET Chiseled Containers into GA. These containers are slimmed-down versions of the .NET Linux containers, focused on getting a “bare bones” container that can be used as a base for a variety of containers.
If you are building containers from Microsoft’s .NET container images, chiseled containers are worth a look!
A Quick Note on Globalization
I tried moving two of my containers to the 8.0-jammy-chiseled base image. The frontend, with no database connection, worked fine. However, the API with the database connection ran into a globalization issue.
Apparently, Microsoft.Data.SqlClient requires a few OS libraries that are not part of chiseled. Specifically, the International Components for Unicode (ICU) libraries are not included, by default, in the chiseled image. Ubuntu-rocks demonstrates how they can be added but, for now, I am leaving that image as the standard 8.0-jammy image.
I recently fixed some synchronization issues that had been silently plaguing some of the monitoring applications I had installed, including my Loki/Grafana/Tempo/Mimir stack. Now that the applications are being updated, I ran into an issue with the latest Helm chart’s handling of secrets.
Sync Error?
After I made the change to fix synchronization of the Helm charts, I went to sync my Grafana chart, but received a sync error:
Error: execution error at (grafana/charts/grafana/templates/deployment.yaml:36:28): Sensitive key 'database.password' should not be defined explicitly in values. Use variable expansion instead.
I certainly didn’t change anything in those files, and I am already using variable expansion in the values.yaml file anyway. What does that mean? Basically, in the values.yaml file, I used ${ENV_NAME} in areas where I had a secret value, and told Grafana to expand environment variables into the configuration.
I already had a Kubernetes secret being populated from Hashicorp Vault with my secret values. I also already had envFromSecret set in the values.yaml to instruct the chart to use my secret. And, through some dumb luck, two of the three values were already named using the standards in Grafana’s documentation.
So the “fix” was to simply remove the secret expansions from the values.yaml file and rename one of the secretKey values so that it matched Grafana’s environment variable naming. You can see the diff of the change in my GitHub repository.
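For context, the end state looks roughly like this in the values.yaml (names trimmed and approximated; the real diff is in the repository):

```yaml
grafana:
  # Secret created from Vault via ExternalSecret; its keys follow Grafana's
  # GF_<SECTION>_<KEY> naming, e.g. GF_DATABASE_PASSWORD
  envFromSecret: grafana-secrets
  grafana.ini:
    database:
      type: mysql
      host: grafana-mysql:3306
      name: grafana
      user: grafana
      # no password here -- Grafana reads GF_DATABASE_PASSWORD from the Secret
```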
With that change, the Helm chart generated correctly, and once Argo had the changes in place, everything was up and running.