I have been using Nginx as a reverse proxy for some time. In the very first iteration of my home lab, it lived on a VM and allowed me to point my firewall rules to a single target, and then route traffic from there. It has since been promoted to a dedicated Raspberry Pi in my fight with the network gnomes.
My foray into Kubernetes in the home lab has brought Nginx in as an ingress controller. While there are many options for ingress, Nginx seems the most prevalent and, in my experience, the easiest to standardize on across a multitude of Kubernetes providers. As we drive to define what a standard Kubernetes cluster looks like across our data centers and public cloud providers, Nginx seemed like a natural choice for our ingress provider.
Configurable to a fault
The Nginx Ingress controller is HIGHLY configurable. There are cluster-wide configuration settings that can be controlled through ConfigMap entries. Additionally, annotations can be used on specific ingress objects to control behavior for individual ingresses.
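For example, an annotation can override behavior for a single ingress without touching the cluster-wide ConfigMap. A sketch of what that looks like, with made-up host and service names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
  annotations:
    # Per-ingress override; applies only to this ingress object,
    # while ConfigMap entries apply to the whole controller
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80
```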
As I worked with one team to set up Duende's Identity Server, we started running into issues with the identity server endpoints using `http` instead of `https` in its discovery endpoints (such as `/.well-known/openid-configuration`). Most of our research suggested that the `X-Forwarded-*` headers needed to be configured (which we did), but we were still seeing the wrong scheme in those endpoints.
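For context, the `X-Forwarded-*` headers are what a reverse proxy adds so the backend can reconstruct the original request's client address, scheme, and host. The values below are purely illustrative:

```
X-Forwarded-For: 203.0.113.10
X-Forwarded-Proto: https
X-Forwarded-Host: idp.example.com
X-Forwarded-Port: 443
```

If the app never sees `X-Forwarded-Proto: https`, it assumes the scheme of the hop it actually received, which is plain `http` between the ingress controller and the pod.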
It was a weird problem: I had never run into this issue in my own Identity Server instance, which is running in my home Kubernetes environment. I figured it had to do with an Nginx setting, but had a hard time figuring out which one.
One blog post pointed me in the right direction. Our Nginx ingress install did not have the `use-forwarded-headers` setting configured in the ConfigMap, which meant the `X-Forwarded-*` headers were not being passed to the pod. A quick change to our deployment project, and the openid-configuration endpoint returned the appropriate schemes.
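If you are not managing the controller through Helm, the same setting goes in the controller's ConfigMap directly. A sketch, assuming the common default name and namespace for an ingress-nginx install (yours may differ):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Pass incoming X-Forwarded-* headers through to backends
  # instead of replacing them with Nginx's own values
  use-forwarded-headers: "true"
```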
For reference, we are using the `ingress-nginx` Helm chart. Adding the following to our values file solved the issue:
```yaml
controller:
  replicaCount: 2
  service:
    # ... several service settings
  config:
    use-forwarded-headers: "true"
```
What I do not yet know is whether I configured this at home at some point and just forgot about it, or whether it is a default of the Rancher Kubernetes Engine (RKE) installer. I use RKE at home to stand up my clusters, and one of the add-ons I have it configure is the Nginx ingress. Either I have settings in my RKE configuration to forward headers, or it's a default of RKE. Unfortunately, I am at a soccer tournament this weekend, so the investigation will have to wait until I get home.
Apparently I did know about `use-forwarded-headers` earlier: it was part of the options I had set in my home Kubernetes clusters. One of many things I have forgotten.