Category: Software

  • Automating Grafana Backups

    After a few data loss events, I took the time to automate my Grafana backups.

    A bit of instability

    It has been almost a year since I moved to a MySQL backend for Grafana. In that year, I’ve gotten a corrupted MySQL database twice now, forcing me to restore from a backup. I’m not sure if it is due to my setup or bad luck, but twice in less than a year is too much.

    In my previous post, I mentioned the Grafana backup utility as a way to preserve this data. My short-sightedness prevented me from automating those backups, however, so I suffered some data loss. After the most recent event, I revisited the backup tool.

    Keep your friends close…

My first thought was to simply write a quick Azure DevOps pipeline to pull the tool down, run a backup, and copy it to my SAN. I would also have had to include some scripting to clean up old backups.

As I read through the grafana-backup-tool documents, though, I came across examples of running the tool as a Job in Kubernetes via a CronJob. This presented a unique opportunity: configure the backup job as part of the Helm chart.

    What would that look like? Well, I do not install any external charts directly. They are configured as dependencies for charts of my own. Now, usually, that just means a simple values file that sets the properties on the dependency. In the case of Grafana, though, I’ve already used this functionality to add two dependent charts (Grafana and MySQL) to create one larger application.

    This setup also allows me to add additional templates to the Helm chart to create my own resources. I added two new resources to this chart:

    1. grafana-backup-cron – A definition for the cronjob, using the ysde/grafana-backup-tool image.
    2. grafana-backup-secret-es – An ExternalSecret definition to pull secrets from Hashicorp Vault and create a Secret for the job.
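A minimal sketch of what that CronJob template might look like is below. The schedule, names, and secret reference are illustrative, and the actual environment variables the container expects are documented in the grafana-backup-tool README:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: grafana-backup-cron
spec:
  schedule: "0 2 * * *"          # daily backup (illustrative schedule)
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: grafana-backup
              image: ysde/grafana-backup-tool:{{ .Values.backup.image.tag }}
              envFrom:
                - secretRef:
                    # Secret created by the grafana-backup-secret-es
                    # ExternalSecret from Vault
                    name: grafana-backup-secret
```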

    Since this is all built as part of the Grafana application, the secrets for Grafana were already available. I went so far as to add a section in the values file for the backup. This allowed me to enable/disable the backup and update the image tag easily.
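The backup section of the values file might look something like this; the key names here are my guess at the shape described, not actual chart values:

```yaml
backup:
  enabled: true
  image:
    tag: "1.4.2"   # hypothetical grafana-backup-tool tag
```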

    Where to store it?

    The other enhancement I noticed in the backup tool was the ability to store files in S3 compatible storage. In fact, their example showed how to connect to a MinIO instance. As fate would have it, I have a MinIO instance running on my SAN already.

    So I configured a new bucket in my MinIO instance, added a new access key, and configured those secrets in Vault. After committing those changes and synchronizing in ArgoCD, the new resources were there and ready.

    Can I test it?

    Yes I can. Google, once again, pointed me to a way to create a Job from a CronJob:

    kubectl create job --from=cronjob/<cronjob-name> <job-name> -n <namespace-name>

I ran the above command to create a test job. And, voilà, I have backup files in MinIO!

    Cleaning up

    Unfortunately, there doesn’t seem to be a retention setting in the backup tool. It looks like I’m going to have to write some code to clean up my Grafana backups bucket, especially since I have daily backups scheduled. Either that, or look at this issue and see if I can add it to the tool. Maybe I’ll brush off my Python skills…
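Since the tool has no retention setting, a cleanup script really only needs to decide which objects are old enough to delete. Here is a minimal sketch of that selection logic; the bucket listing and the actual deletion through an S3 client are left out, and the (key, last_modified) shape is an assumption about what a list call returns:

```python
from datetime import datetime, timedelta

def objects_to_delete(objects, keep_days, now=None):
    """Return keys of backup objects older than keep_days.

    `objects` is a list of (key, last_modified) tuples, roughly the
    shape an S3/MinIO list-objects call would give you.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=keep_days)
    # Keep anything modified on or after the cutoff; flag the rest.
    return [key for key, modified in objects if modified < cutoff]
```

The actual deletion would then be a loop over the returned keys using whatever client is handy (boto3, the MinIO SDK, or the mc CLI).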

  • My Introduction to Kubernetes NetworkPolicy

    The Bitnami Redis Helm chart has thrown me a curve ball over the last week or so, and made me look at Kubernetes NetworkPolicy resources.

    Redis Chart Woes

    Bitnami seems to be updating their charts to include default NetworkPolicy resources. While I don’t mind this, a jaunt through their open issues suggests that it has not been a smooth transition.

    The redis chart’s initial release of NetworkPolicy objects broke the metrics container, since the default NetworkPolicy didn’t add the metrics port to allowed ingress ports.

    So I sat on the old chart until the new Redis chart was available.

    And now, Connection Timeouts

    Once the update was released, I rolled out the new version of Redis. The containers came up, and I didn’t really think twice about it. Until, that is, I decided to do some updates to both my applications and my Kubernetes nodes.

    I upgraded some of my internal applications to .Net 8. This caused all of them to restart, and, in the process, get their linkerd-proxy sidecars running. I also started cycling the nodes on my internal cluster. When it came time to call my Unifi IP Manager API to delete an old assigned IP, I got an internal server error.

    A quick check of the logs showed that the pod’s Redis connection was failing. Odd, I thought, since most other connections have been working fine, at least through last week.

After a few different Google searches, I came across this section in the Linkerd.io documentation. As it turns out, when you use NetworkPolicy resources and opaque ports (like Redis), you have to make sure that Linkerd’s inbound port (which defaults to 4143) is also set up in the NetworkPolicy.

    Adding the Linkerd port to the extraIngress section in the Redis Helm chart worked wonders. With that section in place, connectivity was restored and I could go about my maintenance tasks.
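For reference, the extra ingress rule in the Redis chart values ends up looking roughly like this; the networkPolicy.extraIngress key is what recent Bitnami charts expose, but check your chart version:

```yaml
networkPolicy:
  enabled: true
  extraIngress:
    - ports:
        - port: 4143        # linkerd-proxy inbound port
          protocol: TCP
```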

    NetworkPolicy for all?

    Maybe. This is my first exposure to them, so I would like to understand how they operate and what best practices are for such things. In the meantime, I’ll be a little more wary when I see NetworkPolicy resources pop up in external charts.

  • Upgrades and Mermaids

    What I thought was going to be a small upgrade to fix a display issue turned into a few nights of coding. Sounds like par for the course.

    MD-TO-CONF

    I forked RittmanMead‘s md-to-conf project about 6 months ago in order to update the tool for Confluence Cloud’s new API version and to move it to Python 3.11. I use the new tool to create build pipelines that publish Markdown documentation from various repositories into Confluence.

    Why? Well, in the public space, I usually utilize GitHub Pages to publish HTML-based documentation for things, as I did with md-to-conf. But in the corporate space, we tend to use tools like Confluence or Sharepoint as spaces for documentation and collaboration. As it happens, both my previous company and my current one are heavy Confluence users.

    But why two places? Well, generally, I have found that engineers don’t like to document things. Having to have them find (or create) the appropriate page in Confluence can be a painful affair. Keeping the documentation in the repository means it is at the engineer’s fingertips. However, for those that don’t want to (or don’t have access to) open GitHub, publishing the documents to Confluence means those team members have access to the documentation.

    A Small Change…

    As I built an example pipeline for this process, I noticed that the nested lists were not being rendered correctly. My gut reaction was, perhaps the python-markdown library needed an update. So, I updated the library, created a PR, and pushed a new release. And it broke everything.

I am no Python expert, so I am not really sure what happened, since I did not change any code. As best I can deduce, the way my module was built, with the amount of code in __init__.py, was causing running as a module to behave differently than running from the wheel-based build. In any case, as I worked to change it, I figured, why not make it better?

So I spent a few evenings pulling code out of __init__.py and putting it into its own class. And, in doing that, SonarCloud failed most of my work because I did not have unit tests for my new code. So, yes, that took me down the rabbit hole of learning to use pytest and pytest-mock to start to get better coverage on my code.

    But Did You Fix It?!

    As it turns out, the python-markdown update did NOT fix the nested list issues. Apparently, all I really needed to do was make sure I configured python-markdown to use the sane_lists extension.
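Enabling the extension is a one-liner when calling python-markdown; the input string here is just a toy example:

```python
import markdown

# Without sane_lists, python-markdown's list handling is looser and can
# mis-handle adjacent or nested lists; the extension makes list parsing
# stricter and more predictable.
html = markdown.markdown(
    "1. first\n2. second\n    1. nested\n",
    extensions=["sane_lists"],
)
```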

    So after many small break-fix releases, v1.0.9 is out and working. I fixed the nested lists issue and a few other small bugs found by adding additional unit tests.

    Mermaid Support

    For Confluence, Mermaid support is a paid extension (of course). However, you can use the Mermaid CLI (or, in my case, the docker image) to convert any Mermaid in the MD file into an image, which is then published to Confluence. I built a small pipeline template that runs these two steps. Have a look!
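Stripped down, the two steps reduce to something like the Azure DevOps fragment below. The paths are illustrative, and minlag/mermaid-cli is the published Docker image for the Mermaid CLI; the publish step is only a placeholder for the md-to-conf invocation:

```yaml
steps:
  # 1. Render a Mermaid definition to an image with the Mermaid CLI image
  #    (the image's working directory is /data, so paths are relative to
  #    the mounted source directory).
  - script: |
      docker run --rm -v "$(Build.SourcesDirectory):/data" \
        minlag/mermaid-cli -i docs/diagram.mmd -o docs/diagram.png
    displayName: Render Mermaid diagrams
  # 2. Run md-to-conf over the Markdown (now referencing the rendered
  #    images) to publish to Confluence.
```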

    While it would be nice to build the Mermaid to image conversion directly in md-to-conf, I was not able to quickly find a python library to do that work and, well, the mermaid-cli handles this conversion nicely, so I am happy with this particular two-step. Just don’t make me dance.

  • Building a Radar, Part 1 – A New Pattern

    This is the first in a series meant to highlight my work to build a non-trivial application that can serve as a test platform and reference. When it comes to design, it is helpful to have an application with enough complexity to properly evaluate the features and functionality of proposed solutions.

    Backend For Frontend

    Lately, I have been somewhat obsessed with the Backend for Frontend pattern, or BFF. There are a number of benefits to the pattern, articulated well all across the internet, so I will avoid a recap. I wanted an application that took advantage of this pattern so that I could start to demonstrate the benefits.

    I had previously done some work in putting a simple backend on the Zalando tech radar. It is a pretty simple Create/Retrieve/Update/Delete (CRUD) application, but complex enough that it would work in this case.

    Configuring the BFF

    At first, I started looking at converting the existing project, but quickly realized that this is a good time for a clean slate. I followed the MSDN tutorial to the letter to get a working sample application. From there, I moved my existing SPA to the working sample.

    With that in place, I walked through Auth0’s tutorial on implementing Backend for Frontend authentication in ASP.NET Core. In this case, I substituted my Duende Identity Server for the OAuth/Okta instance used in the tutorial. This all worked great, with the notable exception that I had to ensure all my proxies were in order.

    Show Your Work!

    Now, admittedly, my blogging is well behind my actual work, so if you go browsing the repository, it is a little farther ahead than this post. Next in this series, I’ll discuss configuring the BFF to proxy calls to a backend service.

    While the work is ahead of the post, the documentation is WAY behind, so please ignore the README.md file for now. I’ll get proper documentation completed as soon as I can.

  • A Tale of Two Proxies

    I am working on building a set of small reference applications to demonstrate some of the patterns and practices to help modernize cloud applications. In configuring all of this in my home lab, I spent at least 3 hours fighting a problem that turned out to be a configuration issue.

    Backend-for-Frontend Pattern

    I will get into more details when I post the full application, but I am trying to build out a SPA with a dedicated backend API that would host the SPA and take care of authentication. As is typically the case, I was able to get all of this working on my local machine, including the necessary proxying of calls via the SPA’s development server (again, more on this later).

    At some point, I had two containers ready to go: a BFF container hosting the SPA and the dedicated backend, and an API container hosting a data service. I felt ready to deploy to the Kubernetes cluster in my lab.

    Let the pain begin!

    I have enough samples within Helm/Helmfile that getting the items deployed was fairly simple. After fiddling with the settings of the containers, things were running well in the non-authenticated mode.

    However, when I clicked login, the following happened:

    1. I was redirected to my oAuth 2.0/OIDC provider.
    2. I entered my username/password
    3. I was redirected back to my application
    4. I got a 502 Bad Gateway screen

502! But, why? I consulted Google and found any number of articles indicating that, in the authentication flow, Nginx’s default header size limits are too small for what might be coming back from the redirect. So, consulting the Nginx configuration documents, I changed the Nginx configuration in my reverse proxy to allow for larger headers.

    No luck. Weird. In the spirit of true experimentation (change one thing at a time), I backed those changes out and tried changing the configuration of my Nginx Ingress controller. No luck. So what’s going on?

    Too Many Cooks

    My current implementation looks like this:

    flowchart TB
        A[UI] --UI Request--> B(Nginx Reverse Proxy)
        B --> C("Kubernetes Ingress (Nginx)")
        C --> D[UI Pod]
    

    There are two Nginx instances between all of my traffic: an instance outside of the cluster that serves as my reverse proxy, and an Nginx ingress controller that serves as the reverse proxy within the cluster.

    I tried changing both separately. Then I tried changing both at the same time. And I was still seeing this error. As it turns out, well, I was being passed some bad data as well.

    Be careful what you read on the Internet

    As it turns out, the issue was the difference in configuration between the two Nginx instances and some bad configuration values that I got from old internet articles.

    Reverse Proxy Configuration

    For the Nginx instance running on Ubuntu, I added the following to my nginx.conf file under the http section:

            proxy_buffers 4 512k;
            proxy_buffer_size 256k;
            proxy_busy_buffers_size 512k;
            client_header_buffer_size 32k;
            large_client_header_buffers 4 32k;

    Nginx Ingress Configuration

    I am running RKE2 clusters, so configuring Nginx involves a HelmChartConfig resource being created in the kube-system namespace. My cluster configuration looks like this:

    apiVersion: helm.cattle.io/v1
    kind: HelmChartConfig
    metadata:
      name: rke2-ingress-nginx
      namespace: kube-system
    spec:
      valuesContent: |-
        controller:
          kind: DaemonSet
          daemonset:
            useHostPort: true
          config:
            use-forwarded-headers: "true"
            proxy-buffer-size: "256k"
            proxy-buffers-number: "4"
            client-header-buffer-size: "256k"
            large-client-header-buffers: "4 16k"
            proxy-body-size: "10m"

    The combination of both of these settings got my redirects to work without the 502 errors.

    Better living through logging

    One of the things I fought with on this was finding the appropriate logs to see where the errors were occurring. I’m exporting my reverse proxy logs into Loki using a Promtail instance that listens on a syslog port. So I am “getting” the logs into Loki, but I couldn’t FIND them.

I forgot about the facility in syslog: I have the access logs sending as local5, but I configured the error logs without pointing them to local5. I learned that, by default, they go to local7.
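The fix on the Nginx side is just pointing the error log at the same facility as the access log; the server address below is a placeholder for the Promtail syslog listener:

```nginx
# nginx.conf -- send both logs to the same syslog facility. When the
# facility= parameter is omitted, nginx defaults to local7, which is why
# the error logs seemed to be "missing" in Loki.
access_log syslog:server=192.0.2.10:1514,facility=local5 combined;
error_log  syslog:server=192.0.2.10:1514,facility=local5 warn;
```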

    Once I found the logs I was able to diagnose the issue, but I spent a lot of time browsing in Loki looking for those logs.

  • Tech Tip – Chiseled Images from Microsoft

    I have been spending a considerable amount of time in .Net 8 lately. In addition to some POC work, I have been transitioning some of my personal projects to .Net 8. While the details of that work will be the topic of a future post (or posts), Microsoft’s chiseled containers are worth a quick note.

    In November, Microsoft released .NET Chiseled Containers into GA. These containers are slimmed-down versions of the .NET Linux containers, focused on getting a “bare bones” container that can be used as a base for a variety of containers.

    If you are building containers from Microsoft’s .NET container images, chiseled containers are worth a look!

    A Quick Note on Globalization

I tried moving two of my containers to the 8.0-jammy-chiseled base image. The frontend, with no database connection, worked fine. However, the API with the database connection ran into a globalization issue.

Apparently, Microsoft.Data.SqlClient requires a few OS libraries that are not part of chiseled. Specifically, the International Components for Unicode (ICU) libraries are not included, by default, in the chiseled image. Ubuntu-rocks demonstrates how they can be added, but, for now, I am leaving that image as the standard 8.0-jammy image.
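If I recall correctly, Microsoft also publishes "extra" variants of the chiseled images that layer ICU and tzdata on top of the bare chiseled base, so switching would be a one-line change; double-check the tag before relying on it:

```dockerfile
# The -extra chiseled variant adds ICU and tzdata on top of the bare
# chiseled image, which covers Microsoft.Data.SqlClient's globalization
# requirements while staying chiseled.
FROM mcr.microsoft.com/dotnet/aspnet:8.0-jammy-chiseled-extra
```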

  • Mermaids!

    Whether it’s software, hardware, or real world construction, an architect’s life is about drawings. I am always on the lookout for new tools to make keeping diagrams and drawings up-to-date, and, well, I found a mermaid.

    Mermaid.js

    Mermaid is, essentially, a system to render diagrams and visualizations using text and code. According to their site, it is Javascript-based and Markdown-inspired, and allows developers to spend less time managing diagrams and more time writing code.

It currently supports a number of different diagram types, including flow charts, sequence diagrams, and state diagrams. In addition to that, many providers (including Github and Atlassian Confluence Cloud) provide support for Mermaid charts, either free of charge (thanks Github!) or via paid add-on applications (not surprised, Atlassian). I’m sure other providers have support, but those are the two I am using.

    Mermaid in Action

    As of right now, I have only had the opportunity to use Mermaid charts at work, so my examples are not publicly available. You will have to settle for my anecdotes until I get some charts and visualization into some of my open source projects.

At work, though, I have been using the Gitgraph diagrams to visualize some of our current and proposed workflows for our development teams. Being able to visualize that Git workflow makes the documentation much easier to understand for our teams.

    Additionally, I created a few sequence diagrams to illustrate a proposed flow for authentication across multiple services and applications. I could have absolutely created these diagrams in Miro (which is our current illustrating tool), but aligning the different boxes and lines would take a tremendous amount of time. By comparison, my Mermaid diagrams were around 20 lines and fully illustrated my scenarios.

    In WordPress?

Obviously, I would really like to be able to use Mermaid charts in my blog to add visualizations to posts. Since Mermaid is Javascript-based, I figured there would be a plugin to render Mermaid code in blog posts.

    WP-Mermaid should, in theory, make this work. However…. well, it doesn’t. I’m not the only person with the issue. A quick bit of research shows that the issue is how WordPress is “cleaning up” the code that is put in, since it’s not tagged as preformatted (using the pre tag). I was able to hack in a test to see if adding pre and then changing the rendering in the plugin would work. It works just fine…

    And so my to-do list grows. I would like to use Mermaid charts in WordPress, but I have to fix it first.

  • More GitOps Fun!

    I have been curating some scripts that help me manage version updates in my GitOps repositories… It’s about time they get shared with the world.

    What’s Going On?

    I manage the applications in my Kubernetes clusters using Argo CD and a number of Git repositories. Most of the ops- repositories act as “desired state” repositories.

    As part of this management, I have a number of external tools running in my clusters that are installed using their Helm charts. Since I want to keep my installs up to date, I needed a way to update the Helm chart versions as new releases came out.

However, some external tools do not have their own Helm charts. For those, I have been using a Helm library chart from bjw-s. In that case, I have had to manually find new releases and update my values.yaml file.

    While I have had the Helm chart version updates automated for some time, I just recently got around to updating the values.yaml file from external sources. Now is a good time to share!

    The Scripts

    I put the scripts in the ops-automation repository in the Spydersoft organization. I’ll outline the basics of each script, but if you are interested in the details, check out the scripts themselves.

It is worth noting that these scripts require the git and helm command line tools to be installed, in addition to the PowerShell Yaml module.

    Also, since I manage more than one repository, all of these scripts are designed to be given a basePath and then a list of directory names for the folders that are the Git repositories I want to update.

    Update-HelmRepositoryList

This script iterates through the given folders to find the Chart.yaml files in them. For every dependency in the found chart files, it adds the repository to the local Helm repository list if the URL is not already present.

    Since I have been running this on my local machine, I only have to do this once. But, on a build agent, this script should be run every time to make sure the repository list contains all the necessary repositories for an update.
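The scripts themselves are PowerShell, but the core of the repository scan is simple enough to sketch in a few lines of Python; the function and its naive regex-based parse are mine, not the script's actual implementation:

```python
import re

def chart_repo_urls(chart_yaml_text):
    """Pull the dependency repository URLs out of a Chart.yaml.

    A Chart.yaml dependency block looks like:
        dependencies:
          - name: grafana
            version: 7.0.0
            repository: https://grafana.github.io/helm-charts
    """
    urls = re.findall(r"repository:\s*(\S+)", chart_yaml_text)
    # De-duplicate while preserving order, mirroring the script's
    # "add only if the URL does not already exist" behavior.
    return list(dict.fromkeys(urls))
```

Each URL not already present in `helm repo list` would then get a `helm repo add`.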

    Update-HelmCharts

This script iterates through the given folders to find the Chart.yaml files in them. For every dependency, the script determines if there is an updated version of the dependency available.

    If there is an update available, the Chart.yaml file is updated, and helm dependency update is run to update the Chart.lock file. Additionally, commit comments are created to note the version changes.

For each Chart.yaml file, a call to Update-FromAutoUpdate is made to apply additional updates if necessary.

    Update-FromAutoUpdate

    This script looks for a file called auto-update.json in the path given. The file has the following format:

    {
        "repository": "redis-stack/redis-stack",
        "stripVFromVersion": false,
        "tagPath": "redis.image.tag"
    }

The script looks up the latest release of the repository on Github, using tag_name from Github as the version. If the latest release is newer than the version currently at tagPath in values.yaml, the script updates that value to the new version. The script returns an object indicating whether or not an update was made, as well as a commit comment noting the version jump.
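The version comparison is the only subtle part. The script is PowerShell; this is a rough Python rendition of the idea, with a naive dotted-number parse and my own guess at how the leading "v" is handled:

```python
def parse_version(tag, strip_v=True):
    """Turn a GitHub tag_name like 'v7.2.4' into a comparable tuple."""
    if strip_v and tag.startswith("v"):
        tag = tag[1:]
    return tuple(int(part) for part in tag.split("."))

def needs_update(current_tag, latest_tag, strip_v=True):
    # Tuple comparison gives correct numeric ordering, so 7.10.0 > 7.9.0.
    return parse_version(latest_tag, strip_v) > parse_version(current_tag, strip_v)
```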

    Right now, the auto-update only works for images that come from Github releases. I have one item (Proget) that needs to search a docker API directly, but that will be a future enhancement.

    Future Tasks

    Now that these are automated tasks, I will most likely create an Azure Pipeline that runs weekly to get these changes made and committed to Git.

    I have Argo configured to not auto-sync these applications, so even though the changes are made in Git, I still have to manually apply the updates. And I am ok with that. I like to stagger application updates, and, in some cases, make sure I have the appropriate backups before running an update. But this gets me to a place where I can log in to Argo and sync apps as I desire.

  • Distributing C# Code Standards with a Nuget Package

    My deep dive into Nuget packages made me ask a new question: Can I use Nuget packages to deploy code standards for C#?

    It’s Good To Have Standards

    Generally, the goal of coding standards is to produce a uniform code base. My go-to explanation of what coding standards are is this: “If you look at the code, you should not be able to tell who wrote it.”

Linting is, to some degree, an extension of formatting. Linting applies a level of analysis to your code, identifying potential runtime problems. Linting rules typically derive from various best practices, which can (and do) change over time, especially for languages in active development. Think of it this way: C++ has not changed much in the last 20 years, but C#, JavaScript, and, by association, TypeScript have all seen active changes in recent history.

    There are many tools for each language that offer linting and formatting. I have developed some favorites:

    • Formatting
      • dotnet format – C#
      • prettier – Javascript/TypeScript
      • black – Python
    • Linting
      • Sonar – support for various languages
      • eslint – Javascript/TypeScript
      • flake8 – Python

    Regardless of your tool of choice, being able to deploy standards across multiple projects has been difficult. If your projects live in the same repository, you can store settings files in a common location, but for projects across multiple repositories, the task becomes harder.

    Nuget to the Rescue!

    I spent a good amount of time in the last week relearning Nuget. In this journey, it occurred to me that I could create a Nuget package that only contained references and settings for our C# standards. So I figured I’d give it a shot. Turns out, it was easier than I anticipated.

    There are 3 files in the project:

    • The csproj file – MyStandards.CSharp.csproj
    • main.editorconfig
    • MyStandards.CSharp.props

    I put the content of the CSProj and Props files below. main.editorconfig will get copied into the project where this package is referenced as .editorconfig. You can read all about customizing .editorconfig here.

    MyStandards.CSharp.csproj

    <Project Sdk="Microsoft.NET.Sdk">
    
    	<PropertyGroup>
    		<TargetFramework>netstandard2.1</TargetFramework>
    		<Nullable>enable</Nullable>
    		<IncludeBuildOutput>false</IncludeBuildOutput>
    		<DevelopmentDependency>true</DevelopmentDependency>
    	</PropertyGroup>
    
    	<ItemGroup>
    		<Content Include="main.editorconfig">
    			<CopyToOutputDirectory>Always</CopyToOutputDirectory>
    			<PackagePath>build</PackagePath>
    		</Content>
    		<Content Include="MyStandards.CSharp.props">
    			<CopyToOutputDirectory>Always</CopyToOutputDirectory>
    			<PackagePath>build</PackagePath>
    		</Content>
    	</ItemGroup>
    
    	<ItemGroup>
    		<PackageReference Include="SonarAnalyzer.CSharp" Version="9.15.0.81779" PrivateAssets="None" />
    	</ItemGroup>
    
    </Project>
    

    MyStandards.CSharp.props

    <Project>
    	<Target Name="CopyFiles" BeforeTargets="Build">
    		<ItemGroup>
    			<SourceFile Include="$(MSBuildThisFileDirectory)main.editorconfig"></SourceFile>
    			<DestFile Include="$(ProjectDir)\.editorconfig"></DestFile>
    		</ItemGroup>
    		<Copy SourceFiles="@(SourceFile)" DestinationFiles="@(DestFile)"></Copy>
    	</Target>
    </Project>
    

    What does it do?

    When this project is built as a Nuget package, it contains the .props and main.editorconfig files in the build folder of the package. Additionally, it has a dependency on SonarAnalyzer.CSharp.

    Make it a Development Dependency

    Note that IncludeBuildOutput is set to false, and DevelopmentDependency is set to true. This tells Nuget to not include the empty MyStandards.CSharp.dll file that gets built, and to package this Nuget file as a development only dependency.

    When referenced by another project, the reference will be added with the following defaults:

    <PackageReference Include="MyStandards.CSharp" Version="1.0.0">
      <PrivateAssets>all</PrivateAssets>
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
    </PackageReference>

    So, the consuming package won’t include a dependency for my standards, meaning no “chain” of references.

    Set the Settings

    The .props file does the yeoman’s work: it copies main.editorconfig into the target project’s folder as .editorconfig. Notice that this happens before every build, which prevents individual projects from overriding the .editorconfig file.

    Referencing Additional Development Tools

    As I mentioned, I’m a fan of the Sonar toolset for linting. As such, I like to add their built-in analyzers wherever I can. This includes adding a reference to SonarAnalyzer.CSharp to get some built-in analysis.

Since SonarAnalyzer.CSharp is a development dependency, it normally would not flow through to projects that install my package. It would be nice if my standards package carried it over to projects which use it. To do this, I set PrivateAssets="None" in the PackageReference, removing the default settings.

    The Road Ahead

    With my C# standards wrapped nicely in a Nuget package, my next steps will be to automate the build pipeline for the packages so that changes can be tracked.

    For my next trick, I would like to dig in to eslint and prettier, to see if there is a way to create my own extensions for those tools to distribute our standards for JavaScript/TypeScript. However, that task may have to wait a bit, as I have some build pipelines to create.

  • Pack it up, Pack it In…

    I dove very deeply into Nuget packages this week, and found some pretty useful steps in controlling how my package can control properties in the consuming project.

    Problem at Hand

    It’s been a long time since I built anything more than a trivial Nuget package. By “trivial” I mean telling the .Net project to package the version, and letting it take care of generating the Nuget package based on the project. If I added a .nuspec file, it was for things like description or copyright.

    I am working on building a package that contains some of our common settings for Unit Test execution, including Coverlet settings and XUnit runner settings. Additionally, I wanted the references in my package to be used in the consuming package, so that we did not need to update xunit or coverlet packages across all of our Unit Test projects.

    With these requirements, I needed to build a package that does two things:

    1. Copy settings files to the appropriate location in the consuming project.
    2. Make sure references in the package were added to the consuming project.

    Content Files… What happened??

    It would seem that the last time I used content files, packages.config was the way to include Nuget packages in a project. However, with the migration to using the PackageReference attribute, some things changed, including the way content files are handled.

    The long and short of it is, packages are no longer “copied on install,” so additional steps are required to move files around. However, I wanted those additional steps to be part of my package, rather than added to every referencing project.

    Out comes “props and targets.” You can add .props and/or .targets files to your package, using the same name as the package, and those props will be included in the consuming project. I have to admit, I spent a great deal of time researching MSBuild and how to actually make this do what I wanted. But, I will give you a quick summary and solution.

    My Solution

    myCompany.Test.UnitTest.Core.props

    <?xml version="1.0" encoding="utf-8"?>
    <Project ToolsVersion="12.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    	<PropertyGroup>
    		<RunSettingsFilePath>$(MSBuildThisFileDirectory)\coverlet.runsettings</RunSettingsFilePath>
    	</PropertyGroup>
    	<ItemGroup>
    		<Content Include="$(MSBuildThisFileDirectory)xunit.runner.json">
    			<Link>xunit.runner.json</Link>
    			<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    			<Visible>False</Visible>
    		</Content>
    	</ItemGroup>
    </Project>

    myCompany.Test.UnitTest.Core.csproj

    <Project Sdk="Microsoft.NET.Sdk">
    
    	<PropertyGroup>
    		<TargetFramework>net6.0</TargetFramework>
    		<OutputType>Library</OutputType>
    		<IsPackable>true</IsPackable>
    	</PropertyGroup>
    
    	<ItemGroup>
    		<PackageReference Include="JetBrains.Annotations" Version="2023.3.0" PrivateAssets="None" />
    		<PackageReference Include="Newtonsoft.Json" Version="13.0.3" PrivateAssets="None" />
    		<PackageReference Include="xunit" Version="2.6.3" PrivateAssets="None" />
    		<PackageReference Include="xunit.runner.utility" Version="2.6.3" PrivateAssets="None" />
    		<PackageReference Include="JustMock" Version="2023.3.1122.188" PrivateAssets="None" />
    		<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.8.0" PrivateAssets="None" />
    		<PackageReference Include="coverlet.collector" Version="6.0.0" PrivateAssets="None" />
    		<PackageReference Include="xunit.runner.visualstudio" Version="2.5.5" PrivateAssets="None" />
    	</ItemGroup>
    
    	<ItemGroup>
    		<Content Include="xunit.runner.json" CopyToOutputDirectory="Always">
    			<PackagePath>build</PackagePath>
    		</Content>
    		<Content Include="coverlet.runsettings" CopyToOutputDirectory="Always">
    			<PackagePath>build</PackagePath>
    		</Content>
    		<Content Include="myCompany.Test.UnitTest.Core.props" CopyToOutputDirectory="Always">
    			<PackagePath>build</PackagePath>
    		</Content>
    	</ItemGroup>
    
    </Project>

    Using Content Files

We have a common settings file (xunit.runner.json) that needs to be copied to the output directory of the unit tests, so that XUnit runs with common settings. To accomplish this, I added the file to my package’s .csproj file as a Content item targeting the build folder. Then, in the .props file, I added a Content entry to ensure the file is copied to the consuming project’s output directory.

    For the coverlet.runsettings file, all I need is to make sure that the consuming project uses that file as for its run settings. I set the RunSettingsFilePath in the .props file, and pointed it to the file in the package.

    Using References

In the .csproj file, you’ll notice that I have a number of package references with PrivateAssets="None" configured. By default, some of a package’s assets (content files, analyzers, and build props/targets) are private, meaning they are hidden from projects that pull the package in transitively.

    In this case, I do not want to hide any assets from these packages, I want them to flow through to the consuming project. Setting PrivateAssets="None" tells the consuming project that these packages should be used in full. This post and the MSDN documentation shed most of the light on this particular topic for me.

    Miscellaneous Notes

    You may have noticed that, for the settings files, I sent them to the build folder alongside the .props file, rather than in content or contentFiles. Since these things are only used during build or execution of unit tests, this seemed the appropriate place.

    It’s also worth noting that I went through SEVERAL iterations of solutions on this. There is a ton of flexibility in MSBuild and its interactions with the packages installed, but I ended up with a solution which requires minimal effort on the package consumer side. And, for me, this is the intent: keep it DRY (Don’t Repeat Yourself).