Tag: .Net

  • Platform Engineering

    As I continue to build out some reference architecture applications, I realized that there was a great deal of boilerplate code that I add to my APIs to get things running. Time for a library!

    Enter the “Platform”

    I am generally terrible at naming things, but Spydersoft.Platform seemed like a good base namespace for this one. The intent is to put the majority of my boilerplate code into a set of libraries that can be referenced to make adding stuff easier.

    But, what kind of “stuff?” Well, for starters:

    • Support for OpenTelemetry trace, metrics, and logging
    • Serilog logging for console logging
    • Simple JWT identity authentication (for my APIs)
    • Default Health Check endpoints

    Going deep with Health Checks

    The first three were pretty easy: just some POCOs for options and then startup extensions to add the necessary items with the proper configuration. With health checks, however, I went a little overboard.
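    To give a sense of the shape, here is a minimal sketch of that approach: an options POCO plus a startup extension. The names (TelemetryOptions, AddPlatformTelemetry) are illustrative, not the actual Spydersoft.Platform API:

    using Microsoft.AspNetCore.Builder;
    using Microsoft.Extensions.Configuration;
    using Microsoft.Extensions.DependencyInjection;
    using OpenTelemetry.Resources;

    // Illustrative only: a hypothetical options POCO and startup extension.
    public class TelemetryOptions
    {
        public const string SectionName = "Telemetry";
        public bool Enabled { get; set; } = true;
        public string ServiceName { get; set; } = string.Empty;
    }

    public static class StartupExtensions
    {
        public static WebApplicationBuilder AddPlatformTelemetry(this WebApplicationBuilder builder)
        {
            // Bind the POCO from configuration, then register the services it drives.
            var options = new TelemetryOptions();
            builder.Configuration.GetSection(TelemetryOptions.SectionName).Bind(options);

            if (options.Enabled)
            {
                builder.Services.AddOpenTelemetry()
                    .ConfigureResource(r => r.AddService(options.ServiceName));
            }

            return builder;
        }
    }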

    My goal was to be able to implement IHealthCheck anywhere and decorate it in such a way that it would be added to the health check framework and could be tagged. Furthermore, I wanted to use tags to drive standard endpoints.

    In the end, I used a custom attribute and some reflection to add the checks that are found in the loaded AppDomain. I won’t bore you: the documentation should do that just fine.
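    The gist, with hypothetical names (the real attribute and registration live in that documentation), looks something like this:

    using System;
    using System.Linq;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Diagnostics.HealthChecks;

    // Hypothetical attribute: decorate any IHealthCheck with a name and tags.
    [AttributeUsage(AttributeTargets.Class)]
    public class TaggedHealthCheckAttribute : Attribute
    {
        public TaggedHealthCheckAttribute(string name, params string[] tags)
        {
            Name = name;
            Tags = tags;
        }

        public string Name { get; }
        public string[] Tags { get; }
    }

    public static class HealthCheckExtensions
    {
        // Scan the loaded AppDomain for decorated IHealthCheck implementations
        // and register each one with its declared tags.
        public static IHealthChecksBuilder AddTaggedHealthChecks(this IHealthChecksBuilder builder)
        {
            var checkTypes = AppDomain.CurrentDomain.GetAssemblies()
                .SelectMany(a => a.GetTypes())
                .Where(t => typeof(IHealthCheck).IsAssignableFrom(t) && !t.IsAbstract
                    && t.GetCustomAttributes(typeof(TaggedHealthCheckAttribute), false).Any());

            foreach (var type in checkTypes)
            {
                var attr = (TaggedHealthCheckAttribute)type
                    .GetCustomAttributes(typeof(TaggedHealthCheckAttribute), false).First();

                builder.Add(new HealthCheckRegistration(
                    attr.Name,
                    sp => (IHealthCheck)ActivatorUtilities.CreateInstance(sp, type),
                    failureStatus: null,
                    tags: attr.Tags));
            }

            return builder;
        }
    }

    The tags can then drive the standard endpoints, for example by mapping /health/ready with a predicate that selects only checks tagged “ready.”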

    But can we test it?

    Testing startup extensions is, well, interesting. Technically, it is an integration test, but I did not want to set up Playwright tests to execute the API tests. Why? Well, usually API integration tests are run against a particular configuration, but in this case, I needed to run the reference application with a lot of different configurations in order to fully test the extensions. Enter WebApplicationFactory.

    With WebApplicationFactory, I was able to configure tests to stand up a copy of the reference application with different configurations. I could then verify the configuration using some custom health checks.
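    A rough sketch of the technique (the Program entry point and configuration key are placeholders, not the actual test suite):

    using System.Collections.Generic;
    using System.Net;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.Mvc.Testing;
    using Microsoft.Extensions.Configuration;
    using Xunit;

    public class HealthCheckStartupTests : IClassFixture<WebApplicationFactory<Program>>
    {
        private readonly WebApplicationFactory<Program> _factory;

        public HealthCheckStartupTests(WebApplicationFactory<Program> factory)
        {
            // Override configuration to exercise a specific startup path.
            _factory = factory.WithWebHostBuilder(builder =>
                builder.ConfigureAppConfiguration((_, config) =>
                    config.AddInMemoryCollection(new Dictionary<string, string?>
                    {
                        ["HealthChecks:Enabled"] = "true" // placeholder key
                    })));
        }

        [Fact]
        public async Task HealthEndpoint_Responds_WhenEnabled()
        {
            var client = _factory.CreateClient();
            var response = await client.GetAsync("/health");
            Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        }
    }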

    I am on the fence as to whether or not this is a “unit” test or an “integration” test. I’m not calling out to any other application, which is usually the definition of an integration test. But I did have to configure a reference application in order to get things tested.

    Whatever you call it, I have coverage on my startup extensions, and even caught a few bugs while I was writing the tests.

    Make it truly public?

    Right now, the build publishes the Nuget package to my private Nuget feed. I am debating moving it to Nuget.org (or maybe GitHub’s package feeds). While the code is open source, I want the packaged library to be openly available as well. But until I make the decision on where to put it, I will keep it in my private feed. If you have any interest in it, watch or star the repo on GitHub: it will help me gauge the level of interest.

  • Isolating your Azure Functions

    I spent a good bit of time over the last two weeks converting our Azure functions from the in-process to the isolated worker process model. Overall the transition was fairly simple, but there were a few bumps in the proverbial road worth noting.

    Migration Process

    Microsoft Learn has a very detailed How To Guide for this migration. The guide includes steps for updating the project file and references, as well as additional packages that are required based on various trigger types.

    Since I had a number of functions to process, I followed the guide for the first one, and that worked swimmingly. However, then I got lazy and started the “copy-paste” conversion. In that laziness, I missed a particular section of the project file:

    <ItemGroup>
      <Using Include="System.Threading.ExecutionContext" Alias="ExecutionContext"/>
    </ItemGroup>

    Unfortunately, if you forget this, nothing breaks in your local development environment. However, when you publish the function to Azure, it will not execute correctly.

    Fixing Dependency Injection

    When using the in-process model, there are some “freebies” that get added to the dependency injection system, as if by magic. ILogger, in particular, could be automatically injected into the function (as a function parameter). However, in the isolated model, you must get ILogger from either the FunctionContext or through dependency injection into the class.

    As part of our conversion, we removed the function parameters for ILogger and replaced them with service instances retrieved through dependency injection at the class level.
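    In practice, the shape of the change looks something like this (a hypothetical function for illustration):

    using Microsoft.Azure.Functions.Worker;
    using Microsoft.Azure.Functions.Worker.Http;
    using Microsoft.Extensions.Logging;

    public class MyFunction
    {
        private readonly ILogger<MyFunction> _logger;

        // Isolated model: ILogger arrives through constructor injection
        // instead of as a parameter on the function method.
        public MyFunction(ILogger<MyFunction> logger)
        {
            _logger = logger;
        }

        [Function("MyFunction")]
        public HttpResponseData Run(
            [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req)
        {
            _logger.LogInformation("Executing MyFunction");
            return req.CreateResponse(System.Net.HttpStatusCode.OK);
        }
    }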

    What we did not realize until we got our functions into the test environments was that IHttpContextAccessor was not available in the isolated model. Apparently, that particular interface is available as part of the in-process model automatically, but is not added as part of the isolated model. So we had to add an instance of IHttpContextAccessor to our services collection in the Program.cs file.
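    The fix itself is one line in the service registration. A sketch against the standard isolated-worker Program.cs shape (assuming the Microsoft.AspNetCore.Http package is referenced):

    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Hosting;

    var host = new HostBuilder()
        .ConfigureFunctionsWorkerDefaults()
        .ConfigureServices(services =>
        {
            // The in-process model registered this for us; isolated does not.
            services.AddHttpContextAccessor();
        })
        .Build();

    host.Run();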

    It is never easy

    Upgrades or migrations are never just “change this and go.” As much as we try to make them easy, there always seems to be a little change here or there that ends up being a fly in the ointment. In our case, we simply assumed that IHttpContextAccessor was there because in-process put it there, and the code which needed it was a few layers deep in the dependency tree. The only way to find it was to make the change and see what broke. And that is what keeps quality engineers up at night.

  • Building a Radar, Part 1 – A New Pattern

    This is the first in a series meant to highlight my work to build a non-trivial application that can serve as a test platform and reference. When it comes to design, it is helpful to have an application with enough complexity to properly evaluate the features and functionality of proposed solutions.

    Backend For Frontend

    Lately, I have been somewhat obsessed with the Backend for Frontend pattern, or BFF. There are a number of benefits to the pattern, articulated well all across the internet, so I will avoid a recap. I wanted an application that took advantage of this pattern so that I could start to demonstrate the benefits.

    I had previously done some work in putting a simple backend on the Zalando tech radar. It is a pretty simple Create/Retrieve/Update/Delete (CRUD) application, but complex enough that it would work in this case.

    Configuring the BFF

    At first, I started looking at converting the existing project, but quickly realized that this is a good time for a clean slate. I followed the MSDN tutorial to the letter to get a working sample application. From there, I moved my existing SPA to the working sample.

    With that in place, I walked through Auth0’s tutorial on implementing Backend for Frontend authentication in ASP.NET Core. In this case, I substituted my Duende Identity Server for the Auth0/Okta instance used in the tutorial. This all worked great, with the notable exception that I had to ensure all my proxies were in order.
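    For flavor, the heart of that wiring is cookie authentication plus OpenID Connect, sketched below with placeholder values (the tutorial covers the real details):

    using Microsoft.AspNetCore.Authentication.Cookies;
    using Microsoft.AspNetCore.Authentication.OpenIdConnect;

    builder.Services.AddAuthentication(options =>
    {
        // The BFF keeps the user session in a cookie...
        options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        // ...and challenges against the identity provider via OIDC.
        options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddCookie()
    .AddOpenIdConnect(options =>
    {
        options.Authority = "https://identity.example.com"; // placeholder for my Duende Identity Server
        options.ClientId = "radar-bff";                     // hypothetical client
        options.ClientSecret = "secret-from-configuration"; // never hard-code in practice
        options.ResponseType = "code";
        options.Scope.Add("openid");
        options.Scope.Add("profile");
        options.SaveTokens = true; // tokens stay server-side, never in the browser
    });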

    Show Your Work!

    Now, admittedly, my blogging is well behind my actual work, so if you go browsing the repository, it is a little farther ahead than this post. Next in this series, I’ll discuss configuring the BFF to proxy calls to a backend service.

    While the work is ahead of the post, the documentation is WAY behind, so please ignore the README.md file for now. I’ll get proper documentation completed as soon as I can.

  • Tech Tip – Chiseled Images from Microsoft

    I have been spending a considerable amount of time in .Net 8 lately. In addition to some POC work, I have been transitioning some of my personal projects to .Net 8. While the details of that work will be the topic of a future post (or posts), Microsoft’s chiseled containers are worth a quick note.

    In November, Microsoft released .NET Chiseled Containers into GA. These containers are slimmed-down versions of the .NET Linux containers, focused on getting a “bare bones” container that can be used as a base for a variety of containers.

    If you are building containers from Microsoft’s .NET container images, chiseled containers are worth a look!
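    Switching is usually just a base image swap in the final stage of a multi-stage Dockerfile. A minimal sketch, assuming a typical ASP.NET Core project (MyApi.csproj is a placeholder):

    # Build stage stays on the full SDK image.
    FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
    WORKDIR /src
    COPY . .
    RUN dotnet publish MyApi.csproj -c Release -o /app/publish

    # Runtime stage swaps the standard aspnet image for the chiseled variant.
    FROM mcr.microsoft.com/dotnet/aspnet:8.0-jammy-chiseled AS runtime
    WORKDIR /app
    COPY --from=build /app/publish .
    ENTRYPOINT ["dotnet", "MyApi.dll"]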

    A Quick Note on Globalization

    I tried moving two of my containers to the 8.0-jammy-chiseled base image. The frontend, with no database connection, worked fine. However, the API with the database connection ran into a globalization issue.

    Apparently, Microsoft.Data.SqlClient requires a few OS libraries that are not part of chiseled. Specifically, the International Components for Unicode (ICU) libraries are not included, by default, in the chiseled image. Ubuntu-rocks demonstrates how they can be added, but, for now, I am leaving that image as the standard 8.0-jammy image.

  • Distributing C# Code Standards with a Nuget Package

    My deep dive into Nuget packages made me ask a new question: Can I use Nuget packages to deploy code standards for C#?

    It’s Good To Have Standards

    Generally, the goal of coding standards is to produce a uniform code base. My go-to explanation of what coding standards are is this: “If you look at the code, you should not be able to tell who wrote it.”

    Linting is, to some degree, an extension of formatting. Linting applies a level of analysis to your code, identifying potential runtime problems. Linting rules typically derive from various best practices which can (and do) change over time, especially for languages in active development. Think of it this way: C++ has not changed much in the last 20 years, but C#, JavaScript, and, by association, TypeScript, have all seen active changes in recent history.

    There are many tools for each language that offer linting and formatting. I have developed some favorites:

    • Formatting
      • dotnet format – C#
      • prettier – JavaScript/TypeScript
      • black – Python
    • Linting
      • Sonar – support for various languages
      • eslint – JavaScript/TypeScript
      • flake8 – Python

    Regardless of your tool of choice, being able to deploy standards across multiple projects has been difficult. If your projects live in the same repository, you can store settings files in a common location, but for projects across multiple repositories, the task becomes harder.

    Nuget to the Rescue!

    I spent a good amount of time in the last week relearning Nuget. In this journey, it occurred to me that I could create a Nuget package that only contained references and settings for our C# standards. So I figured I’d give it a shot. Turns out, it was easier than I anticipated.

    There are 3 files in the project:

    • The csproj file – MyStandards.CSharp.csproj
    • main.editorconfig
    • MyStandards.CSharp.props

    I put the content of the CSProj and Props files below. main.editorconfig will get copied into the project where this package is referenced as .editorconfig. You can read all about customizing .editorconfig here.

    MyStandards.CSharp.csproj

    <Project Sdk="Microsoft.NET.Sdk">
    
    	<PropertyGroup>
    		<TargetFramework>netstandard2.1</TargetFramework>
    		<Nullable>enable</Nullable>
    		<IncludeBuildOutput>false</IncludeBuildOutput>
    		<DevelopmentDependency>true</DevelopmentDependency>
    	</PropertyGroup>
    
    	<ItemGroup>
    		<Content Include="main.editorconfig">
    			<CopyToOutputDirectory>Always</CopyToOutputDirectory>
    			<PackagePath>build</PackagePath>
    		</Content>
    		<Content Include="MyStandards.CSharp.props">
    			<CopyToOutputDirectory>Always</CopyToOutputDirectory>
    			<PackagePath>build</PackagePath>
    		</Content>
    	</ItemGroup>
    
    	<ItemGroup>
    		<PackageReference Include="SonarAnalyzer.CSharp" Version="9.15.0.81779" PrivateAssets="None" />
    	</ItemGroup>
    
    </Project>
    

    MyStandards.CSharp.props

    <Project>
    	<Target Name="CopyFiles" BeforeTargets="Build">
    		<ItemGroup>
    			<SourceFile Include="$(MSBuildThisFileDirectory)main.editorconfig"></SourceFile>
    			<DestFile Include="$(ProjectDir)\.editorconfig"></DestFile>
    		</ItemGroup>
    		<Copy SourceFiles="@(SourceFile)" DestinationFiles="@(DestFile)"></Copy>
    	</Target>
    </Project>
    

    What does it do?

    When this project is built as a Nuget package, it contains the .props and main.editorconfig files in the build folder of the package. Additionally, it has a dependency on SonarAnalyzer.CSharp.

    Make it a Development Dependency

    Note that IncludeBuildOutput is set to false, and DevelopmentDependency is set to true. This tells Nuget not to include the empty MyStandards.CSharp.dll file that gets built, and to package this as a development-only dependency.

    When referenced by another project, the reference will be added with the following defaults:

    <PackageReference Include="MyStandards.CSharp" Version="1.0.0">
      <PrivateAssets>all</PrivateAssets>
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
    </PackageReference>

    So, the consuming project won’t expose a dependency on my standards package, meaning no “chain” of references.

    Set the Settings

    The .props file does the yeoman’s work: it copies main.editorconfig into the target project’s folder as .editorconfig. Notice that this happens before every build, which prevents individual projects from overriding the .editorconfig file.

    Referencing Additional Development Tools

    As I mentioned, I’m a fan of the Sonar toolset for linting. As such, I like to add their built-in analyzers wherever I can. This includes adding a reference to SonarAnalyzer.CSharp to get some built-in analysis.

    Since SonarAnalyzer.CSharp is a development dependency, its assets would not normally flow through to projects that install my package. It would be nice if my standards package carried this reference over to projects which use it. To do this, I set PrivateAssets="None" in the PackageReference, overriding the development-dependency defaults.

    The Road Ahead

    With my C# standards wrapped nicely in a Nuget package, my next steps will be to automate the build pipeline for the packages so that changes can be tracked.

    For my next trick, I would like to dig in to eslint and prettier, to see if there is a way to create my own extensions for those tools to distribute our standards for JavaScript/TypeScript. However, that task may have to wait a bit, as I have some build pipelines to create.

  • Pack it up, Pack it In…

    I dove very deeply into Nuget packages this week, and found some pretty useful techniques for letting my package control properties in the consuming project.

    Problem at Hand

    It’s been a long time since I built anything more than a trivial Nuget package. By “trivial” I mean telling the .Net project to package the version, and letting it take care of generating the Nuget package based on the project. If I added a .nuspec file, it was for things like description or copyright.

    I am working on building a package that contains some of our common settings for Unit Test execution, including Coverlet settings and XUnit runner settings. Additionally, I wanted the references in my package to be used in the consuming package, so that we did not need to update xunit or coverlet packages across all of our Unit Test projects.

    With these requirements, I needed to build a package that does two things:

    1. Copy settings files to the appropriate location in the consuming project.
    2. Make sure references in the package were added to the consuming project.

    Content Files… What happened??

    It would seem that the last time I used content files, packages.config was the way to include Nuget packages in a project. However, with the migration to the PackageReference format, some things changed, including the way content files are handled.

    The long and short of it is, packages are no longer “copied on install,” so additional steps are required to move files around. However, I wanted those additional steps to be part of my package, rather than added to every referencing project.

    Out comes “props and targets.” You can add .props and/or .targets files to your package, using the same name as the package, and MSBuild will automatically import them into the consuming project. I have to admit, I spent a great deal of time researching MSBuild and how to actually make this do what I wanted. But, I will give you a quick summary and solution.

    My Solution

    myCompany.Test.UnitTest.Core.props

    <?xml version="1.0" encoding="utf-8"?>
    <Project ToolsVersion="12.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    	<PropertyGroup>
    		<RunSettingsFilePath>$(MSBuildThisFileDirectory)coverlet.runsettings</RunSettingsFilePath>
    	</PropertyGroup>
    	<ItemGroup>
    		<Content Include="$(MSBuildThisFileDirectory)xunit.runner.json">
    			<Link>xunit.runner.json</Link>
    			<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    			<Visible>False</Visible>
    		</Content>
    	</ItemGroup>
    </Project>

    myCompany.Test.UnitTest.Core.csproj

    <Project Sdk="Microsoft.NET.Sdk">
    
    	<PropertyGroup>
    		<TargetFramework>net6.0</TargetFramework>
    		<OutputType>Library</OutputType>
    		<IsPackable>true</IsPackable>
    	</PropertyGroup>
    
    	<ItemGroup>
    		<PackageReference Include="JetBrains.Annotations" Version="2023.3.0" PrivateAssets="None" />
    		<PackageReference Include="Newtonsoft.Json" Version="13.0.3" PrivateAssets="None" />
    		<PackageReference Include="xunit" Version="2.6.3" PrivateAssets="None" />
    		<PackageReference Include="xunit.runner.utility" Version="2.6.3" PrivateAssets="None" />
    		<PackageReference Include="JustMock" Version="2023.3.1122.188" PrivateAssets="None" />
    		<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.8.0" PrivateAssets="None" />
    		<PackageReference Include="coverlet.collector" Version="6.0.0" PrivateAssets="None" />
    		<PackageReference Include="xunit.runner.visualstudio" Version="2.5.5" PrivateAssets="None" />
    	</ItemGroup>
    
    	<ItemGroup>
    		<Content Include="xunit.runner.json" CopyToOutputDirectory="Always">
    			<PackagePath>build</PackagePath>
    		</Content>
    		<Content Include="coverlet.runsettings" CopyToOutputDirectory="Always">
    			<PackagePath>build</PackagePath>
    		</Content>
    		<Content Include="myCompany.Test.UnitTest.Core.props" CopyToOutputDirectory="Always">
    			<PackagePath>build</PackagePath>
    		</Content>
    	</ItemGroup>
    
    </Project>

    Using Content Files

    We have a common settings file (xunit.runner.json) that needs to be copied to the output directory of the Unit Tests, so that XUnit runs with common settings. To accomplish this, I added the file to my package’s .csproj file as a content file (the Content item with a PackagePath of build). Then, in the .props file, I added a Content item to ensure that the file is copied to the consuming project’s output directory.

    For the coverlet.runsettings file, all I need is to make sure that the consuming project uses that file for its run settings. I set the RunSettingsFilePath property in the .props file, and pointed it to the file in the package.

    Using References

    In the .csproj file, you’ll notice that I have a number of references to packages with PrivateAssets="None" configured. By default, some assets in a package are “hidden” from the consuming project, meaning that when these packages are installed transitively, those assets do not flow through.

    In this case, I do not want to hide any assets from these packages, I want them to flow through to the consuming project. Setting PrivateAssets="None" tells the consuming project that these packages should be used in full. This post and the MSDN documentation shed most of the light on this particular topic for me.
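    The payoff shows up in the consuming test project, which shrinks to something like this (a hypothetical example):

    <Project Sdk="Microsoft.NET.Sdk">
    
    	<PropertyGroup>
    		<TargetFramework>net6.0</TargetFramework>
    		<IsPackable>false</IsPackable>
    	</PropertyGroup>
    
    	<ItemGroup>
    		<!-- One reference pulls in xunit, coverlet, JustMock, and the shared
    		     runner/coverage settings via the packaged .props file. -->
    		<PackageReference Include="myCompany.Test.UnitTest.Core" Version="1.0.0" />
    	</ItemGroup>
    
    </Project>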

    Miscellaneous Notes

    You may have noticed that, for the settings files, I sent them to the build folder alongside the .props file, rather than in content or contentFiles. Since these things are only used during build or execution of unit tests, this seemed the appropriate place.

    It’s also worth noting that I went through SEVERAL iterations of solutions on this. There is a ton of flexibility in MSBuild and its interactions with the packages installed, but I ended up with a solution which requires minimal effort on the package consumer side. And, for me, this is the intent: keep it DRY (Don’t Repeat Yourself).

  • When a Build is not a Build

    In my experience, the best way to learn a new software system is to see how it is built. That path led me to some warnings housekeeping, which led me to a weird issue with Azure DevOps.

    Housekeeping!

    In starting my new role, I have been trying to get a lay of the land. That includes cloning repositories, configuring authentication, and getting local builds running. As I was doing that, I noticed some build warnings.

    Now, I will be the first to admit, in the physical world, I am not a “neat freak.” But, in my little digital world, I appreciate builds free of warnings, unit tests where appropriate, and a well-organized distribution system.

    So with that, I spent a little time fixing the build warnings or entering new technical debt tickets for them to be addressed at a later date. I also spent some time digging into the build process to start making these more apparent to the teams.

    Where are the Warnings?

    In digging through our builds, I noticed that none of them listed the warnings that I was seeing locally. I found that odd, so I started looking at my local build to see what was wrong.

    And I did, in fact, find some issues with my local environment. After some troubleshooting, I literally deleted everything (Nuget caches and all my bin/obj folders), and that got rid of some of the warnings.

    However, there were still general warnings that the compiler was showing that I was not seeing in the build pipeline. I noticed that we were using the publish command of the DotNetCoreCli@2 task to effectively build and publish the projects, creating build artifacts. Looking at the logs for those publish tasks, I saw the warnings I expected. But Azure DevOps was not picking them up in the pipeline.

    Some Google searches led me to this issue. As it turns out, it is a known problem that warnings output using the publish command do not get picked up by Azure DevOps. So what do we do?

    Build it!

    While publish does not output the warnings correctly, the build command for the same task does. So, instead of one step to build and publish, we have to separate:

    - task: DotNetCoreCLI@2
      displayName: 'Build Project'
      inputs:
        command: 'build'
        publishWebProjects: false
        projects: '${{ parameters.project }}'
        ${{ if eq(parameters.forceRestore, true) }}:
          arguments: '-c ${{ parameters.configuration }}'
        ${{ else }}:
          arguments: '-c ${{ parameters.configuration }} --no-restore' 
    
    - task: DotNetCoreCLI@2
      displayName: 'Publish Project'
      inputs:
        command: 'publish'
        publishWebProjects: false
        nobuild: true
        projects: '${{ parameters.project }}'
        arguments: '-c ${{ parameters.configuration }} --output ${{ parameters.outputDir }} --no-build --no-restore' 
        zipAfterPublish: '${{ parameters.zipAfterPublish }}'
        modifyOutputPath: '${{ parameters.modifyOutputPath }}'

    Notice in the build task, I have a parameter to optionally pass --no-restore to the build, and in the publish I am passing in --no-build and --no-restore. Since these steps are running on the same build agent, we can do a build and then a publish with a --no-build parameter to prevent a double build scenario. Our pipeline also performs package restores separately, so most of the time, a restore is not necessary on the build. For the times that it is required, we can pass in the forceRestore parameter to configure that project.

    With this change, all the warnings that were not showing in the pipeline are now showing. Now it is time for me to get at fixing those warnings!

  • Don’t Mock Me!

    I spent almost two hours on a unit test issue yesterday, walking away with the issue unresolved and myself frustrated. I came back to it this morning and fixed it in 2 minutes. Remember, you don’t always need a mock library to create fakes.

    The Task at Hand

    In the process of removing some obsolete warnings from our builds, I came across a few areas where the change was less than trivial. Before making the changes, I decided it would be a good idea to write some unit tests to ensure that my changes did not affect functionality.

    The class to be tested, however, took IConfiguration in the constructor. Our current project does not make use of the Options pattern in .Net Core, meaning anything that needs configuration values has to carry around a reference to IConfiguration and then extract the values manually. Yes, I will want to change that, but not right now.

    So, in order to write these unit tests, I had to create a mock of IConfiguration that returned the values this class needed. Our project currently uses Telerik JustMock, so I figured it would be a fairly easy task to mock. However, I ran into a number of problems that had me going down the path of creating multiple mock classes for different interfaces, including IConfigurationSection. I immediately thought “There has to be a better way.”

    The Better Way

    Some quick Google research led me to this gem of a post on StackOverflow. In all my time with .Net configuration, I never knew about or used the AddInMemoryCollection extension. And that led me to the simplest solution: create a “real boy” instance of IConfiguration with the properties my class needs, and pass that to the class under test.
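    A minimal sketch of the technique (WidgetService and its configuration keys are hypothetical stand-ins for the real class under test):

    using System.Collections.Generic;
    using Microsoft.Extensions.Configuration;
    using Xunit;

    // Hypothetical class under test: takes IConfiguration in the constructor.
    public class WidgetService
    {
        public WidgetService(IConfiguration configuration)
        {
            ApiUrl = configuration["Widgets:ApiUrl"] ?? string.Empty;
        }

        public string ApiUrl { get; }
    }

    public class WidgetServiceTests
    {
        [Fact]
        public void Constructor_ReadsConfigurationValues()
        {
            // Build a real IConfiguration instead of mocking the interface graph.
            IConfiguration configuration = new ConfigurationBuilder()
                .AddInMemoryCollection(new Dictionary<string, string?>
                {
                    ["Widgets:ApiUrl"] = "https://localhost:5001"
                })
                .Build();

            var service = new WidgetService(configuration);

            Assert.Equal("https://localhost:5001", service.ApiUrl);
        }
    }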

    I suppose this is “dumb mocking” in the sense that it doesn’t use libraries written and dedicated to mocking objects. But it gets the job done in the simplest method possible.

  • Tech Tip – Options Pattern in ASP.NET Core

    I have looked this up at least twice this year. Maybe if I write about it, it will stick with me. If it doesn’t, well, at least I can look here.

    Options Pattern

    The Options pattern is a set of interfaces that allow you to read options into classes in your ASP.NET application. This allows you to configure options classes which are strongly typed with default values and attributes for option validation. It also removes most of the “magic strings” that can come along with reading configuration settings. I will do you all a favor and not regurgitate the documentation, but rather leave a link so you can read all about the pattern.

    A Small Sample

    Let’s assume I have a small class called HostSettings to store my options:

     public class HostSettings
     {
         public const string SectionName = "HostSettings";
         public string Host { get; set; } = string.Empty;
         public int Port { get; set; } = 5000;
     }

    And my appsettings.json file looks like this:

    {
      "HostSettings": {
        "Host": "http://0.0.0.0",
        "Port": 5000
      },
      // More settings here
    }

    Using Dependency Injection

    For whatever reason, I always seem to remember how to configure options using the dependency injector. Assuming the above, adding options to the store looks something like this:

    var builder = WebApplication.CreateBuilder(args);
    builder.Services.Configure<HostSettings>(builder.Configuration.GetSection(HostSettings.SectionName));

    From here, to get HostSettings into your class, add an IOptions<HostSettings> parameter to your class, and access the options using the IOptions.Value implementation.

    public class MyService
    {
       private readonly HostSettings _settings;
    
       public MyService(IOptions<HostSettings> options)
       {
          _settings = options.Value;
       }
    }

    Options without Dependency Injection

    What I always, always forget about is how to get options without using the DI pattern. Every time I look it up, I have that “oh, that’s right” moment.

    var hostSettings = new HostSettings();
    builder.Configuration.GetSection(HostSettings.SectionName).Bind(hostSettings);

    Yup. That’s it. Seems silly that I forget that, but I do. Pretty much every time I need to use it.

    A Note on SectionName

    You may notice the SectionName constant that I add to the class that holds the settings. This allows me to keep the name/location of the settings in the appsettings.json file within the class itself.

    Since I only have a few classes which house these options, I load them manually. It would not be a stretch, however, to create a simple interface and use reflection to load options classes dynamically. It could even be encapsulated into a small package for distribution across applications… Perhaps an idea for an open source package.
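    A rough sketch of what that hypothetical package might do. Everything here (the SectionName convention, the AddAllOptions name) is invented for illustration:

    using System;
    using System.Linq;
    using System.Reflection;
    using Microsoft.Extensions.Configuration;
    using Microsoft.Extensions.DependencyInjection;

    public static class OptionsLoaderExtensions
    {
        // Finds every class exposing a public const string SectionName and binds
        // the matching configuration section into the options store.
        public static IServiceCollection AddAllOptions(this IServiceCollection services, IConfiguration configuration)
        {
            var optionsTypes = AppDomain.CurrentDomain.GetAssemblies()
                .SelectMany(a => a.GetTypes())
                .Where(t => t.IsClass && !t.IsAbstract
                    && t.GetField("SectionName", BindingFlags.Public | BindingFlags.Static) is not null);

            // The two-parameter Configure<TOptions>(IServiceCollection, IConfiguration) overload.
            var configureMethod = typeof(OptionsConfigurationServiceCollectionExtensions)
                .GetMethods()
                .First(m => m.Name == "Configure" && m.GetParameters().Length == 2);

            foreach (var type in optionsTypes)
            {
                var sectionName = (string)type
                    .GetField("SectionName", BindingFlags.Public | BindingFlags.Static)!
                    .GetValue(null)!;

                configureMethod.MakeGenericMethod(type)
                    .Invoke(null, new object[] { services, configuration.GetSection(sectionName) });
            }

            return services;
        }
    }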

  • Collecting WCF Telemetry with Application Insights

    I got pulled back into diagnosing performance issues on an old friend, and it led me into some dead code with a chance for resuscitation.

    Trouble with the WCF

    One of our applications has a set of WCF Services that serve the client. We had not instrumented them manually for performance metrics, although we found that System.ServiceModel is well decorated for tracing, so adding a trace listener gives us a bevy of information. However, it’s all in *.svclog files (using an XML trace listener), so there is still a lot to do in order to find out what’s wrong. A colleague asked if Application Insights could work. And that suggestion started me down a path.
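    For reference, that trace listener is pure configuration: the standard System.ServiceModel tracing setup looks roughly like this (the log path is a placeholder):

    <system.diagnostics>
      <sources>
        <source name="System.ServiceModel"
                switchValue="Information, ActivityTracing"
                propagateActivity="true">
          <listeners>
            <add name="xmlTrace" />
          </listeners>
        </source>
      </sources>
      <sharedListeners>
        <!-- Writes traces to an .svclog file viewable in Service Trace Viewer. -->
        <add name="xmlTrace"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="C:\logs\MyService.svclog" />
      </sharedListeners>
    </system.diagnostics>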

    I found the lab

    I reached out to some contacts at Microsoft, and they pointed me to the Application Insights SDK Labs. In that lab is a library that instruments WCF services and applications. The problem: it hasn’t been updated in about 5 years.

    I figured that, with our application built on technology about that old, it was worth a try. So I followed the instructions, and I started getting telemetry in my Application Insights instance! However, I did notice a few things:

    1. The library as published relies on an old Application Insights version (2.5.0, I believe). The Application Insights libraries are up to 2.21, which means we may not be seeing everything we could.
    2. The library is based on .Net Framework 4.5, not 4.6.2, which is what we are using.

    So I did what any sensible developer does… fork it!

    I’ll do it myself!

    I forked the repository, created a branch, and got to work upgrading. Since this is just for work (right now), I did not bother to setup CI/CD yet, and took some liberties in assuming we would be running .Net Framework 4.6.2 for a while.

    I had to wade through the repository setup a bit: there is a lot of configuration around Nuget and versioning, and, frankly, I wasn’t looking to figure that out right now. However, I did manage to get a new library built, including updating the references to Microsoft.ApplicationInsights.

    Again, in the interest of time and testing, I manually pushed the package I built to our internal Nuget feed. I’m going to get this pushed to a development environment so that I can get some better telemetry than just the few calls that get made in my local environment.

    Next Steps?

    If this were an active project, I would probably have made a little more effort to do things the way they were originally structured and “play in the sandbox.” However, with no contribution rules or guidelines and no visible build pipeline, I am on my own with this one.

    If this turns out to be reasonably useful, I will probably take a look at the rest of the projects in that repository. I may also tap my contacts at Microsoft for some potential “next steps” with this one: while I do not relish the thought of owning a public archive, perhaps I could at least get a copy of their build pipeline so that I can replicate it on my end. Maybe a GitHub Actions pipeline with a push to Nuget?? Who knows.