Tag: SonarLint

  • Publishing Code Coverage in both Azure DevOps and SonarQube

    I spent more time than I care to admit trying to get the proper configuration for reporting code coverage to both the Azure DevOps pipeline and SonarQube. The solution was, well, fairly simple, but it is worth writing down.

    Testing, Testing…

    After fumbling around with some of the linting and publishing to SonarQube’s Community Edition, I succeeded in creating build pipelines which, when building from the main branch, run SonarQube analysis and publish the results to the project.

    I modified my build template as follows:

    - ${{ if eq(parameters.execute_sonar, true) }}:
        # Prepare Analysis Configuration task
        - task: SonarQubePrepare@5
          inputs:
            SonarQube: ${{ parameters.sonar_endpoint_name }}
            scannerMode: 'MSBuild'
            projectKey: ${{ parameters.sonar_project_key }}
    
      - task: DotNetCoreCLI@2
        displayName: 'DotNet restore packages (dotnet restore)'
        inputs:
          command: 'restore'
          feedsToUse: config
          nugetConfigPath: "$(Build.SourcesDirectory)/nuget.config"
          projects: "**/*.csproj"
          externalFeedCredentials: 'feedName'
    
      - task: DotNetCoreCLI@2
        displayName: Build (dotnet build)
        inputs:
          command: build
          projects: ${{ parameters.publishProject }}
          arguments: '--no-restore --configuration ${{ parameters.BUILD_CONFIGURATION }} /p:InformationalVersion=$(fullSemVer) /p:AssemblyVersion=$(AssemblySemVer) /p:AssemblyFileVersion=$(AssemblySemFileVer)'
          
    
    ## Test steps are here, details below
    
      - ${{ if eq(parameters.execute_sonar, true) }}:
        - powershell: |
            # Community Edition rejects sonar.branch.name; strip it from the scanner params
            $params = "$env:SONARQUBE_SCANNER_PARAMS" -replace '"sonar.branch.name":"[\w,/,-]*"\,?'
            Write-Host "##vso[task.setvariable variable=SONARQUBE_SCANNER_PARAMS]$params"
          displayName: Strip sonar.branch.name from scanner params
    
        # Run Code Analysis task
        - task: SonarQubeAnalyze@5
    
        # Publish Quality Gate Result task
        - task: SonarQubePublish@5
          inputs:
            pollingTimeoutSec: '300'

    In the code above, the execute_sonar parameter allows me to execute the SonarQube steps only when building the main branch, which keeps the Community Edition happy while retaining the rest of my pipeline definition on feature branches.

    This configuration worked, and my project’s analysis showed up in SonarQube.
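    For reference, the caller pipeline can compute that parameter at template-compile time. A minimal sketch; the template path, endpoint name, and project key below are placeholder assumptions, not my actual setup:

    steps:
      - template: templates/build-steps.yml
        parameters:
          # Only compile the SonarQube steps in when building main
          execute_sonar: ${{ eq(variables['Build.SourceBranch'], 'refs/heads/main') }}
          sonar_endpoint_name: 'MySonarQubeConnection'
          sonar_project_key: 'my-project-key'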

    Testing, Testing, 1, 2, 3…

    I went about adding some trivial unit tests in order to verify that I could get code coverage publishing working. I have had experience with Coverlet in the past, and it can generate coverage reports in various formats, so I went about adding it to the project using the simple collector.
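    For reference, the simple collector is the coverlet.collector NuGet package, referenced from the test project; it is what makes --collect:"XPlat Code Coverage" available. A minimal sketch, with a placeholder version (pin whatever is current):

    <ItemGroup>
      <!-- Coverlet's VSTest data collector; placeholder version -->
      <PackageReference Include="coverlet.collector" Version="x.y.z" PrivateAssets="all" />
    </ItemGroup>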

    Within my build pipeline, I added the following (if you are referencing the snippet above, this is in place of the comment):

    - ${{ if eq(parameters.execute_tests, true) }}:
        - task: DotNetCoreCLI@2
          displayName: Test (dotnet test)
          inputs:
            command: test
            projects: '**/*tests/*.csproj'
            arguments: '--no-restore --configuration ${{ parameters.BUILD_CONFIGURATION }} --collect:"XPlat Code Coverage"'
        

    Out of the box, this command worked well: tests were executed, and both test results and coverage information were published to Azure DevOps. As it turns out, the DotNetCoreCLI@2 task defaults publishTestResults to true.

    While the tests ran, the coverage was not published to SonarQube. I had hoped that the SonarQube extension for Azure DevOps would pick this up, but, alas, this was not the case.

    Coverlet, DevOps, and SonarQube… oh my.

    As it turns out, you have to tell SonarQube explicitly where to find the coverage report. And, while the SonarQube documentation is pretty good at describing how to report coverage from the CLI, the Azure DevOps integration documentation does not spell out how to accomplish this, at least not outright. Also, while Azure DevOps understands Cobertura coverage reports, SonarQube wants OpenCover reports.

    I had a few tasks ahead of me:

    1. Generate my coverage reports in two formats.
    2. Pin down a specific location for the coverage reports so I could pass it to SonarQube.
    3. Tell the SonarQube analyzer where to find the OpenCover reports.

    Generating Multiple Coverage Reports

    As mentioned, I am using Coverlet to collect code coverage. Coverlet supports RunSettings files, which help standardize various settings within the test project; this also allowed me to generate two coverage reports in different formats. I created a coverlet.runsettings file in my test project’s directory and added this content:

    <?xml version="1.0" encoding="utf-8" ?>
    <RunSettings>
      <DataCollectionRunSettings>
        <DataCollectors>
          <DataCollector friendlyName="XPlat code coverage">
            <Configuration>
              <Format>opencover,cobertura</Format>
            </Configuration>
          </DataCollector>
        </DataCollectors>
      </DataCollectionRunSettings>
    </RunSettings>

    To make it easy on my build and test pipeline, I added the following property to my test project’s csproj file:

    <PropertyGroup>
      <RunSettingsFilePath>$(MSBuildProjectDirectory)\coverlet.runsettings</RunSettingsFilePath>
    </PropertyGroup>

    There are a number of Coverlet settings that can be set in the runsettings file; the full list can be found in Coverlet’s VSTest integration documentation.
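    If you would rather not touch the csproj, the runsettings file can instead be passed straight to dotnet test via --settings. A minimal sketch of the test task with that approach; the file path here is an assumption:

      - task: DotNetCoreCLI@2
        displayName: Test (dotnet test)
        inputs:
          command: test
          projects: '**/*tests/*.csproj'
          # --settings points dotnet test at the runsettings file explicitly
          arguments: '--no-restore --configuration ${{ parameters.BUILD_CONFIGURATION }} --collect:"XPlat Code Coverage" --settings "$(Build.SourcesDirectory)/tests/coverlet.runsettings"'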

    Getting Test Result Files

    As mentioned above, the DotNetCoreCLI@2 task defaults publishTestResults to true. That setting adds arguments to the test command to enable trx logging and to set a results directory, which means I am not able to specify a results directory of my own.

    Even specifying the directory myself does not fully solve the problem: running the tests with coverage and trx logging generates a trx file named after the username, computer name, and timestamp, AND two sets of coverage reports: one set stored under a folder with that same username/computer name/timestamp name, and the other under a random GUID.

    To ensure I only picked up one set of coverage reports, and that Azure DevOps didn’t complain, I updated my test execution to look like this:

      - ${{ if eq(parameters.execute_tests, true) }}:
        - task: DotNetCoreCLI@2
          displayName: Test (dotnet test)
          inputs:
            command: test
            projects: '**/*tests/*.csproj'
            publishTestResults: false
            arguments: '--no-restore --configuration ${{ parameters.BUILD_CONFIGURATION }} --collect:"XPlat Code Coverage" --logger trx --results-directory "$(Agent.TempDirectory)"'
        
        - pwsh: |
            Push-Location $(Agent.TempDirectory)
            # Create a predictable folder to collect one set of coverage reports in
            $resultFiles = mkdir "ResultFiles"

            # The trx file shares its name with the folder holding one set of coverage reports
            $trxFile = Get-ChildItem *.trx
            $trxFileName = [System.IO.Path]::GetFileNameWithoutExtension($trxFile)

            Push-Location $trxFileName
            $coverageFiles = Get-ChildItem -Recurse -Filter coverage.*.xml
            foreach ($coverageFile in $coverageFiles)
            {
              Copy-Item $coverageFile.FullName $resultFiles.FullName
            }
            Pop-Location
          displayName: Copy Test Files
    
        - task: PublishTestResults@2
          inputs:
            testResultsFormat: 'VSTest' # the dotnet test trx logger produces VSTest-format results
            testResultsFiles: '$(Agent.TempDirectory)/*.trx'
    
        - task: PublishCodeCoverageResults@1
          condition: true # always try publish coverage results, even if unit tests fail
          inputs:
            codeCoverageTool: 'cobertura' # Options: cobertura, JaCoCo
            summaryFileLocation: '$(Agent.TempDirectory)/ResultFiles/**/coverage.cobertura.xml'

    I ran the dotnet test command with custom arguments to log to trx and to set my own results directory. The PowerShell script uses the name of the trx file to find the coverage files and copies them to a ResultFiles folder. Then I added tasks to publish the test results and code coverage results to the Azure DevOps pipeline.

    Pushing coverage results to SonarQube

    I admittedly spent a lot of time chasing down what, in reality, was a very simple change:

      - ${{ if eq(parameters.execute_sonar, true) }}:
        # Prepare Analysis Configuration task
        - task: SonarQubePrepare@5
          inputs:
            SonarQube: ${{ parameters.sonar_endpoint_name }}
            scannerMode: 'MSBuild'
            projectKey: ${{ parameters.sonar_project_key }}
            extraProperties: |
              sonar.cs.opencover.reportsPaths="$(Agent.TempDirectory)/ResultFiles/coverage.opencover.xml"
    
    ### Build and test here
    
      - ${{ if eq(parameters.execute_sonar, true) }}:
        - powershell: |
            # Community Edition rejects sonar.branch.name; strip it from the scanner params
            $params = "$env:SONARQUBE_SCANNER_PARAMS" -replace '"sonar.branch.name":"[\w,/,-]*"\,?'
            Write-Host "##vso[task.setvariable variable=SONARQUBE_SCANNER_PARAMS]$params"
          displayName: Strip sonar.branch.name from scanner params
        # Run Code Analysis task
        - task: SonarQubeAnalyze@5
        # Publish Quality Gate Result task
        - task: SonarQubePublish@5
          inputs:
            pollingTimeoutSec: '300'
    

    That’s literally it. I experimented with a lot of different settings, but, in the end, all it took was setting sonar.cs.opencover.reportsPaths in the extraProperties input of the SonarQubePrepare task.

    SonarQube Success!

    Sample SonarQube Report

    In the small project that I tested, I was able to get analysis and code coverage published to my SonarQube instance. Unfortunately, this means that I now have technical debt to fix and unit tests to write in order to improve my code coverage, but, overall, this was a very successful venture.

  • Tech Tips – Adding Linting to C# Projects

    Among the JavaScript/TypeScript community, ESLint and Prettier are very popular ways to enforce standards and formatting within your code. In trying to find similar functionality for C#, I did not find anything as ubiquitous as ESLint/Prettier, but there are some front-runners.

    Roslyn Analyzers and Dotnet Format

    John Reilly has a great post on enabling Roslyn Analyzers in your .NET applications. He also posted some instructions on using the dotnet format tool as a “Prettier for C#” tool.

    I will not bore you by re-hashing his posts, but following them allowed me to apply some basic formatting and linting rules to my projects. Additionally, the Roslyn Analyzers can be configured to generate build warnings and errors, so any build worth its salt (one that fails on warnings) will stay free of undesirable code.
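    As a rough sketch of what that ends up looking like in a csproj (these are standard .NET SDK properties, though the exact combination here is my assumption):

    <PropertyGroup>
      <!-- Turn on the built-in .NET code-quality analyzers -->
      <EnableNETAnalyzers>true</EnableNETAnalyzers>
      <AnalysisLevel>latest</AnalysisLevel>
      <!-- Apply .editorconfig code-style rules during build -->
      <EnforceCodeStyleInBuild>true</EnforceCodeStyleInBuild>
      <!-- The "fail on warnings" part of a build worth its salt -->
      <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
    </PropertyGroup>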

    SonarLint

    I was not really content to stop there, and a quick Google search led me to an interesting article around linting options for C#. One of those was SonarLint. While SonarLint bills itself as an IDE plugin, it has a Roslyn Analyzer package (SonarAnalyzer.CSharp) that can be added and configured in a similar fashion to the built-in Roslyn Analyzers.

    Following the instructions in the article, I installed SonarAnalyzer.CSharp and configured it alongside the base Roslyn Analyzers. It produced a few more warnings, particularly around Sonar best practices that go beyond what the Microsoft analyzers enforce.
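    Installing it amounts to another package reference in each project you want analyzed; a minimal sketch, with a placeholder version as before:

    <ItemGroup>
      <!-- Sonar's Roslyn analyzers run at build time alongside the Microsoft ones -->
      <PackageReference Include="SonarAnalyzer.CSharp" Version="x.y.z" PrivateAssets="all" />
    </ItemGroup>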

    SonarQube, my old friend

    Getting into SonarLint brought me back to SonarQube. What seems like forever ago, but really was only a few years, SonarQube was something of a go-to tool in my position. We had hoped to gather a portfolio-wide view of our bugs, vulnerabilities, and code smells. For one reason or another, we abandoned that particular tool set.

    After putting SonarLint in place, I was interested in jumping back in, at least in my home lab, to see what kind of information I could get out of Sonar. I found the Kubernetes instructions and got to work setting up a quick instance in my production cluster, alongside my ProGet instance.

    Once installed, I have to say, the application’s user experience has improved considerably. Tying into my Azure DevOps instance was quick and easy, with very good in-application tutorials for that configuration. I set up a project based on the pipeline for my test application, made my pipeline changes, and waited for results…

    Failed! I kept getting errors about not being allowed to set the branch name in the Community Edition. That is fair, and for my projects I only really need analysis on the main branch, so I set up analysis to happen only on builds of main. Failed again!

    There is a known issue around this, but thanks to the SonarSource community, I found a workaround for my pipeline: the PowerShell step, shown in the coverage post above, that strips sonar.branch.name from SONARQUBE_SCANNER_PARAMS. With that in place I had my code analysis, but, well, what do I do with it? Well, I can add quality gates to fail builds on missing code coverage, tweak my rule sets, and get a “portfolio wide” view of my private projects.

    Setting the Standard

    For any open source C# projects, simply building the linting/formatting into the build/commit process might be enough. If project maintainers are so inclined, they can add their projects to SonarCloud and get the benefits of SonarQube (including adding quality gates).

    For enterprise customers, the move to a paid tier depends on how much visibility you want into your code base. Sonar can be an expensive endeavor, but it provides a lot of quality and tech debt tracking that you may find useful. My suggestion? Start with a trial or the Community Edition, and see if you like it before you start requesting budget.

    Either way, setting standards for formatting and analysis on your C# projects makes contributions across teams much easier and safer. I suggest you try it!