Tag: git

  • Supporting a GitHub Release Flow with Azure DevOps Builds

    It has been a busy few months, and with the weather changing, I have a little more time in front of the computer for hobby work. Some of my public projects were in need of a few package updates, so I started down that road. Most of the updates were pretty simple: a few package updates and some Azure DevOps step template updates and I was ready to go. However, I had been delaying my upgrade to GitVersion 6, and in taking that leap, I changed my deployment process slightly.

    Original State

    My current development process supports three environments: test, stage, and production. Commits to feature/* branches are automatically deployed to the test environment, and any builds from main are first deployed to stage and then can be deployed to production.

    For me, this works: I am usually only working on one branch at a time, so publishing feature branches to the test environment works. When I am done with a branch, I merge it into main and get it deployed.

    New State

    As I have been working through some processes at work, it occurred to me that versions are about release, not necessarily commits. While commits can help us number releases, they shouldn’t be the driving force. GitVersion 6 and its new workflow defaults drive this home.

So my new state would be pretty similar: feature/* branches get deployed to the test environment automatically. The difference lies in main: I no longer want to release with every commit to main. I want to control releases through the use of tags (and GitHub releases, which generate tags).

    So I flipped over to GitVersion 6 and modified my GitVersion.yml file:

    workflow: GitHubFlow/v1
    merge-message-formats:
      pull-request: 'Merge pull request \#(?<PullRequestNumber>\d+) from'
    branches:
      feature:
        mode: ContinuousDelivery

    I modified my build pipeline to always build, but only trigger code release for feature/* branch builds and builds from a tag. I figured this would work fine, but Azure DevOps threw me a curve ball.
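As a sketch, that gating can be expressed with a stage condition in the pipeline. The stage names, trigger filters, and build step below are illustrative placeholders, not my exact pipeline:

```yaml
# Illustrative sketch only; stage names and job contents are placeholders.
trigger:
  branches:
    include: [main, feature/*]
  tags:
    include: ['*']

stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: dotnet build

# Release only runs for feature/* branch builds and tag builds
- stage: Release
  dependsOn: Build
  condition: >-
    and(succeeded(),
      or(
        startsWith(variables['Build.SourceBranch'], 'refs/heads/feature/'),
        startsWith(variables['Build.SourceBranch'], 'refs/tags/')))
```

The Build.SourceBranch variable is what makes this work: it is refs/heads/&lt;branch&gt; for branch builds and refs/tags/&lt;tag&gt; for tag builds.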

    Azure DevOps Checkouts

When you build from a tag, Azure DevOps checks that tag out directly, using the /tags/<tagname> branch reference. When I tried to run GitVersion on this, I got a weird version number: a build on tag 1.3.0 resulted in 1.3.1-tags-1-3-0.1.

I dug into GitVersion’s default configuration and noticed this corresponded with the unknown branch configuration. To work around Azure DevOps, I had to configure the tags/ branches:

    workflow: GitHubFlow/v1
    merge-message-formats:
      pull-request: 'Merge pull request \#(?<PullRequestNumber>\d+) from'
    branches:
      feature:
        mode: ContinuousDelivery
      tags:
        mode: ManualDeployment
        label: ''
        increment: Inherit
        prevent-increment:
          when-current-commit-tagged: true
        source-branches:
        - main
        track-merge-message: true
        regex: ^tags?[/-](?<BranchName>.+)
        is-main-branch: true

    This treats tags as main branches when calculating the version.

    Caveat Emptor

    This works if you ONLY tag your main branch. If you are in the habit of tagging other branches, this will not work for you. However, I only ever release from main branches, and I am in a fix-forward scenario, so this works for me. If you use release/* branches and need builds from there, you may need additional GitVersion configuration to get the correct version numbers to generate.
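If you do use release/* branches, GitVersion’s built-in release branch configuration is the place to start. Something along these lines might work, though I have not tested it against this workflow, so treat it strictly as a starting point:

```yaml
branches:
  release:
    mode: ManualDeployment
    label: rc
    increment: None
    source-branches: [main]
```

You would still need to verify how this interacts with the tags/ configuration above before relying on it.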

  • Using Git Hooks on Heterogeneous Repositories

    I have had great luck with using git hooks to perform tool executions before commits or pushes. Running a linter on staged changes before the code is committed and verifying that tests run before the code is pushed makes it easier for developers to write clean code.

    Doing this with heterogeneous repositories, or repos which contain projects of different tech stacks, can be a bit daunting. The tools you want for one repository aren’t the tools you want for another.

    How to “Hook”?

    Hooks can be created directly in your repository following Git’s instructions. However, these scripts are seldom cross-OS compatible, so running your script will need some “help” in terms of compatibility. Additionally, the scripts themselves can be harder to find depending on your environment. VS Code, for example, hides the .git folder by default.
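For illustration, a hand-rolled pre-commit hook might look like the following. This is a minimal sketch, and the dotnet format usage assumes a formatter configuration already exists in the repository:

```sh
#!/bin/sh
# .git/hooks/pre-commit (must be marked executable: chmod +x .git/hooks/pre-commit)
# Minimal illustrative hook: verify formatting of staged C# files before committing.
staged=$(git diff --cached --name-only --diff-filter=ACM -- '*.cs')
if [ -n "$staged" ]; then
  # --verify-no-changes exits non-zero if formatting changes would be made
  dotnet format --verify-no-changes --include $staged || exit 1
fi
```

Even this small script illustrates the problem: it assumes a POSIX shell and that dotnet is on the PATH, neither of which is guaranteed across every developer machine.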

    Having used NPM in the past, Husky has always been at the forefront of my mind when it comes to tooling around Git hooks. It helps by providing some cross-platform compatibility and easier visibility, as all scripts are in the .husky folder in your repository. However, it requires some things that a pure .Net developer may not have (like NPM or some other package manager).

    In my current position, though, our front ends rely on either Angular or React Native, so the chance that our developers have NPM installed is 100%. With that in mind, I put some automated linting and building into our projects.

    Linting Different Projects

    For this article, assume I have a repository with the following outline:

    • docs/
      • General Markdown documentation
    • source/
      • frontend/ – .Net API project which hosts my SPA
      • ui/ – The SPA project (in my case, Angular)

    I like lint-staged as a tool to execute linting on staged files. Why only staged files? Generally, large projects are going to have a legacy of files with formatting issues. Going all in and formatting everything all at once may not be possible. But if you format as you make changes, eventually most everything should be formatted well.

    With the outline above, I want different tools to run based on which files need linting. For source/frontend, I want to use dotnet format, but for source/ui, I want to use ESLint and prettier.

    With lint-staged, you can configure individual folders using a configuration file. I was able to add a .lintstagedrc file in each folder and specify the appropriate linter for that folder. For the .Net project:

    {
        "*.cs": "dotnet format --include"
    }

    And for the Angular project:

    {
        "*": ["prettier", "eslint --fix"]
    }

    Also, since I do have some documentation files, I added a .lintstagedrc file at the repository root to run prettier on all my Markdown files.

    {
        "*.md": "prettier"
    }

    A Note on Settings

    Each linter has its own settings, so follow the instructions for whatever linter you may be running. Yes, I know, for the .Net project, I’m only running it on *.cs files. This may change in the future, but as of right now, I’m just getting to know what dotnet format does and how much I want to use it.

    Setting Up the Hooks

    The hooks are, in fact, very easy to configure: follow the instructions on getting started from Husky. The configured hooks for pre-commit and pre-push are below, respectively:

    npx lint-staged --relative
    dotnet build source/mySolution.sln

    The pre-commit hook utilizes lint-staged to execute the appropriate linter. The pre-push hook simply runs a build of the solution which, because of Microsoft’s .esproj project type, means I get an NPM build and a .Net build in the same step.
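For reference, the setup boils down to something like the following. The package names match Husky’s current getting-started docs, but the exact files the init command generates may differ by version:

```sh
# One-time setup (assumes an existing package.json at the repo root)
npm install --save-dev husky lint-staged
npx husky init    # creates the .husky/ folder and wires up the prepare script

# Replace the generated hooks with the commands above
echo "npx lint-staged --relative" > .husky/pre-commit
echo "dotnet build source/mySolution.sln" > .husky/pre-push
```

Because the prepare script runs on npm install, every developer who clones the repository and installs packages gets the hooks automatically.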

    What’s next?

    I will be updating the pre-push hook to include testing for both the Angular app and the .Net API. The goal is to provide our teams with a template to write their own tests, and have those be executed before they push their code. This level of automation will help our engineers produce cleaner code from the start, alleviating the need for massive cleanup efforts down the line.

  • Git, you are messing with my flow!

    The variety of “flows” for developing using Git makes choosing the right one for your team difficult. When you throw true continuous integration and delivery into that, and add a requirement for immutable build objects, well… you get a heaping mess.

    Infrastructure As Code

    Some of my recent work to help one of our teams has been creating a Terraform project and Azure DevOps pipeline based on a colleague’s work in standardizing Kubernetes cluster deployments in AKS. This work will eventually get its own post, but suffice to say that I have a repository which is a mix of Terraform files, Kubernetes manifests, Helm value files, and Azure DevOps pipeline definitions to execute them all.

    As I started looking into this, it occurred to me that there really is no clear way to manage this project. For example, do I use a single pipeline definition with stages for each environment (dev/stage/production)? This would mean the Git repo would have one and only one branch (main), and each stage would need some checks (manual or automatic) to ensure rollout is controlled.
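The single-pipeline option would look roughly like this sketch, with approvals configured on each Azure DevOps environment. The stage, environment, and step names here are illustrative, not the real project’s:

```yaml
# Illustrative only; stage, environment, and step names are placeholders.
trigger:
  branches:
    include: [main]

stages:
- stage: Dev
  jobs:
  - deployment: ApplyDev
    environment: dev           # approvals/checks live on the environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: terraform apply -auto-approve dev.tfplan

- stage: Stage
  dependsOn: Dev
  jobs:
  - deployment: ApplyStage
    environment: stage         # e.g. requires a manual approval check
    strategy:
      runOnce:
        deploy:
          steps:
          - script: terraform apply -auto-approve stage.tfplan
```

The appeal is that the rollout controls live in Azure DevOps environment checks rather than in branch structure, which is why this option pairs with a single main branch.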

    Alternatively, I can have an Azure Pipeline for each environment. This would mean that each pipeline could trigger on its own branch. However, it also means that no standard Git Flow seems to work well with this.

    Code Flows

    Quite separately, as the team dove into creating new repositories, the question again came up around branching strategy and subsequent environment strategies for CI/CD. I am a vocal proponent of immutable build objects, but how each team chooses to get there is up to them.

    In some MS Teams channel discussions, we found pros and cons to nearly all of our current methods, and the team seems stuck on the best way to build and develop.

    The Internet is not helping

    Although it is a few years old, Scott Shipp’s War of the Git Flows article highlights the ongoing “flow wars.” One of the problems with Git is the ease with which all of these variations can be implemented. I am not blaming Git, per se, but because it is easy for nearly anyone to suggest a branching strategy, things get muddled quickly.

    What to do?

    Unfortunately, as with many things in software, there is no one right answer. The branch strategy you use depends on the requirements of not just the team, but the individual repository’s purpose. Additional requirements may come from the desired testing, integration, and deployment process.

    With that in mind, I am going to make two slightly radical suggestions:

    1. Select a flow that is appropriate for the REPOSITORY, not your team.
    2. Start backwards. Work on identifying the requirements for your deployment process first. Answer questions like these:
    • How will artifacts, once created, be tested?
    • Will artifacts progress through lower environments?
    • Do you have to support old releases?
    • Where (what branch) can release candidate artifacts come from? Or, what code will ultimately be applied to production?

    That last one is vital: defining on paper which branches will either be applied directly to production (in the case of Infrastructure as Code) or generate artifacts that can end up in production (in the case of application builds) will help outline the requirements for the project’s branching strategy.

    Once you have identified that, the next step is defining the team’s requirements WITHIN those constraints. In other words, try not to have the team hammer a square peg into a round hole. They have a branch or two that will generate release code; how they get their code into that branch should be as quick and direct as possible while supporting the necessary collaboration.

    What’s next

    What does that mean for me? Well, for the infrastructure project, I am leaning towards a single pipeline, but I need to talk to the Infrastructure team to make sure they agree. As for the software team, I am going to encourage them to apply the process above for each repository in the project.