Search code examples
azure-devopsazure-pipelinesazure-pipelines-yaml

Is there any way to create reusable and scalable build and deployment pipeline(s) for similar services in an Azure DevOps environment?


and apoligies in advance for not being able to keep this any shorter.
I'm working with my team on a relatively large structure of related services (/ microservices) all hosted on Docker containers, working in a tandem. As our work has progressed, progressively more services have been added, and old legacy systems have even been rebuilt to fit our newer standards, making more of our internal IT structure transparent to people working on our development team.

My biggest issue in this has been how much identical new boilerplate stuff has to be added every time a new service is introduced. Of course this is a design problem, and something we can deal with in a variety of ways on different key points in our design. But my biggest issue right now is that i can't work out an elegant solution for build and deployment pipelines.

Our current design uses standard YAML pipelines for Builds, and the Devops "Releases" pipeline system for deployments. The latter unfortunately doesn't seem to be much supported anymore, and has a bunch of annoying caveats. The biggest being that there is no way to even use a shared template between them. Instead, every pipeline needs to be maintained individually, even if they are nearly identical.

I've succesfully converted these into a YAML pipeline, that can reuse the same template file between every deployment pipeline. But the problem still exists, that every new service added necessitates the setup of various tedious things across both our code repository AND the DevOps environment itself. And the more things you need to set up in the same way, the bigger the risk that some things will not be maintained properly, even if most of that is now handled by reused templates.

I would really like it to work in one of three ways, but attempting each I run into what I perceive as limitations in DevOps:

  1. A single BUILD pipeline shared between services, and a single DEPLOYMENT pipeline
    This would be the most elegant solution because it would be able to scale completely seamlessly. Everything unique about each of our services is defined in our docker-compose file, so essentially the build pipeline needs to just be fed a couple of parameters defining which service to build, and it will produce the correct artifacts for the deployment pipeline to use (using the build pipeline as a Resource).
    This works perfectly with manually triggered builds, but the issue I run into here is automatically triggered ones. Since a build of a specific service needs to happen based on code changes pushed to our main or release branches, all paths causing a trigger need to be added. But as far as I can tell, the pipeline has no way of identifying which changes caused the trigger.
    Ideally I'd need to be able to define multiple triggers (some of which match overlapping paths, too), each triggering individual runs with different parameter values set. But I have not been able to identify any way to do this. DevOps does allow defining triggers externally, which sounds useful in this case, but they have the same limitation. Same problem also exists with Pull Requests, which need to trigger test builds of any service affected by the changes.

  2. One combined CI/CD pipeline per service with multiple inter-dependent stages
    This looks really neat on paper, and even displays a nice visual for the user. You got one stage for building the artifacts, and that then faciliates three parallel stages for each of our currently online environments (Dev, Test, and Production), with continuous deployment happening automatically on Dev, and the others being blocked either by parameters or a deployment gate in the pipeline.
    The issue with this design is that a pipeline "Run" is designed to execute exactly once, as part of a standard CI/CD process. That means a build is created and it's either deployed or not. Ideally we want to be able to take an old/pre-existing build (or specific build from a feature branch, etc.) and deploy at will, in the same manner you would with the Release Pipelines in DevOps (where a build triggers a release, and a deployment can be executed individually from the release, as many times as you want).
    In other words, our typical approach demands that deployments are separated from builds. While we could always just create a new build from a specific commit, that would mean massive useless overhead. Our build pipelines take considerably more time than our deployments.

  3. One reused deployment pipeline across every build pipeline
    Given each build pipeline requires its own unique triggers and its own unique parameter(s) which can just be defined as a variable in a pipeline, I think that still excuses making one build pipeline for each available service, featuring just the things that separate them, such as trigger paths. Since we have no rules that services need to be 100% similar, this would also make it easier to support potential variations between them.
    Deployment, however, remains basically identical between all of them. If we separate build and deployment pipelines, a build pipeline can be defined in the deployment pipeline as a Resource similar to Solution #1, allowing deployment to download the correct artifacts and figure out which online resources need to be updated.
    But a Pipeline Resource in a YAML pipeline for some reason cannot be defined dynamically, and parameters or variables cannot be used in the "source" property of one. It's hard locked into the one pipeline defined in the YAML source.
    If it were possible to define the source in an API call that runs the pipeline, that would solve this issue, but the only value that can be set is which run/version to use from the resource pipeline.

So in "short" - What are we doing wrong in our work process that causes it to not be supported by the ways DevOps pipelines operate? I feel like it's really close, but we're just lacking either one thing or another to make this work.
Or maybe there is some "magic" feature that I'm not aware of. Or is there anything we could do to actually get what we need, that I didn't think of? I would ideally like a solution that's not too elaborate.
For example, one way to go about this would of course be automating a lot of the process that creates new pipelines, but my personal belief is that the more you have of that kind of non-standard automation, the less transperent your project becomes. If possible I'd much rather rely on standard recognizable approaches and best practice, rather than trying to make workarounds.

* Note: I realise it's possible to make one workaround using tagged container images in our online container repository instead of build artifacts. But due to other design choices, our build pipelines DO need to create artifacts, and our container image is stored alongside those artifacts to ensure that every result of a build always matches the same code version.


Solution

  • One combined CI/CD pipeline per service with multiple inter-dependent stages is a good starting point in my opinion:

    • You can easily customize triggers for each service
    • You can define a base pipeline with a standard structure (stages, etc) for all services
    • You can rerun a single stage (see below)

    Example - dedicated pipeline for each service

    Consider a pipeline with the following stages:

    Pipeline stages

    You can implement it as follows:

    Pipeline for service foo:

    # /pipelines/foo-pipeline.yaml
    
    name: foo_$(Date:yyMMdd)$(Rev:rr)
    
    # Add specific triggers, resources, etc for foo service here
    
    extends:
      template: /pipelines/base-pipeline.yaml
      parameters:
        serviceName: foo # <----------- service name is hard-coded
    

    Pipeline for service bar:

    # /pipelines/bar-pipeline.yaml
    
    name: bar_$(Date:yyMMdd)$(Rev:rr)
    
    # Add specific triggers, resources, etc for bar service here
    
    extends:
      template: /pipelines/base-pipeline.yaml
      parameters:
        serviceName: bar # <----------- service name is hard-coded
    

    Base pipeline:

    # /pipelines/base-pipeline.yaml
    
    parameters:
      - name: serviceName
        displayName: Name of the service to build and deploy
        type: string
    
      # As an alternative, and in case the stages are always the same for all services,
      # consider removing this parameter and hard-code the stages in the pipeline.
      - name: environments
        displayName: List of environments to deploy to
        type: object
        default: 
          - name: dev
            dependsOn: build
          - name: qa
            dependsOn: dev
          - name: prod
            dependsOn: qa
    
    variables:
      # Common variables that are used by all services/environments
      - template: /pipelines/variables/common-variables.yaml
    
    # add common resources (used by all services) here
    
    stages:
      - stage: Build
        dependsOn: []
        jobs:
          - job: Build
            displayName: Build
            steps:
              - script: echo Building the service
                displayName: 'Build the service'
    
      - ${{ each environment in parameters.environments }}:
        - stage: ${{ environment.name }}
          displayName: Deploy ${{ environment.name }}
          dependsOn: ${{ environment.dependsOn }}
          jobs:
            - deployment: Deploy${{ environment.name }}
              displayName: Build and Deploy to ${{ environment.name }}
              environment: ${{ environment.name }} # Azure DevOps environment. Use one per environment or service/environment
              variables:
                # Get the variables for the specific service and environment
                - template: /pipelines/variables/${{ parameters.serviceName }}/${{ environment.name }}-variables.yaml
              strategy:
                runOnce:
                  deploy:
                    steps:
                      - script: echo Deploying to ${{ environment.name }} Environment
                        displayName: 'Deploy to ${{ environment.name }}'
    

    Note:

    • Variable templates are being referenced dynamically for each service/environment, assuming that variables are organized by component and environment. You can dynamically reference stage, job or steps templates based on a parameter (like the service name) as well

    Reruning a specific stage

    Click on the top-right button to expand the stage and then choose one of the below options:

    Rerun specific stage