Software delivery pipeline - Considerations for infrastructure improvements

Sunday, January 27, 2019 | Posted in Delivering software DevOps Immutable infrastructure Infrastructure as code Software delivery pipeline

In one of the posts from a while ago we discussed what a software development pipeline is and what its most important characteristics are. Given that the pipeline is used during the large majority of the development, test and release process it is fair to say that for a software company the build and deployment pipeline infrastructure should be considered critical infrastructure, because without it the development team will be limited in their ability to perform their tasks. Note that at no stage should any specific tool, including the pipeline, be a single point of failure. More on how to reduce the dependency on CI systems and the pipeline will follow in another post.

Just like any other piece of infrastructure the development pipeline will need to be updated and improved on a regular basis, either to fix bugs, patch security issues or to add new features that will make the development team more productive. Because the pipeline falls in the critical infrastructure category it is important to keep disturbances to a minimum while performing these changes. There are two main parts to providing (nearly) continuous service while still delivering improvements and updates. The first is to ensure that the changes are tracked and tested properly; the second is to deploy the exact changes that were tested to the production system in a way that causes no, or minimal, interruptions. A sensible approach to the first part is to follow a solid software development process so that the changes are controlled, verified and monitored. This can be achieved by creating infrastructure resources completely from information stored in source control, i.e. using infrastructure-as-code, making the resources as immutable as possible, and performing automated tests on these resources after deploying them to a test environment using the same deployment process that will be used for the production environment.

Using this approach should allow the creation of resources that are thoroughly tested and can be deployed in a sensible fashion. It should be noted that no amount of automated testing will guarantee that the new resources are free of issues, so it will always be important to use deployment techniques that allow, for example, quick roll-backs or staged roll-outs. Additionally, deployed resources should be carefully monitored so that issues are discovered quickly.

To achieve the goal of being able to deploy updates and improvements to the development infrastructure the following steps can be taken:

  • Using infrastructure-as-code to create a new resource image each time a resource needs to be updated. Trying to create resources by hand drastically reduces the ease with which they can be built consistently. Resources that are deployed into an environment should never be changed. If bugs need to be fixed or new features need to be added then a new version of the resource image should be created, tested and deployed. That way changes can be tested before deployment and the configuration of the deployed resources is known.
  • Resources should be placed on virtual machines or in (Docker) containers. Both technologies provide an easy way to create one or more instances of the resource, which is required in order to test or scale a service. The general idea is to have one resource per VM / container instance. One resource may contain multiple services or daemons but it always serves a single goal. Note that in some cases people will state that you should only use containers and not VMs, but there are still cases where a VM works better, e.g. executing software builds or running a service that stores large quantities of data. Additionally, if all or a large part of the infrastructure is running on VMs then using VMs might make more sense. In all cases the correct approach, containers or VMs, is the one that makes sense for the environment the resources will be deployed into.
  • Some way of getting configurations into the resource. Some configurations can be hard-coded into the resource if they are never expected to change. The drawback of encoding a configuration into a resource is that this configuration cannot be changed when the resource is used in different environments, e.g. a test environment and a production environment. Configurations which differ between environments should therefore not be encoded in the resource, since that may prevent the resource from being deployed in a test environment for testing. Provisioning a resource requires applying all the environment-specific information to it, which is a difficult problem to solve, especially for the initial set of configurations, e.g. the configurations which determine where to get the remaining configurations. Several options are:
    • For VMs you can use DVD / ISO files that are linked on first start-up of the resource.
    • Systems like consul-template can generate configurations from a distributed key-value store.
    • Resources can pull their own configurations from a shared store.
    • For containers, environment variables are often used. These might be sufficient, but note that they are not secure, either inside or outside the container.
  • Configurations that should be provided when a resource is provisioned should be stored in source control, just like the resource code is, in order to be able to automate the verification and delivery of the configuration values.
    • The infrastructure should have its own shared storage for configurations so that the ‘build’ process can push to the shared storage and configurations are distributed from there. That ensures that the build process doesn't need to know exactly where to deliver the configurations (which can change as the infrastructure changes). One option is to use SQL / no-SQL type storage (e.g. Elasticsearch), another is to use a system like Consul which has a distributed key-value store.
  • Automatic testing of a resource once it is deployed into an environment. At the very least smoke tests should be run automatically when the resource is deployed to a test environment.
  • Automatic deployments when a new resource becomes available or is approved for an environment; at the very least to the test environment, but ideally to all environments. Using the same deployment system for all environments is highly recommended because this allows testing the deployment process as well as the resource.
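
The shared-store option above can be sketched in a few lines of Python. The store layout and key names here are illustrative assumptions, not part of any particular tool:

```python
# Minimal sketch of environment-specific configuration lookup from a
# shared key-value store. The convention of prefixing keys with the
# environment name is an illustrative assumption.

def resolve_config(store: dict, environment: str, key: str) -> str:
    """Return the value for `key`, preferring an environment-specific
    entry over a shared default."""
    for candidate in (f"{environment}/{key}", f"shared/{key}"):
        if candidate in store:
            return store[candidate]
    raise KeyError(f"no configuration found for {key!r}")

store = {
    "shared/service-port": "8080",
    "test/database-host": "db.test.internal",
    "production/database-host": "db.prod.internal",
}

# The same resource image resolves different values per environment,
# so it can be tested in one environment and promoted to another.
assert resolve_config(store, "test", "database-host") == "db.test.internal"
assert resolve_config(store, "production", "service-port") == "8080"
```

Because the lookup falls back to a shared entry, only the values that genuinely differ between environments need an environment-specific key.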

A general workflow for creating a new resource or updating an existing one could be:

  • Update the code for the resource. This code can consist of Docker files, Chef or Puppet code, scripts, etc. The most important thing is that the files are stored in source control and a sensible source control strategy is used.
  • Once the changes are made a new resource can be created from the code.
    • It is sensible to validate the sources using one or more suitable linters. Especially for infrastructure resources it is sensible to validate the sources before trying to create the resource, because building a resource can take a long time. Any errors found earlier in the process reduce the cycle time.
    • Execute unit tests, e.g. ChefSpec, against the sources. Again, building a resource can take a long time so validation before trying to create the resource will reduce the cycle time.
    • Actually create the new resource. For Docker containers this can be done from a Dockerfile. For a VM this can be done with Packer. Building a VM will take longer than building a Docker container in most cases. Note that building resources will in general take longer than building applications. It is sensible to use the build / deployment pipeline itself to build the resources that make up the build / deployment pipeline; by using the pipeline it is possible to create the artefacts for the services and then use these artefacts to create the resource.
  • Deploy the resource to a (small) test environment and execute the tests against the newly created resource.
  • Once the tests have passed the newly made image can be ‘promoted’, i.e. approved for use in the production environment.
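
The workflow above orders the cheap checks before the slow image build, and only promotes an image once the deployed tests pass. A minimal sketch of that ordering, with all step names and the simulated failure as placeholders:

```python
# Sketch of the resource-image workflow: cheap checks (lint, unit
# tests) run first so failures are caught before the slow image
# build, and promotion only happens after deployed tests pass.

def run_workflow(steps):
    """Run steps in order; stop at the first failure and report how
    far the workflow got."""
    completed = []
    for name, step in steps:
        if not step():
            return completed, name  # the step that failed
        completed.append(name)
    return completed, None

steps = [
    ("lint", lambda: True),
    ("unit-tests", lambda: True),
    ("build-image", lambda: True),
    ("deploy-to-test", lambda: True),
    ("smoke-tests", lambda: False),  # simulate a failing smoke test
    ("promote", lambda: True),
]

completed, failed = run_workflow(steps)
assert failed == "smoke-tests"
assert "promote" not in completed  # a failing image is never promoted
```

In a real pipeline each lambda would be a call into the build system or deployment tooling; the point is the ordering and the early exit.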

Using the approaches mentioned above it is possible to improve the development pipeline without causing unnecessary disturbances for the development team.

Software development pipeline - In the build system or not

Monday, December 3, 2018 | Posted in Delivering software DevOps Pipeline design Software development pipeline

Over the last few years the use of build pipelines has been gaining traction, backed by the ever growing use of Continuous Integration (CI) and Continuous Delivery and Deployment (CD) processes. By using a build pipeline the development team gets benefits like being able to execute parts of the build, test, release and deployment processes in parallel, being able to restart the process part way through in case of environmental issues, and vastly improved feedback cycles which improve the velocity at which features can be delivered to the customer.

Most modern build systems have the ability to create build pipelines in one form or another, e.g. VSTS / Azure DevOps builds, Jenkins pipelines, GitLab, BitBucket and TeamCity. With these capabilities built into the build system it is easy for developers to quickly create a new pipeline from scratch. While this is quick and easy, the pipeline for a product is often created by the development team without considering whether this is the best way to achieve their goal, which is to deliver their product faster and with higher quality. Before using the built-in pipeline capability of the build system the second question a development team should ask is: when should one use this ability and when should one not? Obviously the first question is whether a pipeline is needed at all, which is a question for another post.

The advantages of creating a pipeline in your build system are:

  • It is easy to quickly create pipelines. Either there is a point-and-click UI of some form or the pipeline is defined by a relatively simple configuration file. This means that a development team can configure a new build pipeline quickly when one is desired.
  • Pipelines created in a build system can often use multiple build executors or have a job move from one executor to another if different capabilities are required for a new step, for instance if different steps in the pipeline need different operating systems to be executed.
  • In many cases, but not all, the build system provides a way for humans to interact with a running pipeline, for instance to approve the continuation of the pipeline in case of deployments or to mark a manual test phase as passed or failed.
  • If the configuration of the pipeline is stored in a file it can generally be stored in a source control system, thus providing all the benefits of using a source control system. In these cases the build system can generally update the build configurations in response to a commit / push notification from the version control system, thus ensuring that the active build configuration is always up to date.
  • The development team has nearly complete control over the build configuration which ensures that it is easy for the development teams to have a pipeline that suits their needs.

Based on the advantages of having a pipeline in the build system it seems straightforward to say that having the pipeline in the build system is a good thing. However, as with all things, there are also drawbacks to having the pipeline in the build system.

  • Having the pipeline in the build system makes some assumptions that may not be correct in certain cases.

    • The first assumption is that the build system is the center of all the work being done, because the pipeline is controlled by the build system, thus requiring that all actions feed back into said build system. This however shouldn't be a given; after all, why would the build system be the core system and not the source control system or the issue tracker? In reality all of these systems are required to deliver high quality software, which means in most cases that none of them has enough knowledge by itself to make decisions about the complete state of the pipeline. By assuming that the build system is at the core of the pipeline, the knowledge of the pipeline workflow will end up being encoded in the build configurations and the build scripts. For simple pipelines this is a sensible thing to do, but as the pipeline gets more complex this approach will be sub-optimal at best and more likely detrimental, due to the complexity of providing all users with an overview of how the pipeline functions.
    • The second, but potentially more important, assumption is that the item the development teams care most about is the ‘build’ or ‘build job’. This is not the case most of the time, because a ‘build’ is just a way to create or alter an artefact, i.e. the package, container, installer etc. It is artefacts that people care about most, because artefacts are the carrying vehicle for the features and bug fixes that the customer cares about. From this perspective it makes sense to track artefacts instead of builds, because the artefact flows through the entire pipeline while builds are only part of the pipeline.
    • A third assumption is that every task can somehow be run through the build system, but this is not always the case, and even when it is possible it is not necessarily sensible. For instance builds and deploys are fundamentally different things: one should be repeatable (builds) and can simply be stopped on failure and restarted if necessary, while the other is often not exactly repeatable (because artefacts can only be moved to a location once, etc.) and should often not simply be stopped (but rolled back or not ‘committed’). Another example is long running tests, for which the results may be fed back into the build system if required, but that doesn't necessarily make sense.
  • If the build system is the center of the pipeline then the build system has to start storing persistent data about the state of the pipeline, with all the issues that come with this kind of data, for instance:

    • The data stored in the pipeline is valuable to the development team both now and in the future, when the development team needs to determine where an artefact came from. This means that the data potentially needs to be kept safe for much longer than build information is normally kept. To achieve this the standard data protection rules apply, for instance access controls and backups.
    • The information about the pipeline needs to be easily accessible and changeable, both by the build system and by systems external to it. It should be possible to add additional information, e.g. the versions / names of artefacts created by a build, the status of the artefact as it progresses through the pipeline, etc. All this information is important either during the pipeline process or after the artefacts have been delivered to the customer. Often build systems don't have this capability; they store just enough information to do what they need to do, and in general they are not database systems (and if they are, it is recommended that you don't tinker with them, and it is generally made difficult to append or add information).
    • Build systems work much better if they are immutable, i.e. created from standard components (e.g. controller and agents) with automatically generated build jobs (more about the reasons for both of these will follow in future posts). This allows a build system to be expanded or replaced really easily (cattle not pets, even for build systems). That is much harder if the build system is the core of your pipeline and stores all the data for it.
  • Having the pipeline in the build system in general provides more control for the development teams, which is a great benefit, but less control for the administrators. Because the pipeline gives the development teams all the abilities, there is in general less ability for the admins to guide things in the right direction or to block developers from doing things that they shouldn't be doing or have access to. While this may seem a benefit for the developers, no more annoying admins getting in the way, it is in fact a drawback, because it means that the developers take on the responsibility of administering some or all of the underlying build system. For example, in a Jenkins pipeline it is possible for developers to use all the credentials that Jenkins has access to, which might not be desirable for high-power credentials or credentials for highly restricted resources. Another example is that the selection of the build executor is done in the pipeline configuration; however, in some cases it may make sense to limit access to executors, since having a build that can migrate from node to node makes sense in some cases but it's not free. Further, the ease with which parallel steps can be created will lead to many parallel jobs. This might be great for one pipeline but isn't necessarily the best for the overall system: in some cases serializing the steps for a single pipeline can lead to greater overall throughput if there are many different jobs for many different teams.
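
The artefact-centric view described in the assumptions above can be illustrated with a minimal record that lives outside the build system and accumulates metadata as the artefact moves through the pipeline. The field names here are hypothetical:

```python
# Sketch of artefact-centric tracking: a record, stored outside the
# build system, follows one artefact through the pipeline and
# accumulates metadata at each stage. Field names are illustrative.

from dataclasses import dataclass, field

@dataclass
class ArtefactRecord:
    name: str
    version: str
    metadata: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def record_stage(self, stage: str, **data):
        """Append a pipeline stage to the artefact's history and
        merge any extra information into its metadata."""
        self.history.append(stage)
        self.metadata.update(data)

record = ArtefactRecord("web-service", "1.4.2")
record.record_stage("build", commit="abc123")
record.record_stage("test", tests_passed=True)
record.record_stage("deploy-test", environment="test")

# The build is only one stage; the record spans the whole pipeline.
assert record.history == ["build", "test", "deploy-test"]
assert record.metadata["commit"] == "abc123"
```

External systems can read or append to such records without touching the build system's internal storage, which is exactly the capability most build systems lack.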

Based on all the advantages and disadvantages listed here it may be difficult to decide whether a development team should use the pipeline in their build system. In general it will be sensible to use the pipeline capabilities that are built into your build system in cases where you either have a fairly simple pipeline that is easy to reason about, or where no external systems need to interact with the data in the pipeline.

Once the pipeline gets more complicated, external systems need access to the metadata describing the pipeline, or the pipeline gets stages that are incompatible with being executed by a build system, it will be time to migrate to a different approach to the build and deployment pipeline. In that case it is worth developing some custom software that tracks artefacts through the pipeline. This makes it possible to treat the pipeline system as the critical infrastructure that it is, with the appropriate separation of data and business rule processing, data security, and controlled access to the data for external systems.

Software development pipeline - Design flexibility

Wednesday, October 31, 2018 | Posted in Delivering software DevOps Pipeline design Software development pipeline

The fourth property to consider is flexibility, i.e. the ability of the pipeline to be able to be modified or adapted without requiring large changes to be made to the underlying pipeline code and services.

A pipeline should be flexible because the products being built, tested and deployed with that pipeline may require different workflows or processes in order for them to complete all the stages in the pipeline. For example, building and packaging a library will require a different approach than building, testing and deploying a cloud service. Additionally, the different stages in the pipeline will require different approaches, e.g. build steps will in general be executed by a build system returning the results in a synchronous way, while test steps might run on a different machine from the process that controls them, so those results might come back via an asynchronous route. Finally, flexibility in the pipeline also improves resilience, since in case of a disruption an adaptable or flexible pipeline will allow restoring services through alternate means.

Flexibility in a pipeline is achieved in the same way as in other software products: by using modular parts, standard inputs and outputs, and a carefully considered design. Some of the appropriate options are for instance:

  • Split the pipeline into stages that take standard inputs and deliver standard outputs. There might be many different types of inputs and outputs but they should be known and easily shared between processes and applications. There can be one or more stages, e.g. build, test and deploy, which are dependent on each other only through their inputs and outputs. This allows adding more stages if required.
  • Allow steps or stages in the pipeline to be started through a response to a standard notification. That allows each step to determine what information it needs to start execution. Additional information can be downloaded from the appropriate sources upon receiving a notification. This approach allows notifications to be generic while steps can still acquire the information they need to execute. Additionally having pipeline steps respond to notifications means that it is very easy to add new steps in the process because a new executor only has to be instantiated and connected to the message source, e.g. a distributed queue.
  • If a stage consists of multiple, dependent steps, then it should be easy to add and remove steps based on the requirements. In these cases it would generally be preferred that such a stage executes one or more scripts, as they are easier to extend than services. As with the stages, steps should ideally use well-known inputs and produce well-known outputs.
  • Inputs for stages and steps are for instance:
    • Source information, e.g. a commit ID
    • Artefacts, e.g. packages installers, zip files etc.
    • Meta data, additional information attached to a given output or input, e.g. build or test results
  • Outputs generated by stages and steps are for instance:
    • New or updated artefacts, e.g. packages, installers, container images
    • Meta data, e.g. build logs or test results attached to an artefact
    • Notifications indicating that a stage or step has completed

Flexibility of the workflow can be further improved by making sure that the artefacts generated in the pipeline are not created, tested and deployed in a single monolithic process, even if the end result should be a single artefact. In many cases artefacts can be assembled from smaller components. Using this approach improves the workflow for the development teams because smaller components can be created much more quickly, and in general assembly of a larger piece from components is quicker and more flexible than regeneration of the entire piece from scratch. In many cases only a few components will be recreated, which both saves time and allows much of the process to be executed in parallel.

The exact implementation of the pipeline determines how flexible and easy to extend it will be. Given that the use and implementation of pipelines vary quite a lot it is hard to provide detailed implementation guidance, however some standard suggestions are:

  • Keep the build part of the pipeline in scripts, given that scripts are, in general, easier to adapt. By pulling the scripts from a package, e.g. a NuGet or NPM package, it is quick and easy to update to a later version of these scripts. An additional benefit of keeping the process in scripts is that developers can execute the individual steps of the pipeline from their local machines. That allows them to ensure builds / tests work before pushing to the pipeline, and provides a means of building things if the pipeline is not available.
  • Any part of the process that cannot be done by a script, e.g. test systems, or that needs a service, e.g. certificate signing, which requires that the certificates are present on the current machine, something which might not be possible on every machine, should be provided by a service that is available to both the pipeline and the developers executing the scripts locally. For any services that should only be available to the build server, e.g. signing, the scripts should allow skipping the steps that need the service.
  • For stages that execute scripts, e.g. the build stage, jobs can be automatically generated from information stored in source control. This makes it easy to update the actions executed by these stages without requiring developers to perform the configuration manually.

As a final note, one should consider how the pipeline will be described. It is easier to reason about a pipeline if the entire description of that pipeline is stored in a single file, ideally in source control. However, as the pipeline evolves and more steps and stages are executed in parallel, it will become increasingly difficult to capture the entire pipeline in a single file. While harder to reason about, it is in the end simpler and more flexible to let the pipeline layout, i.e. the stages, steps and the order of these items, be determined by the executors that are available and listening for notifications. That way it is easy to change the layout of the pipeline.

And with that we have come to the end of this journey into the guiding principles of designing a build and release pipeline. There are of course many additions that can be made with regards to the general design process and even more additions for specific use cases. Those however will have to wait until another post.

Edits

  • December 3rd 2018: Fixed a typo in the post title

Software development pipeline - Design resilience

Tuesday, December 19, 2017 | Posted in Delivering software DevOps Pipeline design Software development pipeline

The third property to consider is resilience, which in this case means that the pipeline should be able to cope with expected and unexpected changes to the environment it executes in and uses.

David Woods defines four different types of ‘resilience’ in a paper in the journal Reliability Engineering & System Safety. One of these types is the generally well-known form of robustness, i.e. the ability to absorb perturbations or disturbances. In order to be robust against given disturbances one has to know in advance where the disturbances will come from; e.g. in the case of a development pipeline it might be expected that pipeline stages will fail and will pollute or damage part or all of the executor they were running on. Robustness in this case would be defined as the ability of the pipeline to handle this damage, for instance by repairing or replacing the executor. The other definitions of resilience are:

  • Rebound, the ability to recover from trauma. Achieving this requires capacity ahead of time, i.e. in order to recover from a disturbance one needs to be able to deploy capabilities and capacity that were available in excess before the issue occurred.
  • Graceful extensibility, the ability to extend adaptive capacity in the face of surprise. This is the ability to stretch resources and capabilities in the face of surprises.
  • Sustained adaptability, which is the ability to adapt and grow new capabilities in the face of unexpected issues. In general this definition applies more to systems / layered networks where the loss of sub-systems can be compensated for.

Whichever definition of resilience is used, in general the goal is to be able to recover from unexpected changes and return to the normal state, ideally with minimal intervention. An interesting side note is that returning to normal after major trauma can be deceiving, because the ‘normal’ experienced before the trauma will differ from the ‘normal’ experienced after the trauma, due to the lessons learned from the trauma and the permanent changes it caused.

Additionally it is not just the unexpected or traumatic changes that are interesting in the case of a development pipeline but also the expected ones, e.g. upgrades, maintenance etc., because in general it is important for the pipeline to continue functioning while those changes are happening.

For a development pipeline resilience can be approached on different levels. For instance the pipeline should be resilient against:

  • Changes in the environment, which range from small changes, e.g. additional tools being deployed, to big changes, e.g. migration of many of the services, and from expected, i.e. maintenance or planned upgrades, to unexpected ones
  • Changes in the inputs and the results of processing those inputs which may range from build and test errors to issues with executors
  • Invalid or incorrect configurations.

Once it is known what resilience actually means and what type of situations the pipeline is expected to be able to handle the next question is how the pipeline can handle these situations, both in terms of what the expected responses are and in terms of how the pipeline should be designed.

There are a myriad of simple steps that can be taken to provide a base level of resilience. None of these will guard against major trauma, but they will either prevent or smooth out many of the smaller issues that would otherwise cause the development team to lose faith in the pipeline outputs. Some examples of simple steps that can be taken to improve resilience in a development pipeline are:

  • For each pipeline step ensure that it is executed in a clean ‘workspace’, i.e. a directory or drive, that will only ever be used by that specific single step. This workspace should be ‘private’ to the specific pipeline step and no other processes should be allowed to execute in it. This prevents issues with unexpected changes to the file system. There are still cases where ‘unexpected’ changes to the file system can occur, for instance when running parallel executions within the same pipeline step in the same workspace; this type of behaviour should therefore be avoided as much as possible.
  • Do not depend on global, i.e. machine, container or network, state. Global state has a tendency to change in random ways at random times.
  • Avoid using sources which are external to the pipeline infrastructure, because these are prone to unexpected random changes. If a build step requires data from an external source then the external source should be mirrored, and the mirrors should be carefully controlled for their content. This should prevent issues with external packages and inputs changing or disappearing, e.g. left-pad.
  • If external sources are suitably mirrored inside the pipeline infrastructure then it is possible to remove the caches for these external sources on the executors. By pulling data in fresh from the local data store, cache pollution issues can be prevented.
  • Ensure that each resource is appropriately secured against undesirable access. This is especially true for the executor resources. It is important to note that pipeline steps are essentially random scripts from an unknown source, even if the scripts are pulled from internal sources, because the scripts will not be security-verified before being used. This means that the pipeline scripts should not be allowed to make any changes or obtain secrets that they shouldn't have access to.
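
The first step above, a private per-step workspace, can be sketched with the Python standard library; the step name is only used to label the directory:

```python
# Sketch of a per-step private workspace: each pipeline step gets a
# fresh directory that is removed afterwards, so no state leaks from
# one step to the next. Standard library only.

import os
import shutil
import tempfile
from contextlib import contextmanager

@contextmanager
def step_workspace(step_name: str):
    """Create an isolated directory for one pipeline step and clean
    it up when the step finishes, even if the step fails."""
    path = tempfile.mkdtemp(prefix=f"{step_name}-")
    try:
        yield path
    finally:
        shutil.rmtree(path, ignore_errors=True)

with step_workspace("build") as ws:
    marker = os.path.join(ws, "output.txt")
    with open(marker, "w") as handle:
        handle.write("built")
    assert os.path.exists(marker)  # the step sees only its own files

# After the step the workspace is gone; nothing leaks to later steps.
assert not os.path.exists(ws)
```

Real build systems implement the same idea with per-job directories or throwaway containers; the essential property is that the workspace exists only for the duration of one step.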

As mentioned, the aforementioned steps form a decent base for improving resilience and they are fairly easy to implement, hence they make a good first stage in improving the resilience of the development pipeline. Once these steps have been implemented, more complex steps can be taken to further improve the state of the development pipeline. These additional steps can be divided into items that help prevent issues, items that test and verify the current state, items that aid in recovery, and finally items, like logging and metrics, that help during post-mortems of failure cases.

One way to partially prevent trauma / outages is to ensure that all parts of the development pipeline are able to handle different error states, which can be achieved by building in extensive error handling capabilities, both for known cases, e.g. a service being offline, and general error handling for unexpected cases. For the tooling / script side of the pipeline this means for instance adding error handling structures nearly everywhere and providing the ability to retry actions. For the infrastructure side of the pipeline this could mean providing highly available services and ensuring that service delivery gracefully degrades if it can no longer be provided at the required standard.
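
The retry capability mentioned for the tooling / script side could look roughly like this; the retryable error type and the attempt count are illustrative choices:

```python
# Sketch of a bounded retry around an action against a flaky service.
# Real implementations would add a back-off delay between attempts;
# it is omitted here to keep the sketch short.

def with_retries(action, attempts: int = 3):
    """Run `action`, retrying on a known retryable failure; re-raise
    the last error if all attempts are exhausted."""
    last_error = None
    for _ in range(attempts):
        try:
            return action()
        except ConnectionError as err:  # a known, retryable failure
            last_error = err
    raise last_error

calls = {"count": 0}

def flaky_upload():
    """Simulated upload that fails twice and then succeeds."""
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("service temporarily offline")
    return "uploaded"

assert with_retries(flaky_upload) == "uploaded"
assert calls["count"] == 3  # two failures absorbed, third succeeded
```

Note that only failures known to be transient are retried; an unexpected exception still propagates, so genuinely broken steps fail loudly instead of looping.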

Even if every possible precaution is taken it is not possible to prevent all modes of failure; unexpected failures will occur no matter what the capabilities of the development pipeline are. This means that part of improving resilience is providing capabilities to recover from failures, to recognise that unexpected conditions exist and to notify the users and administrators of the situation. It should be noted that these capabilities may be much harder to implement due to the unpredictable nature of the issues that are being solved in these cases.

By exposing the system continuously to semi-controlled unexpected conditions it is possible to provide early and controlled feedback to the operators and administrators regarding the resilience of the development pipeline. One example of this is the chaos monkey approach which tests the resilience of a system by randomly taking down parts of the system. In a well designed system this should result in a response of the system in order to restore the now missing capabilities.
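The chaos monkey idea can be sketched in a few lines. The service names and the `stop_service` / `is_healthy` hooks below are purely hypothetical stand-ins for whatever infrastructure API a given pipeline exposes:

```python
import random


def chaos_round(services, stop_service, is_healthy):
    """Randomly stop one service and report whether the system recovered.

    `services` is a list of service identifiers; `stop_service` and
    `is_healthy` are hooks into a (hypothetical) infrastructure API.
    In a well designed system `is_healthy` should report True again,
    because redundant instances or automatic recovery take over.
    """
    victim = random.choice(services)
    stop_service(victim)
    return victim, is_healthy()
```

Running such a round regularly, in a controlled time window, gives operators early feedback on whether the recovery mechanisms actually work.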

The actual handling of unexpected conditions requires that the system has some capability to instigate recovery which can for instance consist of having fall-back options for the different sub-systems, providing automatic remediation services which monitor the system state and apply different standard recovery techniques like restarting failing services or machines or creating new resources to replace missing ones.
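One of the standard recovery techniques mentioned above, restarting failing services, could be sketched as follows. The state labels and the `restart_service` hook are assumptions for illustration, not part of any real monitoring system:

```python
def remediate(service_states, restart_service):
    """Apply a simple recovery technique: restart every failed service.

    `service_states` maps a service name to 'healthy' or 'failed';
    `restart_service` is a hypothetical hook into the infrastructure.
    Returns the list of services for which a restart was attempted,
    so the remediation run can be logged and reviewed afterwards.
    """
    restarted = []
    for name, state in service_states.items():
        if state == "failed":
            restart_service(name)
            restarted.append(name)
    return restarted
```

A real remediation service would run a loop like this continuously and escalate to a human when a restart does not bring the service back.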

From the high level descriptions given above it is hopefully clear that creating a resilient development pipeline is not easy and that, depending on the demands placed on the pipeline, many hours of work will go into improving the current state and learning from failures. To ensure that this effort is not wasted it is definitely worth applying iterative improvement approaches and only continuing the improvement process while there is actual demand for improvements.

Software development pipeline - Design performance

Sunday, November 5, 2017 | Posted in Delivering software DevOps Pipeline design Software development pipeline

The second property to consider is performance, which in this case means that the pipeline should provide feedback on the quality of the current input set as soon as possible, in order to keep the feedback cycle short. A short feedback cycle makes it easier for the development teams to make improvements and fix issues.

There are two main components to development pipeline performance:

  • How quickly one specific input set can be processed completely by the pipeline. In other words, how much time it takes to push a single input set through the pipeline, from the initial change to the delivery of the final artefacts. This depends on the number of steps in the development pipeline and on how quickly each step can be executed.
  • How quickly a large number of input sets can be processed. The number of executors will most likely be capped at some maximum value, so the pipeline is limited in the number of input sets it can process simultaneously. How quickly the pipeline can process a large number of input sets therefore depends both on the time necessary to process a single input set and on the ratio between the total number of input sets and the number of executors.
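The relation between these two components can be made concrete with a back-of-the-envelope model. This is a deliberate simplification, assuming every input set takes roughly the same time; the function and parameter names are illustrative only:

```python
import math


def total_processing_time(input_sets, executors, minutes_per_set):
    """Estimate how long a batch of input sets takes to clear the pipeline.

    Up to `executors` input sets run in parallel, so the batch is processed
    in 'waves'. Real pipelines will deviate from this, but the model shows
    why both the per-set time and the executor count matter.
    """
    waves = math.ceil(input_sets / executors)
    return waves * minutes_per_set
```

For example, 20 input sets on 4 executors at 10 minutes per set take about 50 minutes, while halving the per-set time or doubling the executor count each cut that roughly in half.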

Optimizing the combination of these two components leads to a development pipeline designed for maximum throughput for the development team. One important note is that a high performing pipeline will not necessarily be the most resource efficient pipeline. For instance the development pipeline may only be fully loaded a few times a week. From a resource perspective the pipeline components are more than capable of dealing with the load; in fact the components may even be oversized. However, because one of the main goals of the pipeline is to deliver fast feedback to the development teams, the actual sizing of the pipeline and its components depends more on the way the pipeline will be loaded over time, e.g. will the jobs come as a constant stream or in blocks, and will the jobs be small, large or a mixture of both.

In some cases the loading pattern can be accurately predicted while in other cases it is completely unpredictable. In general the pattern will depend on the workflow followed by the development team and on the geographical distribution of the team. For instance, when the team follows the Scrum methodology it is likely, though not guaranteed, that there will be more builds in the middle of the sprint than at the start or end, while with the Kanban methodology the load on the system should be fairly consistent. Additionally the geographical distribution of the development team influences the times at which the pipeline will be loaded. If the entire team is in a single geographical location then higher loads can be expected during the day and lighter loads during the evening and night. If the team is distributed across the globe the loading will likely be more consistent across the day, because the different locations have ‘office hours’ at different times of day, as seen from the perspective of the servers which are part of the development pipeline. Taking these issues into account when sizing the capacity of the development pipeline may lead to increasing that capacity, for instance because the current peak loading during working hours results in wait times which are too long.

With this high level information it is possible to start improving the performance of the development pipeline. This obviously leads to the question: “What practical steps can we take?”. As usual when dealing with performance improvements it is hard to provide specific solutions, because these depend on the situation at hand. It is, however, possible to provide some general advice.

The very first step when dealing with performance is always to measure everything. In the case of the development pipeline it is useful to gather metrics constantly and to automatically process these metrics into several key performance indicators, e.g. the number of input sets per time span, which describes the loading pattern, the waiting time for each input set before it is processed and the time taken to process each input set. These key performance indicators can then be used to track performance improvements as changes are made to the pipeline.
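Turning raw metrics into the key performance indicators mentioned above can be sketched as follows. The record keys (`queued`, `started`, `finished` as timestamps in seconds) are assumptions; real metrics stores will differ, but the derived indicators are the same:

```python
from statistics import median


def pipeline_kpis(records):
    """Summarize raw pipeline metrics into key performance indicators.

    `records` is a list of dicts with (assumed) keys 'queued', 'started'
    and 'finished', each a timestamp in seconds. The wait time describes
    how long input sets sit in the queue; the process time describes how
    long the pipeline takes once an executor picks an input set up.
    """
    waits = [r["started"] - r["queued"] for r in records]
    runs = [r["finished"] - r["started"] for r in records]
    return {
        "input_sets": len(records),
        "median_wait_seconds": median(waits),
        "median_process_seconds": median(runs),
    }
```

Computed over a rolling window, these numbers make it visible whether a change to the pipeline actually shortened the feedback cycle.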

One important issue to keep in mind is that, unlike accuracy, performance may change over time even if there are no changes to the system, because the performance of the underlying infrastructure might change, for instance when disks fill up, the network load changes or the hardware ages. This means it is important to track performance trends over longer periods of time to average out the influence of temporary infrastructure changes, e.g. network loading.

With all that out of the way some of the standard steps that can be taken are:

  • Each pipeline stage should only perform the necessary steps to achieve the desired goal. This for instance means that partial builds are better than full rebuilds, from a performance perspective.
  • Only gather data that will be used during the current stage. Gathering data that is not required wastes time, thus smaller input sets are quicker to process.
  • When pulling data, locality matters. Pulling data off the local disk is faster than pulling it off the network, and pulling data off the local network is faster than pulling it from the WAN or the internet. Additionally, data that is not local should be cached so that it only needs to be retrieved once.
  • Ensure that pipeline stages run on suitable ‘hardware’, either physical or virtual. Ideally the stage is executed on hardware that is optimized for the performance demands of the step, e.g. execute I/O bound steps on hardware that has fast I/O etc.
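The locality advice above boils down to a cache-first retrieval pattern. The sketch below is an assumption about how such a helper could look, with `fetch_remote` standing in for whatever slower remote retrieval the pipeline uses:

```python
import os


def fetch_with_cache(name, cache_dir, fetch_remote):
    """Retrieve a dependency, preferring the local cache over the network.

    On the first call the data is fetched remotely and written to the
    cache directory; every later call for the same name is served from
    local disk, which is the fastest option available to the executor.
    """
    path = os.path.join(cache_dir, name)
    if os.path.exists(path):
        with open(path, "rb") as stream:
            return stream.read()
    data = fetch_remote(name)
    with open(path, "wb") as stream:
        stream.write(data)
    return data
```

Note that this is exactly the pattern that can cause the cache pollution mentioned earlier, which is why caches should be backed by trusted internal mirrors and be easy to wipe.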

In addition to these improvements it will be important to review and improve the ability of the pipeline to execute many input sets in parallel.

  • Ensure that the pipeline applications which deal with the distribution of input sets are efficient at this task. It's not very useful to start processing an input set only to find out that there are no executors that can process the given input set (I'm looking at you, TFS2013).
  • Splitting a single stage into multiple parallel stages will reduce the processing time for a single input set. However it might decrease overall throughput, because a single input set then occupies multiple executors at once. Note that splitting a single stage into many parallel stages might also reduce performance due to the overhead of transitioning between stages.
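The first point, checking executor capabilities before work is queued, can be sketched as a simple capability match. The capability names below are purely illustrative and not taken from any particular CI system:

```python
def dispatch(input_set_requirements, executor_capabilities):
    """Pick an executor that can actually process the input set.

    `input_set_requirements` is a set of required capabilities;
    `executor_capabilities` maps executor names to the capabilities they
    offer. Checking this up front avoids the failure mode where an input
    set is accepted but no executor can ever run it.
    """
    for executor, capabilities in executor_capabilities.items():
        if input_set_requirements <= capabilities:
            return executor
    raise ValueError("no executor can process this input set")
```

Raising immediately when no match exists means the submitter gets fast feedback instead of an input set that waits in the queue forever.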

The mentioned improvements form a start for improving the performance of the pipeline. Depending on the specific characteristics of a given pipeline other improvements and design choices may be valid.

Finally it must be mentioned that some performance improvements will negatively influence the other properties. For instance, using partial builds may influence accuracy. In the end a trade-off will need to be made for changes that influence multiple properties.