Build Concurrency


It is good practice that each change set submitted by a developer is verified/tested on the build server to ensure it does not break basic functionality. In a busy team, developers may submit code changes frequently, causing many builds to be requested at the build server. It is important to be able to run these builds concurrently so that developers get fast feedback on their submitted changes.

Build concurrency in QuickBuild is determined by workspace access: if two builds try to access the same workspace at the same time, only one build is allowed to use that workspace. The second build waits until the first build has finished using it. Since different configurations use different workspaces on the same node, two builds from different configurations can always run concurrently, unless you limit concurrency by controlling the number of workers in the build queue.
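As a rough mental model (a hypothetical sketch, not QuickBuild's actual implementation), workspace locking can be pictured as one lock per (node, configuration) pair, with each build running on its own thread. Two builds contend only when both parts of the key match; different configurations yield different keys, so their builds never block each other. The WorkspaceLocks class and its keying scheme below are assumptions for illustration:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;

    // Hypothetical sketch: workspace locks keyed by (node, configuration),
    // assuming each build executes its steps on its own thread.
    public class WorkspaceLocks {
        private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

        private ReentrantLock lockFor(String node, String configuration) {
            // One workspace per configuration per node, hence one lock per key.
            return locks.computeIfAbsent(node + ":" + configuration,
                    k -> new ReentrantLock());
        }

        public void runInWorkspace(String node, String configuration, Runnable step) {
            ReentrantLock lock = lockFor(node, configuration);
            lock.lock();          // blocks while another build holds this workspace
            try {
                step.run();       // step executes with exclusive workspace access
            } finally {
                lock.unlock();    // workspace becomes available to waiting builds
            }
        }
    }

Under this model, builds of different configurations never share a key, so they never block each other; two builds of the same configuration collide only on nodes where both actually run steps.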

Things get more complicated for builds from the same configuration. The level of concurrency is affected by how steps are organized to execute on different nodes. When a step starts to execute, it tries to lock the configuration workspace on the node it is running on. If the workspace is already locked by another build, the step waits until it is unlocked. Once the workspace is locked, the step continues to execute and does not unlock the workspace until it finishes. We'll use the step graph below to explain this in more detail:

  • When the master step starts to run, it tries to lock the configuration workspace on server node lark:8810. If the same workspace is locked by another build, it will wait until this workspace is unlocked by that build.
  • The master step continues to run and triggers step1. When step1 starts to run, it tries to lock the workspace on agent node lark:8811. If the same workspace is locked by another build, it will wait until this workspace is unlocked by that build.
  • step1 continues and finishes. The workspace on node lark:8811 held by this step is unlocked.
  • step2 is triggered, and it tries to lock the workspace on server node lark:8810. Since this workspace is already locked by the current build (albeit by the master step), step2 proceeds immediately and finishes; the lock is effectively reentrant within a build (see the sketch after this list).
  • The master step finishes and the workspace on server node lark:8810 will be unlocked.
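The walkthrough above implies that workspace locks are scoped to builds, not to individual steps: step2 may enter a workspace its own build already holds, while a step from another build must wait until the master step releases it. Below is a minimal, hypothetical sketch of such a build-scoped lock in Java; the BuildScopedLock class and its acquire/release protocol are assumptions for illustration, not QuickBuild's actual code:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch of a workspace lock that is "reentrant" at the
    // build level: a step may enter a workspace its own build already holds
    // (step2's case above), while steps from other builds must wait.
    public class BuildScopedLock {
        private final Map<String, String> owners = new HashMap<>();  // workspace key -> owning build id
        private final Map<String, Integer> holds = new HashMap<>();  // workspace key -> nesting depth

        public synchronized void acquire(String workspaceKey, String buildId)
                throws InterruptedException {
            // Wait only while a *different* build owns the workspace.
            while (owners.containsKey(workspaceKey)
                    && !owners.get(workspaceKey).equals(buildId)) {
                wait();
            }
            owners.put(workspaceKey, buildId);
            holds.merge(workspaceKey, 1, Integer::sum); // nested acquisition by the same build
        }

        public synchronized void release(String workspaceKey, String buildId) {
            if (!buildId.equals(owners.get(workspaceKey))) {
                throw new IllegalStateException("workspace not held by build " + buildId);
            }
            if (holds.merge(workspaceKey, -1, Integer::sum) == 0) {
                holds.remove(workspaceKey);
                owners.remove(workspaceKey); // last hold released; wake waiting builds
                notifyAll();
            }
        }
    }

In the walkthrough, the master step's acquire of lark:8810's workspace sets the owner to the current build, so step2's acquire of the same key returns immediately; a second build's acquire would block in the wait loop until the master step's final release.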

For a particular configuration, the master step runs on the server node by default (unless you've changed the node match condition). If a build of the configuration is already running, another build of the same configuration will be put into running status as long as the queue has free workers. However, its master step will wait until the first build finishes, since it tries to lock the configuration workspace on the server node, which is already locked by the master step of the first build. To work around this, you may tune the node match condition of the master step so that it randomly chooses an appropriate agent node instead of always running on the same node; the master step then uses workspaces on different nodes, avoiding contention on a single workspace. This technique also applies to child steps. The rule for achieving high concurrency for builds of the same configuration is: for any step that takes a long time to run (including composite steps), configure the node match condition so that it can run on multiple agent nodes.
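To illustrate the idea (not QuickBuild's actual node match API), here is a hypothetical Java sketch of random node selection: filter the grid to the nodes satisfying a match condition, then pick one at random, so that long-running steps of concurrent builds tend to land on different nodes and therefore different workspaces. The RandomNodeMatcher class, node names, and predicate are assumptions for illustration:

    import java.util.List;
    import java.util.Random;
    import java.util.function.Predicate;
    import java.util.stream.Collectors;

    // Hypothetical sketch of the "random agent" strategy: choose a node at
    // random among those matching, instead of always using the server node.
    public class RandomNodeMatcher {
        private final Random random = new Random();

        public String pick(List<String> nodes, Predicate<String> matches) {
            List<String> candidates =
                    nodes.stream().filter(matches).collect(Collectors.toList());
            if (candidates.isEmpty()) {
                throw new IllegalStateException("no node satisfies the match condition");
            }
            return candidates.get(random.nextInt(candidates.size()));
        }

        public static void main(String[] args) {
            RandomNodeMatcher matcher = new RandomNodeMatcher();
            List<String> grid = List.of("lark:8810", "lark:8811", "lark:8812");
            // Exclude the server node so master steps of concurrent builds do
            // not all queue on lark:8810's workspace.
            String node = matcher.pick(grid, n -> !n.equals("lark:8810"));
            System.out.println("master step will run on " + node);
        }
    }

With two agents to choose from, two concurrent builds of the same configuration will often run their master steps on different nodes, and so lock different workspaces instead of queueing on one.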
