JENKINS-37718

Block Pipeline job while upstream or downstream projects are building



      Description

      Maybe it's just me, but I think giving Pipeline and Multibranch Pipeline projects the option to block execution while up- or downstream projects are building would be beneficial, especially when migrating existing installations (see my posts on Stack Overflow and in Jenkins CI).

      With Freestyle and Matrix projects there was the option of going to the "Advanced Project Options" and specifying that the build should be blocked while an upstream/downstream project is building. For Pipeline and Multibranch this option is unavailable, and I did not find a means of synchronizing a Pipeline/Multibranch project with existing Freestyle/Matrix projects without orchestrating this in another pipeline script. However, having to do that with an installation of about 400 inter-dependent projects is a huge block of work that has to be sorted out in one go. Having those "block when upstream/downstream project is building" options would allow for a gradual and smoother migration.
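      For reference, the Freestyle checkboxes in question can also be set in code via the Job DSL plugin; a minimal sketch (the job name is a placeholder, and this assumes the Job DSL plugin is installed):

          // Sketch: a Freestyle job with the two "Advanced Project Options"
          // checkboxes discussed above enabled via Job DSL.
          freeStyleJob('project-a') {
              blockOnUpstreamProjects()    // block while upstream projects are building
              blockOnDownstreamProjects()  // block while downstream projects are building
          }

      It is exactly this pair of options that has no equivalent on Pipeline and Multibranch Pipeline job types.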

        Attachments

          Issue Links

            Activity

            jglick Jesse Glick added a comment -

            Unlikely to implement anything like this.

            vgimple Volker Gimple added a comment -

            Hi Jesse,

            First of all: Thanks for looking into this!

            What I can't get my head around is how to handle the problem of potential file overwrites without the ability to define mutually exclusive execution of different projects. Consider the simplified scenario where project A builds a shared object file and after a successful build stores the results (shared object file plus header file(s) to go with it) on a file server (simple file copy). Project B builds an executable that uses the shared object and header file(s) built and copied by project A. If I cannot make sure that project A and project B never build simultaneously it may (and therefore sooner or later will) happen that project A updates the files on the server while project B is accessing them. If that happens on a Linux node it'll lead to build errors in project B, while on a Windows node the deployment step of project A will fail.

            The only real workaround I have found so far (apart from merging project A and project B into one project - an approach that becomes less of an option if you have not just one but numerous projects of type A) is to split each project in two: a pipeline project that holds the pipeline definition (but does not trigger on SCM changes or upstream builds), and a freestyle project that defines the build trigger(s), dependencies and synchronization responsibilities, and whose only build step triggers the pipeline project and blocks until its execution finishes. This works for me, and the overhead caused by the additional freestyle project is manageable; it just feels like an unnecessary detour given that in principle the blocking functionality is available on all the other project types.

            jhoblitt Joshua Hoblitt added a comment -

            I echo all of Volker Gimple's comments. A synchronization primitive would be really useful.

            jglick Jesse Glick added a comment -

            A synchronization primitive would be really useful.

            Did not read any of the preceding long paragraph, but to this: install the Lockable Resources plugin and you will get the lock step.
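            As a minimal sketch of that suggestion (assuming the Lockable Resources plugin is installed; the resource name and deploy script are hypothetical), both project A's deploy step and project B's consuming step would wrap their file-server access in the same lock:

                node {
                    // Only one build holding 'shared-object-artifacts' runs this
                    // block at a time, so A's file copy and B's reads never overlap.
                    lock('shared-object-artifacts') {
                        sh './deploy-to-fileserver.sh'  // hypothetical copy script
                    }
                }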

            vgimple Volker Gimple added a comment -

            The Lockable Resources plugin is a fine tool. However, in the scenario described in the preceding long paragraphs it's not much use, because one would either have to use a very broad lock (bad for build efficiency) or define a lock for every identifiable file group (hard to maintain if there are many).

            jglick Jesse Glick added a comment -

            hard to maintain

            lock can take a computed lock name, if that helps.
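            For example (a sketch; the naming scheme here is hypothetical), the lock name can be derived per file group instead of each lock being maintained by hand:

                // One lock per artifact group, computed from a variable rather
                // than listed manually for every group.
                def group = env.JOB_BASE_NAME  // or any other per-group key
                lock("artifacts-${group}") {
                    sh './deploy-to-fileserver.sh'  // hypothetical copy script
                }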

            gaalandr Andras Gaal added a comment - - edited

            I am afraid lock does not help.

            Let's have the following jobs: A, B, C, D.

            A, B, and C should block D, but D must not block A, B, or C. A, B, and C are different jobs and can run in parallel.

            jglick Jesse Glick added a comment -

            Did not understand the last comment at all.

            sithmein Thorsten Meinl added a comment -

            I also don't see how Lockable Resources can replace this very useful functionality of Freestyle jobs. Is there any reason why you don't want to implement this for Pipeline jobs?
            Our use case is a chain of jobs such as

              B - C - D
             /         \
            A - E - F - G

            G combines the results from D and F, and it doesn't make sense to run it while D or F (or any of their upstream jobs) is currently executing, because the current results will be replaced quite soon by new ones. It doesn't hurt, but it's a huge waste of resources.

            vgimple Volker Gimple added a comment -

            Hi Thorsten!

            Thanks for chiming in!
            What I ended up doing was this: for every pipeline job I needed, I created a freestyle job as the "trigger buddy" for the pipeline job. The pipeline job does not get any build triggers at all, while the "trigger buddy" has a combination of upstream dependencies and SCM triggers that determine when the pipeline should run. As it is a freestyle job, you can block it while its up-/downstream dependencies are building. The only build step the trigger buddy has is to invoke the pipeline project and wait for its completion. It's a bit of an overhead to manage, but as long as pipelines do not have the blocking options I don't see an easier alternative.

            Most of the time this approach works fine, but when the load on the Jenkins server became very high we saw synchronization problems (pipeline steps hanging, or the trigger buddy waiting for the pipeline to finish after it already had). It's probably unreasonable to expect Jenkins to cope with extreme situations, though, and we managed to throttle the builds to the point where these problems disappeared.

            If you are concerned about wasting resources because Jenkins tends to pick suboptimal paths through your dependency tree, you might want to have a look at the Dependency Queue Plugin. The last maintained version seems incompatible with Jenkins 2.x, but if you go ahead and build the latest sources from GitHub you'll get it working. It also helps to reduce the dependency tree as much as possible. I initially had ours reflect the source code dependencies one by one, but if you reduce it to only the absolutely necessary paths (e.g. if you have three projects A, B and C, where A triggers B and C, and B triggers C, it's better not to list A in the upstream dependencies of C, because A will implicitly trigger C anyway), then Jenkins will run through your build a lot more smoothly.

            jglick Jesse Glick added a comment -

            G combines the results from D and F

            Generally better to make this all one Pipeline job.

            sithmein Thorsten Meinl added a comment - - edited

            No, there are good reasons to split this into multiple jobs. One being that the support for multiple SCM checkouts from Git in Jenkins completely sucks. And maintainability: every job has a Jenkinsfile that is about a hundred lines long; combining them into one job makes it unreadable. Also, the parallel step mixes the output of all nodes, which renders it close to unreadable.
            Also, being able to start the intermediate jobs individually isn't possible with one single pipeline job. Shall I continue?

            jhoblitt Joshua Hoblitt added a comment - - edited

            I believe the advice was to use a pipeline job to orchestrate other jobs. E.g.

                stage('foo') {
                  retry(retries) {
                    build job: 'bar'
                  }
                }
            
            vgimple Volker Gimple added a comment -

            Sorry if that's a dumb question, but I cannot yet see how these orchestration pipelines help address the issue at hand. I do get that they help define the build sequence nicely and without the need for the two options that spawned this discussion thread. However, as far as I understood it, I would either

            • have to run the entire pipeline whenever one of the orchestrated jobs' source code changes or...
            • still need dependency triggers on the individual projects

            The first scenario is not desirable, as it tends to become a significant waste of resources once we are looking at more than five to ten jobs. And with the second scenario we're back at the point where having these blocking checkboxes becomes a bit of a necessity. Of course one could start breaking things down into several smaller job groups so that the orchestration jobs make more sense, but then again this is an extra configuration effort that simply wasn't necessary when only using freestyle and matrix jobs, and therefore feels like a bit of an overhead.

            jglick Jesse Glick added a comment - - edited

            Every job has a Jenkinsfile that is about a hundred lines long. Combining it into one job makes it unreadable.

            Libraries, loadFile

            Also the parallel step mixes the output of all nodes which renders it close to unreadable.

            Fix in progress.

            Also being able to start the intermediate jobs individually isn't possible with one single pipeline job.

            Actually it is possible, though we do not yet provide a convenient framework for it; possibly Declarative will in the future.

            dario_simonetti Dario Simonetti added a comment -

            What about this situation?

                    A
                  / | \
                 B  |  C
                  \ | /
                    D

            A depends on B, C and D, while B and C depend on D. When a build of D is successful, it'll trigger a build of A, B and C. But you don't want A to build straight away, as it'll get triggered again as soon as B and C are built. So A should block until both B and C are built. Freestyle allows doing this and it works very well; I don't understand why this is not possible for Pipeline projects.

            chris_mh3 chris_mh3 added a comment -

            We have exactly the situation Dario described, with about 60 interdependent jobs. The excessive triggering of downstream jobs makes pipeline jobs practically unusable for our builds. (In Dario's example, A is triggered 3 times instead of once.)

            The total build time is about 6 hours with pipeline jobs instead of less than 1 hour with freestyle jobs because of the missing blocking feature.

            dario_simonetti Dario Simonetti added a comment -

            Same here; we sometimes end up in a situation where there are 20 projects in the queue, most of which are duplicates. I think I'm going to work on a fix for this in the next few weeks and report back here.

            vgimple Volker Gimple added a comment - - edited

            Hi Dario!

            Two days ago a comment in a related StackOverflow question popped up that you might be interested in: http://stackoverflow.com/questions/38845882/how-can-i-block-a-jenkins-2-x-pipeline-job-while-dependent-jobs-are-building?noredirect=1#comment74131469_38845882

            It seems somebody is already looking into fixing this; maybe you want to get in touch with him.
            dario_simonetti Dario Simonetti added a comment -

            That person is me, haha! "satoshi" is a nickname I use sometimes... But thanks for the heads-up, and apologies for the confusion!

            vgimple Volker Gimple added a comment -

            In that case I should apologize as well: StuporMundi is my nickname... Thanks a lot for looking into this!

            gaalandr Andras Gaal added a comment -

            Hi Dario,

            did you manage to work on this? Do you have any updates, please?
            dario_simonetti Dario Simonetti added a comment -

            Hi Andras Gaal, unfortunately I haven't yet found time to work on this. Obviously if anyone else wants to work on this please feel free and let people know here.

            jglick Jesse Glick added a comment -

            As stated, depends on nontrivial core refactoring.

            Generally this part of core has always been rather fragile and should arguably be deprecated en masse in favor of some plugin with a fresh design.

            markusdlugi Markus Dlugi added a comment -

            Just a heads-up for people who also have the issue that too many builds are triggered when using pipelines: we implemented something similar to what the old Maven project type does in the Pipeline Maven Plugin, as part of JENKINS-46313. You will need to set up your pipelines to use the withMaven() step in conjunction with its PipelineGraphPublisher feature to trigger pipeline builds when a snapshot dependency has been built. Then your pipelines will trigger each other, but the plugin will notice when a job will be triggered later in the dependency chain, so the scenario described by Dario Simonetti won't happen anymore.

            Of course, this is not a general solution since it only applies to Maven and not to other build tools such as Gradle, but it might be useful for people who are still using the Maven project type and are looking into migrating to pipeline.
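            A minimal sketch of such a pipeline (assuming the Pipeline Maven Plugin is installed; the Maven goals are illustrative):

                node {
                    checkout scm
                    // withMaven records the build's dependency graph (via
                    // PipelineGraphPublisher), which the plugin uses to trigger
                    // downstream pipelines without the duplicate builds
                    // described above.
                    withMaven() {
                        sh 'mvn clean deploy'
                    }
                }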

            sberube Steve Berube added a comment -

            This would be very useful for us as well.

            laeubi Christoph Läubrich added a comment - - edited

            This would be useful for us as well. I'm wondering if this could be integrated in the trigger part:

                triggers {
                    pollSCM('H/5 * * * *')
                    upstream(upstreamProjects: 'myprojectA,myprojectB', threshold: hudson.model.Result.SUCCESS, blocking: true)
                }

            So in this case, if an SCM change is detected but either A or B is running, the job is paused until A and B have finished.

            Same if A triggers the build but B is still running.

            Would this be possible? I can even try to provide a patch if someone can point me in the right direction. Where would such a change be placed?

            jglick Jesse Glick added a comment -

            This is unlikely to be a simple patch. A major chunk of Jenkins core APIs would need to be refactored.

            I do not think it is worth doing anyway. Particular use cases are better handled by newer idioms, existing or to be built.


  People

  • Assignee: Unassigned
  • Reporter: vgimple Volker Gimple
  • Votes: 26
  • Watchers: 35
