JENKINS-45571: "likely stuck" job is not actually stuck.

      Description

      I doubt this one is reproducible.

      Going to JENKINS_URL/computer/api/json?pretty=true&tree=computer[oneOffExecutors[likelyStuck,currentExecutable[result,url]]]{0}

      gives you the jobs currently running.

      One of my jobs is marked as "likely stuck", but its result is "SUCCESS" (and has been "SUCCESS" for 2.5 hours), which makes me doubt the accuracy of the "likely stuck" flag.

      The job isn't running either. It's completed, but is still somehow showing as "likely stuck". 
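
      For illustration, a minimal Groovy script console sketch (names and output format are just an example) that lists the same information as the API query above: each one-off executor, its likelyStuck flag, and the result of its executable.

          // Script console sketch: mirror the oneOffExecutors API query above.
          import hudson.model.Run
          import jenkins.model.Jenkins

          Jenkins.get().computers.each { computer ->
            computer.oneOffExecutors.each { executor ->
              def executable = executor.currentExecutable
              def result = (executable instanceof Run) ? executable.result : null
              println "${computer.displayName}: likelyStuck=${executor.likelyStuck}, " +
                      "executable=${executable}, result=${result}"
            }
          }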

            Activity

            zeal_iskander Stark Gabriel created issue -
            danielbeck Daniel Beck added a comment -

            Builds are successful until they're not, even while running: a SUCCESS result does not mean the build has finished. In fact, while an executor is in use, the build is very likely not finished.

            danielbeck Daniel Beck added a comment -

            From another source I got a few logs of something that looks a lot like this – could you check whether the listed builds appear in the Jenkins system log as having failed to resume after restart, or similar?

            dnusbaum Devin Nusbaum added a comment -

            The best theory we have is that something is causing the OneOffExecutors to not be cleaned up correctly, and that it might be related to resuming pipelines at startup. Stark Gabriel Are you still seeing this issue? If so, what versions of the workflow-cps and workflow-job plugins do you have installed? Do you see any log messages about the builds that completed but are showing as likely stuck?

            zeal_iskander Stark Gabriel added a comment - edited

            I wouldn't know, I don't work at that company anymore. Sorry!

            dnusbaum Devin Nusbaum added a comment -

            Stark Gabriel No problem, thanks for replying!

            svanoort Sam Van Oort made changes -
            Field Original Value New Value
            Component/s workflow-cps-plugin [ 21713 ]
            Component/s workflow-job-plugin [ 21716 ]
            svanoort Sam Van Oort made changes -
            Assignee Stark Gabriel [ zeal_iskander ] Sam Van Oort [ svanoort ]
            svanoort Sam Van Oort added a comment -

            Devin Nusbaum Daniel Beck IIUC what is being described, this actually maps to a really obnoxious bug I've been investigating for several weeks that has been blocking release of a fix/improvement to persistence (with threading implications due to the use of synchronization during I/O).

            It seems to be on the Pipeline end itself though – it relates to how the Pipeline job interacts with the OneOffExecutor created when it throws an AsynchronousExecution upon running. The Pipeline may even get marked as completed, but somehow the listener that terminates the Pipeline is not invoked. I'm simplifying grossly here, of course; in reality there is a very complex asynchronous chain of events with a complex threading model underlying all this.

            AFAICT the behavior can be traced to the resume-after-restart situation when state was incompletely persisted, but it requires somewhat precise timing to trigger the events.

            Some situations that cause this behavior have probably been solved by prior fixes, but clearly not all of them.

            dnusbaum Devin Nusbaum made changes -
            Remote Link This issue links to "jenkinsci/workflow-cps-plugin#234 (Web Link)" [ 21237 ]
            svanoort Sam Van Oort made changes -
            Link This issue is related to JENKINS-50199 [ JENKINS-50199 ]
            atikhonova Anna Tikhonova added a comment -

            I'm seeing this issue as well. Lots of executors listed in /computer/api/json?pretty=true&tree=computer[oneOffExecutors[likelyStuck,currentExecutable[building,result,url]]]{0} in the following state:

            {
              "currentExecutable" : {
                "_class" : "org.jenkinsci.plugins.workflow.job.WorkflowRun",
                "building" : false,
                "result" : "SUCCESS",
                "url" : url
              },
              "likelyStuck" : true
            }

            However, in my case it doesn't seem to be related to resuming pipelines at Jenkins startup. I have written a script to clean up such executors. I haven't restarted Jenkins since the script ran, and I still see new executors like those.
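
            For illustration, a hypothetical script console sketch of such a cleanup (not the actual script mentioned above) could interrupt one-off executors whose executable is a run that has already finished:

                // Hypothetical cleanup sketch: interrupt one-off executors whose run has completed.
                import hudson.model.Run
                import jenkins.model.Jenkins

                Jenkins.get().computers.each { computer ->
                  computer.oneOffExecutors.each { executor ->
                    def executable = executor.currentExecutable
                    if (executable instanceof Run && !executable.isBuilding()) {
                      println "Interrupting stale one-off executor for ${executable.fullDisplayName}"
                      executor.interrupt()  // attempts to free the executor slot; the run itself is already finished
                    }
                  }
                }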

            atikhonova Anna Tikhonova added a comment -

            What makes this bug more interesting is that it interferes with Throttle Concurrent Builds (TCB) plugin scheduling: TCB prevents scheduling more builds because it counts those hanging executors. Once there are more hanging executors than the maximum total concurrent builds configured for a job (N), the job is stuck forever ("pending—Already running N builds across all nodes").

            dnusbaum Devin Nusbaum added a comment -

            Anna Tikhonova The fact that you are seeing the issue without restarting Jenkins is very interesting. Do you have a pipeline which is able to reproduce the problem consistently?

            svanoort Sam Van Oort added a comment -

            Note from investigation: separate from JENKINS-50199, there appears to be a different but related failure mode:

            1. The symptoms described by Anna will be reproduced if the build completes (WorkflowRun#finish is called) but the copyLogsTask is never invoked or fails, since that is what actually removes the FlyweightTask and kills the OneOffExecutor. See the CopyLogsTask logic - https://github.com/jenkinsci/workflow-job-plugin/blob/master/src/main/java/org/jenkinsci/plugins/workflow/job/WorkflowRun.java#L403
            2. If the AsynchronousExecution is never completed, we'll see a "likelyStuck" executor for each OneOffExecutor.
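
            As a diagnostic sketch (assumption: a run that no longer appears in FlowExecutionList but still occupies a OneOffExecutor matches this failure mode), the two lists can be cross-referenced from the script console:

                // Diagnostic sketch: find one-off executors whose run has left FlowExecutionList.
                import jenkins.model.Jenkins
                import org.jenkinsci.plugins.workflow.flow.FlowExecutionList
                import org.jenkinsci.plugins.workflow.job.WorkflowRun

                // Pipeline runs that are still considered in progress.
                def runningPipelines = FlowExecutionList.get().collect { it.owner.executable } as Set

                Jenkins.get().computers.each { computer ->
                  computer.oneOffExecutors.each { executor ->
                    def executable = executor.currentExecutable
                    if (executable instanceof WorkflowRun && !runningPipelines.contains(executable)) {
                      println "Dangling one-off executor: ${executable} on ${computer.displayName}"
                    }
                  }
                }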

            atikhonova Anna Tikhonova added a comment - edited

            Devin Nusbaum unfortunately, I don't. I've got a few 1000+ LOC pipelines running continuously. I do not know how to tell which one leaves executors and when.

            A Pipeline build that has such a "likelyStuck" executor looks completed on its build page (no progress bars, and the build status is set), but I can still see a matching OneOffExecutor on the master:

                  "_class" : "hudson.model.Hudson$MasterComputer",
                  "oneOffExecutors" : [
                    {
                      "currentExecutable" : {
                        "_class" : "org.jenkinsci.plugins.workflow.job.WorkflowRun",
                        "building" : false,    // always false for these lost executors
                        "result" : "SUCCESS",    // always set to some valid build status != null
                        "url" : "JENKINS/job/PIPELINE/BUILD_NUMBER/"
                      },
                      "likelyStuck" : false    // can be true or false
                    }, ...
            
            dnusbaum Devin Nusbaum added a comment - edited

            Anna Tikhonova Are you able to upload the build directory of the build matching the stuck executor? Specifically, it would be helpful to see build.xml and the XML file(s) in the workflow directory. EDIT: I see now that you can't easily tell which are stuck and which are good. If you can find an executor with likelyStuck: true whose build looks like it has otherwise completed or is stuck, that would be a great candidate.

            Another note: JENKINS-38381 will change the control flow here significantly.

            jglick Jesse Glick added a comment -

            "gives you the jobs currently running"

            This is not really an appropriate API query to use for that question. If your interest is limited to all Pipeline builds, FlowExecutionList is likely to be more useful. If you are looking at builds of a particular job (Pipeline or not), I think that information is available from the endpoint for that job.
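
            For example, a Groovy script console sketch along those lines (illustrative; assumes the Pipeline plugins are installed) that enumerates the currently running Pipeline builds:

                // Sketch: list currently running Pipeline builds via FlowExecutionList.
                import org.jenkinsci.plugins.workflow.flow.FlowExecutionList

                FlowExecutionList.get().each { execution ->
                  def run = execution.owner.executable   // typically a WorkflowRun
                  println "Running Pipeline build: ${run}"
                }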

            jglick Jesse Glick added a comment -

            "TCB prevents scheduling more builds because it counts those hanging executors."

            Offhand this sounds like a flaw in TCB. This PR introduced that behavior, purportedly to support the build-flow plugin (a conceptual predecessor of Pipeline née Workflow). If TCB intends to throttle builds per se (rather than work done by those builds—typically node blocks for Pipeline), then there are more direct ways of doing this than counting Executor slots.

            vivek Vivek Pandey made changes -
            Labels api api triaged-2018-11
            basil Basil Crow added a comment -

            "Offhand this sounds like a flaw in TCB."

            I am attempting to fix this flaw in jenkinsci/throttle-concurrent-builds-plugin#57.

            dnusbaum Devin Nusbaum made changes -
            Link This issue relates to JENKINS-53158 [ JENKINS-53158 ]
            basil Basil Crow added a comment -

            "I am attempting to fix this flaw in jenkinsci/throttle-concurrent-builds-plugin#57."

            This PR has been merged, and the master branch of Throttle Concurrent Builds now uses FlowExecutionList to calculate the number of running Pipeline jobs, which should work around the issue described in this bug. I have yet to release a new version of Throttle Concurrent Builds with this fix, but there is an incremental build available here. Anna Tikhonova, are you interested in testing this incremental build before I do an official release?

            basil Basil Crow made changes -
            Link This issue relates to JENKINS-61087 [ JENKINS-61087 ]
            dnusbaum Devin Nusbaum made changes -
            Link This issue relates to JENKINS-60348 [ JENKINS-60348 ]

  People

  • Assignee: svanoort Sam Van Oort
  • Reporter: zeal_iskander Stark Gabriel
  • Votes: 0
  • Watchers: 10
