Jenkins / JENKINS-5413

SCM polling getting hung


    Details

    • Type: Bug
    • Status: Open
    • Priority: Critical
    • Resolution: Unresolved
    • Component/s: remoting
    • Labels:
      None

      Description

      This is to track the problem originally reported here: http://n4.nabble.com/Polling-hung-td1310838.html#a1310838
      The referenced thread is relocated to http://jenkins.361315.n4.nabble.com/Polling-hung-td1310838.html

      What the problem boils down to is that many remote operations are performed synchronously, causing the channel object to be locked while a response returns. In situations where a lengthy remote operation is using the channel, SCM polling can be blocked waiting for the monitor on the channel to be released. In extreme situations, all the polling threads can wind up waiting on object monitors for the channel objects, preventing further processing of polling tasks.

      Furthermore, if the slave dies, the locked channel object still exists in the master JVM. If no IOException is thrown to indicate the termination of the connection to the pipe, the channel can never be closed, because Channel.close() is itself a synchronized operation.
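      The locking pattern described above can be modeled with a small Java sketch (hypothetical: Channel, call(), and the timings are invented for illustration and are not Jenkins' actual remoting classes). A long-running call holds the channel monitor, so a poller's call, and even close(), must wait behind it:

```java
// Hypothetical model of the reported failure mode: every remote operation
// synchronizes on the channel object, so the monitor is held until the
// response returns, and other callers (pollers, close()) queue behind it.
public class ChannelContention {

    static class Channel {
        synchronized String call(String op, long millis) throws InterruptedException {
            Thread.sleep(millis); // stands in for waiting on the remote response
            return op + " done";
        }

        synchronized void close() {
            // cannot be entered while call() holds the monitor -
            // which is why a dead slave's channel may never get closed
        }
    }

    public static void main(String[] args) throws Exception {
        Channel channel = new Channel();

        Thread build = new Thread(() -> {
            try {
                channel.call("lengthy remote operation", 500);
            } catch (InterruptedException ignored) {
            }
        });
        build.start();
        Thread.sleep(100); // let the build thread take the channel monitor first

        long start = System.nanoTime();
        channel.call("SCM poll", 1); // blocks until the build releases the monitor
        long waitedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(waitedMs > 200
                ? "poll blocked behind the build"
                : "poll ran immediately");
        build.join();
    }
}
```

      With enough concurrent long-running calls, every polling thread ends up parked on such a monitor, which matches the attached thread dumps.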

        Attachments

        1. DUMP1.txt
          57 kB
        2. hung_scm_pollers_02.PNG
          145 kB
        3. thread_dump_02.txt
          92 kB
        4. threads.vetted.txt
          163 kB

          Issue Links

            Activity

            dty Dean Yu created issue -
            rsteele rsteele added a comment -

            I believe I'm seeing this problem as well and have been for a couple (maybe more?) weeks. One difference: I'm using the ClearCase plugin as my SCM provider, but otherwise the symptoms seem to be the same: one of my slaves, though still "online", seems to get stuck while polling for changes (though I don't see any ClearCase processes running in Process Explorer). Furthermore, killing the slave doesn't seem to do any good and the master doesn't even notice the slave has died.

            javadude Carl Quinn added a comment -

            And I'm seeing it as well with a Perforce SCM.

            mdillon mdillon added a comment -

            Here is a stack dump from a Hudson master we were running after all 10 asynchronous polling threads were hung. The job names, executor names, and internal class names have been munged just in case. This thread dump appears to be missing the stack for the main thread for some reason, but I don't think that is a big deal.

            Once our server got itself into this state, we were not able to unstick polling without a restart. Disconnecting the affected executor did not cause these threads to go away and reconnecting the executor did not cause polling to resume.

            Our workaround has been to add a setting to revert the subversion plugin back to master-only polling on the affected installations. FWIW, we seem to see this on high-load Hudson installations.

            mdillon mdillon made changes -
            Attachment: threads.vetted.txt [ 19185 ]
            mdillon mdillon added a comment -

            BTW, that thread dump was from a Hudson master running the equivalent of Hudson 1.322. I don't know if anyone else in the company has a thread dump from a more recent Hudson version.

            dshields777 dshields777 added a comment -

            I'm seeing the same behavior.

            mdillon, you mention reverting to master-only polling as a workaround. How did you do that? Is there a config setting that I'm missing? Or do you mean you went back to an earlier version of the SVN plugin?

            dty Dean Yu added a comment -

            @dshields777: We build our Hudson installation from source with some modifications. We added a switch to the Subversion plugin to poll from the master. We can certainly contribute this patch upstream so other people can use the same workaround.

            wgracelee wgracelee added a comment -

            Hi, we have the same problem on Hudson 1.352 using Subversion. Is the patch available for download now?

            daniel_franzen daniel_franzen added a comment -

            Hi, I've been seeing this issue too (since late January - I wish I could give you an exact version number). I haven't found any way to reliably reproduce it (it happens at random every other day). It occurred as late as yesterday, running Hudson 1.353 with Subversion Plugin 1.16.

            The typical scenario in our case is as follows:
            1) A job hangs.*
            2) The node becomes unusable. A job starting on the node stops at "Building remotely on MySlave".
            3) SVN polling gets stuck.
            4) The node can be made usable by disconnecting and reconnecting in Hudson's node management.
            5) Polling only resumes after a Hudson restart.

            *) Some background on how these job hang-ups manifest themselves:
            There's one particular job that hangs often, but I can't determine what's special about it. It typically hangs at "Recording plot data"; in other words, not during actual job execution, but just at the end. This occurs on any of our slave nodes - nodes that are running other Hudson jobs without a hitch. When I removed plotting, it hung at archiving/fingerprinting instead. I suspect one of the code analysis plugins we run at the end (e.g. FindBugs) might be responsible. If you believe this is of interest to the SVN polling issue, I'll be happy to provide more detailed information.

            sweeney Tony Sweeney added a comment - - edited

            We get this multiple times per day. The only solution is a restart of the master server, losing any builds currently in progress. This is making Hudson damn near unusable. Would the guy who made the master-only polling fix be prepared to make it available here?

            lkishalmi lkishalmi added a comment -

            We are also fighting with this issue; however, we are using Perforce as our SCM, so I think this issue goes down to the core of Hudson master-slave communication. Unfortunately, it seems there is only one reliable slave configuration for Hudson: an SSH slave on Solaris. It is the only configuration that has never failed so far. We have a Linux master with several Windows slaves, plus one Linux, one Mac, and one Solaris slave.

            In order to improve the stability of your system, you might configure your slaves to start on demand. With this, we have been running without this issue for a few days now.

            lkishalmi lkishalmi added a comment -

            Changed the component and the title as it seems this issue is not just related to Subversion.

            lkishalmi lkishalmi made changes -
            Summary: SVN polling on slaves getting hung → SCM polling on slaves getting hung
            Component/s: master-slave [ 15489 ]
            Component/s: subversion [ 15485 ]
            jpadmana jpadmana added a comment -

            We are seeing the same issue with Hudson 1.353 and Subversion.

            Current SCM Polling Activities
            There are more SCM polling activities scheduled than handled, so the threads are not keeping up with the demands. Check if your polling is hanging, and/or increase the number of threads if necessary.

            The following polling activities are currently in progress:

            ericguz ericguz added a comment -

            We are currently seeing this on an almost nightly basis. Both with Hudson 1.355 and 1.360 using Perforce 2008.2 for SCM. Master and Slave are all running on Solaris 10 i86.

            ki82 Christian Bremer added a comment -

            We are also seeing this problem a couple times a day. We are using Hudson 1.355 and ClearCase Plugin 1.2.
            Any workaround or fix would be highly appreciated!!

            jpshackelford jpshackelford added a comment - - edited

            We are seeing this or a related problem as well. Linux / Perforce / Hudson ver. 1.361. See thread_dump_02.txt and hung_scm_pollers_02.png. It would be great if we could at least have an option of killing the threads without bouncing Hudson.

            jpshackelford jpshackelford made changes -
            Attachment: thread_dump_02.txt [ 19535 ]
            Attachment: hung_scm_pollers_02.PNG [ 19536 ]
            jpshackelford jpshackelford added a comment - - edited

            I have just noticed, after bouncing the server and watching 180 jobs run through while monitoring http://<my server>/descriptor/hudson.triggers.SCMTrigger/, that the polling threads associated with the thread dump message "SCM polling for hudson.maven.MavenModuleSet..." seem to show up as running for the whole time it takes the build to run, but I don't see this on other job types. Same thing when I look at the page http://<my server>/job/<my job>/scmPollLog/?. The first line reads "Started on Jun 28, 2010 11:51:03 AM", and then I see nothing else until the build completes 30 minutes later. Then the rest of the log shows:

            Looking for changes...
            Using remote perforce client: icdudrelgapp_releng_ease_producer--1152796861
            [EASE - Producer] $ /hosting/bin/p4 workspace -o icdudrelgapp_releng_ease_producer--1152796861
            Saving modified client icdudrelgapp_releng_ease_producer--1152796861
            [EASE - Producer] $ /hosting/bin/p4 -s client -i
            Last sync'd change was 420598
            [EASE - Producer] $ /hosting/bin/p4 changes -m 2 //icdudrelgapp_releng_ease_producer--1152796861/...
            Latest submitted change selected by workspace is 420598
            Assuming that the workspace definition has not changed.
            Done. Took 0.57 sec
            No changes
            

            Perforce wasn't hung: I was also monitoring the commands processed by Perforce, and they were flying through. I haven't dug into the source yet, but I don't understand why it looks like the poller is tied up for 30 mins while the build runs, though Perforce would have easily processed the request in milliseconds.

            carlspring carlspring added a comment -

            I had the same issue with around 150 modules.
            The moment I added SVN hook-based build triggering, it went away. Some other version control systems support this as well.

            carlspring carlspring added a comment -

            I stand corrected.
            Even THIS didn't solve the problem, as it just recurred!

            Hudson 1.364.

            vjuranek vjuranek added a comment -

            Reading the discussion, it seems to me that at least in some cases (when the slave is stuck and appears to be online even after being disconnected) this is a manifestation of JENKINS-5977.
            By the way, stuck SCM polling threads can easily be killed via the Groovy console (at least it works for me).

            carlspring carlspring added a comment -

            Reducing the polling to once every half hour and setting up SVN hooks helped solve the issue, although I am not fully sure it has gone away.

            If it's possible to kill the hung poller threads, I would personally recommend adding a poll response timeout mechanism to Hudson.

            Apparently a lot of people with a large number of projects are experiencing this, and I think it should be addressed with high priority.

            lkishalmi lkishalmi made changes -
            Link: This issue is related to JENKINS-5760 [ JENKINS-5760 ]
            hjhafner Hans-Juergen Hafner added a comment - - edited

            @vjuranek

            Could you please give me an example of how to kill stuck SCM polling threads via the Groovy console?
            (I'm just a rookie in Groovy.)
            BR,
            Hans-Jürgen

            vjuranek vjuranek added a comment -

            @hjhafner
            A very primitive Groovy script (I'm too lazy to develop something better, as this is not a very important issue) is below. It may happen that it also kills SCM polling threads which are not stuck, but we run this script automatically only once a day, so it doesn't cause any trouble for us. You can improve it, e.g. by saving the ids and names of the SCM polling threads, checking again after some time, and killing only the threads whose ids are on the list from the previous check.

            Thread.getAllStackTraces().keySet().each() { item ->
                if (item.getName().contains("SCM polling") && item.getName().contains("waiting for hudson.remoting")) {
                    println "Interrupting thread " + item.getId();
                    item.interrupt()
                }
            }
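            The two-pass refinement described above could be sketched like this (in Java rather than Groovy; the class and method names are hypothetical, not part of Jenkins). It records the ids of suspect poller threads, waits out a grace period in which a healthy poll should finish, and interrupts only the threads that are still stuck:

```java
// Sketch of the two-pass kill: only interrupt SCM polling threads that were
// already stuck before the grace period AND are still stuck after it.
import java.util.HashSet;
import java.util.Set;

public class StuckPollerKiller {

    // Ids of threads that look like SCM pollers blocked on a remoting call.
    static Set<Long> suspectIds() {
        Set<Long> ids = new HashSet<>();
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            String name = t.getName();
            if (name.contains("SCM polling") && name.contains("waiting for hudson.remoting")) {
                ids.add(t.getId());
            }
        }
        return ids;
    }

    static int interruptPersistentlyStuck(long graceMillis) throws InterruptedException {
        Set<Long> before = suspectIds();
        Thread.sleep(graceMillis); // a healthy poll should complete in this window
        Set<Long> after = suspectIds();
        int interrupted = 0;
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if (before.contains(t.getId()) && after.contains(t.getId())) {
                System.out.println("Interrupting thread " + t.getId());
                t.interrupt();
                interrupted++;
            }
        }
        return interrupted;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Interrupted " + interruptPersistentlyStuck(60_000) + " stuck pollers");
    }
}
```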

            hjhafner Hans-Juergen Hafner added a comment -

            @vjuranek
            Thanks a lot!
            The script worked very well. (With one small change: a ";" was missing before item.interrupt().)

            erwan_q erwan_q added a comment -

            I have the same issue every week when I restart the master. The slave is unusable (SCM polling stuck, failure to join the slave, ...). The workaround is to disconnect, kill the slave manually, and restart the slaves. It would be great to have a batch script to implement this restart action at reboot time.

            lkishalmi lkishalmi added a comment -

            JENKINS-5977 seems to be the root cause of this issue.
            Try to upgrade to 1.380+

            dty Dean Yu added a comment -

            I've just applied the patch to perform polling on the master for the Subversion plugin. Apologies to everyone who was waiting for this patch. Updating the plugin wiki with instructions.

            eguess74 eguess74 added a comment - - edited

            I just started to experience this problem.
            We have two instances of Jenkins running. One of them started to have this polling error. It is not only that the polling gets stuck; the CPU is also overloaded for no reason. I restarted the server, but it came back to this error state in no time.
            I have changed the concurrent poll count from 5 to 10 and restarted again to kill the hung polls. Watching...

            We are using Jenkins 1.399 + Git plugin 1.1.5, with no slaves involved.
            We have about 450 jobs with the polling interval set to 10 min.

            eguess74 eguess74 added a comment - - edited

            BTW, the script provided by vjuranek didn't work for me; it threw a MissingMethodException.
            But I found the ability to kill threads from the GUI using the Monitoring plugin! The thread details section has a nice little kill button.

            eguess74 eguess74 added a comment -

            For the record:
            I was able to narrow it down to three jobs that were consistently getting stuck on the polling/fetching step.
            I tried different approaches, but the only thing that actually resolved the problem was to recreate those jobs from scratch. I.e., I blew away all related folders and the workspace and recreated the job. This brought the CPU usage down, and there have been no stuck threads for a full day now...

            stephenconnolly Stephen Connolly added a comment -

            CloudBees have raised http://issues.tmatesoft.com/issue/SVNKIT-15
            tmielke Thomas Mielke added a comment - - edited

            @vjuranek, @Hans-Juergen Hafner

            My team is using Git, and we started experiencing the problem after adding several extra builds to our Jenkins server. We resolved the issue by running the script vjuranek provided as a cron job:

            cron
            0 * * * *       /var/lib/hudson/killscm.sh
            
            # cat /var/lib/hudson/killscm.sh 
            java -jar /var/lib/hudson/hudson-cli.jar -s http://myserver:8090/ groovy /var/lib/hudson/threadkill.groovy
            
            # cat /var/lib/hudson/threadkill.groovy 
            Thread.getAllStackTraces().keySet().each() { 
            	item ->
            	if (item.getName().contains("SCM polling") && item.getName().contains("waiting for hudson.remoting")) { 
            		println "Interrupting thread " + item.getId(); 
            		item.interrupt() 
            	}
            }
            

            Since running these scripts, our nightly builds haven't hung for the last 5 consecutive days.

            fredp06fr Frederic Pesquet added a comment -

            We are still experiencing this (with 1.420 and 1.414).
            The workaround of killing SCM threads periodically does not work for me, and I'm not sure why (blocked threads are not killed by the scripts, and cannot be killed by the monitoring plugin either).
            We have a large number of jobs (>1000). Hudson is blocking every day, and the only way to unlock it is to restart it.
            The issue does not seem to be specific to one SCM: we are using SVN and Git. When I tried to implement the workaround to poll from the master with SVN (hudson.scm.SubversionSCM.pollFromMaster), the blocking occurred on the Git polling.
            There are 45 voters on this issue, so I guess I'm not alone here... Can we raise the priority of this? It seems a real core issue.

            carlspring carlspring added a comment -

            We used to hit this while we were still using Hudson (ca. 1.3xx). We also have a large number of jobs (300-400). We haven't run into this since we moved to Jenkins.

            (Just a suggestion.)

            fredp06fr Frederic Pesquet added a comment -

            We are already on Jenkins...
            More info:
            We also have a medium number of slaves (>25). It is not uncommon for a slave to cease responding temporarily, or to reboot, etc.
            As described in this bug's initial description, "Furthermore, if the slave dies, the locked channel object still exists in the master JVM."
            I guess we are probably experiencing something like this. SCM polling getting hung is just the most obvious symptom here.

            lars_kruse Lars Kruse made changes -
            Description updated (duplicate of the issue description above omitted)
            Hide
            lars_kruse Lars Kruse added a comment -

            Have a look here for a step-by-step description of the workaround: http://howto.praqma.net/hudson/jenkins-5413-workaround

            jstruck Jes Struck made changes -
            Assignee Jes Struck [ jstruck ]
            jstruck Jes Struck added a comment -

            I just started a study of why this happens. If anyone has a bulletproof scenario to force this behavior, please do tell, because I cannot reproduce it consistently.

            jstruck Jes Struck made changes -
            Status Open [ 1 ] In Progress [ 3 ]
            jstruck Jes Struck made changes -
            Status In Progress [ 3 ] Open [ 1 ]
            jstruck Jes Struck made changes -
            Assignee Jes Struck [ jstruck ]
            jstruck Jes Struck added a comment -

            We had some issues in our SCM plugin that resulted in the polling thread hanging.
            We had three things:
            1) Our poll log, sent between our slaves and the master, gave us multiple threads writing to the same appender.
            2) We also saw that some uncaught exceptions resulted in hanging threads.
            3) We have also seen threads hang because of a field that was declared transient but should not have been.

            After we solved these issues in our plugin we have not seen threads hang anymore,
            and therefore we can't reproduce this scenario.

            vgrigoruk Vitalii Grygoruk added a comment -

            Can you share the fixed version of the plugin for testing?

            jstruck Jes Struck added a comment -

            We are doing stress tests on the fixed version, and expect to release the new version of our plugin Wednesday or Thursday.

            wolfgang Christian Wolfgang added a comment -

            Hello.

            We have released our plugin, the source can be found at https://github.com/jenkinsci/clearcase-ucm-plugin.

            Some of the things that made the polling hang on the slaves were mainly uncaught exceptions. If they weren't caught, they sometimes resulted in slaves hanging.
            And, as Jes wrote, we experienced a transient SimpleDateFormat causing the slaves to hang, and this only happened in the polling phase.

            Our main problem, which does not only concern polling, is cleartool, which from time to time stops working for many reasons.
            It exits with the error message "albd_contact call failed: RPC: Unable to receive; errno = [WINSOCK] Connection reset by peer",
            but it never returns control to the master, which results in slaves hanging.

            Our plugin is currently only available for the Windows platform and we've had a lot of issues with the desktop heap size.
            ClearCase needs a larger desktop heap than the default setting (512 KB), but setting it to a larger value decreases the number of simultaneous desktops,
            which sometimes caused the slave OS to freeze. If the value is too low, cleartool sometimes fails with the winsock error mentioned before.

            The conclusion: make sure thrown exceptions are caught and serializable classes have proper transient fields.

            I am not saying this is bulletproof, but as far as our tests go, we haven't experienced the issue yet.

            Our test setup is 15 jobs polling every minute using one slave with two executors. ClearCase crashes before anything else happens.

            mabahj Markus made changes -
            Link This issue is related to JENKINS-12302 [ JENKINS-12302 ]
            mabahj Markus added a comment -

            We've been using the Groovy script suggested above for some time to avoid this problem, but as of 1.446 the script fails with "Remote call on CLI channel from /[ip] failed" (JENKINS-12302). Anyone else having that problem?

            franck Franck Gilliers added a comment -

            Hello,

            I am experiencing the same issue.
            In case it helps, here is my configuration:

            master : linux - jenkins 1.448
            slaves : windows XP and seven via a service - msysgit 1.7.4
            plugin git : 1.1.15
            projects ~ 100

            I trigger builds via push notification as described in git plugin 1.1.14. But, as it does not always trigger the build (I do not know why), I keep SCM polling every two hours.
            To avoid the hanging, I restart the server and reboot the slaves every night. During the day, I have to kill the child git processes to free the slave.

            wolfgang Christian Wolfgang added a comment -

            I see two branches of this issue:

            1) The fact that slaves get hung (or threads wind up waiting for lengthy polling) and how the master Jenkins instance should handle this, and
            2) How to prevent slaves from getting hung.

            The initial issue suggests 1), but some of the replies suggest 2).
            I guess both are valid issues. Should they be treated as one, or should this issue be split in two?

            zeph Guido Serra added a comment -

            Hi, I got the same issue:

            • Jenkins GIT 1.1.16
            • Slave: Windows 7, msysgit (Git-1.7.9-preview20120201.exe)

            After I moved the SCM from SVN to Git, polling/building stopped working.

            This is how I got the Windows machine able to check out git with ssh/publickey: http://guidoserra.it/archivi/2012/03/22/jenkins-msysgit-publickey/

            p.s. I was even thinking of using Fisheye to trigger the build on code change detection

            wolfgang Christian Wolfgang added a comment -

            We have solved our problems now.

            It turned out that the underlying framework for the plugin threw RuntimeExceptions which were not always caught. After we handled those exceptions, the slaves stopped hanging.

            jhansche Joe Hansche added a comment -

            Christian Wolfgang: by "plugin" you're referring to the clearcase plugin, right? So that was the issue with your plugin, but not necessarily the issue with the slaves hanging in general? Although potentially related, I guess?

            So if the SCM polling plugin raises a RuntimeException, the slave thread will die off without notifying the master, and therefore the master continues waiting for it to finish, even though it never will?

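            The failure mode Joe describes — a worker dying from an uncaught exception without ever sending back a response, leaving the master waiting forever — can be shown with a minimal JVM-level sketch (plain Java, not Jenkins remoting; the class and method names here are illustrative only). The fix mirrors what the plugin authors describe: catch everything and always complete the response, so the hang becomes an ordinary failure.

            ```java
            import java.util.concurrent.*;

            // A worker that dies from an uncaught exception before writing its response
            // leaves the caller waiting forever on get(); a worker that catches Throwable
            // and always answers turns the hang into an ordinary, visible failure.
            public class PollResponseDemo {
                static String doPolling() {
                    throw new RuntimeException("failure during polling");
                }

                // guarded=false mimics a slave-side thread dying silently;
                // guarded=true always completes the response, even on failure.
                static Future<String> poll(boolean guarded) {
                    CompletableFuture<String> response = new CompletableFuture<>();
                    Thread worker = new Thread(() -> {
                        try {
                            response.complete(doPolling());
                        } catch (Throwable t) {
                            if (guarded) response.completeExceptionally(t);
                            // unguarded: the worker dies and the response is never completed
                        }
                    });
                    worker.setDaemon(true);
                    worker.start();
                    return response;
                }

                public static void main(String[] args) throws Exception {
                    try {
                        poll(false).get(200, TimeUnit.MILLISECONDS);
                    } catch (TimeoutException e) {
                        System.out.println("unguarded: caller is still waiting (hang)");
                    }
                    try {
                        poll(true).get(200, TimeUnit.MILLISECONDS);
                    } catch (ExecutionException e) {
                        System.out.println("guarded: " + e.getCause().getMessage());
                    }
                }
            }
            ```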
            dark_lx Alex Lorenz made changes -
            Summary SCM polling on slaves getting hung SCM polling getting hung
            Priority Major [ 3 ] Critical [ 2 ]
            dark_lx Alex Lorenz added a comment -

            This does not only happen on slaves, but also on single machine Jenkins systems.
            With us here at TomTom, it happens regularly and makes us lose valuable builds.

            Escalate -> Critical

            wolfgang Christian Wolfgang added a comment -

            Joe: Yes, the ClearCase UCM plugin. We saw slaves hang when there were uncaught runtime exceptions, so the master's polling thread is never joined.

            mandeepr Mandeep Rai added a comment - - edited

            I modified the script a little bit:

            import jenkins.model.Jenkins  // explicit import; some environments fail with "No such property: Jenkins" without it

            Jenkins.instance.getTrigger("SCMTrigger").getRunners().each()
            {
              item ->
              println(item.getTarget().name)
              println(item.getDuration())
              println(item.getStartTime())
              long millis = Calendar.instance.time.time - item.getStartTime()
            
              if(millis > (1000 * 60 * 3)) // 1000 millis in a second * 60 seconds in a minute * 3 minutes
              {
                Thread.getAllStackTraces().keySet().each()
                { 
                  tItem ->
                  if (tItem.getName().contains("SCM polling") && tItem.getName().contains(item.getTarget().name))
                  { 
                    println "Interrupting thread " + tItem.getName(); 
                    tItem.interrupt()
                  }
                }
              }
            }
            
            lacostej lacostej added a comment -

            I encountered a very similar issue, yet I have a slightly different setup:

            • 1 master 1 slave
            • yet the polling was stuck on the master only
            • SCM polling hanging (warning displayed in Jenkins configure screen). Oldest hanging thread is more than 2 days old.
            • it seems it all started with a Unix process that somehow never returned:
             ps -aef| grep jenkins 
              300 12707     1   0 26Nov12 ??       1336:33.09 /usr/bin/java -Xmx1024M -XX:MaxPermSize=128M -jar /Applications/Jenkins/jenkins.war
              300 98690 12707   0 Sat03PM ??         0:00.00 git fetch -t https://github.com/jenkinsci/testflight-plugin.git +refs/heads/*:refs/remotes/origin/*
              300 98692 98690   0 Sat03PM ??         4:39.72 git-remote-https https://github.com/jenkinsci/testflight-plugin.git https://github.com/jenkinsci/testflight-plugin.git
                0  3371  3360   0  8:20PM ttys000    0:00.02 su jenkins
              300  4017  3372   0  8:52PM ttys000    0:00.00 grep jenkins
                0 10920 10896   0 19Nov12 ttys001    0:00.03 login -pfl jenkins /bin/bash -c exec -la bash /bin/bash
            

            Running Jenkins 1.479

            I killed the processes and associated threads, and it started being better.

            Doesn't polling enforce timeouts?

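            The question about timeouts is a fair one: nothing in the reported stacks bounds the wait. A generic JVM-level sketch (plain Java, not Jenkins code; names are illustrative) of enforcing a hard timeout around a polling call looks like this. Note the caveat: f.cancel(true) only helps if the underlying call is interruptible — a wedged native git or cleartool child process additionally has to be killed at the OS level, as several commenters here do.

            ```java
            import java.util.concurrent.*;

            // Bound a potentially-hanging polling call with a hard timeout,
            // cancelling (interrupting) the worker if it does not return in time.
            public class BoundedPoll {
                static String pollWithTimeout(Callable<String> poll, long timeoutMs) throws Exception {
                    ExecutorService ex = Executors.newSingleThreadExecutor();
                    Future<String> f = ex.submit(poll);
                    try {
                        return f.get(timeoutMs, TimeUnit.MILLISECONDS);
                    } catch (TimeoutException e) {
                        f.cancel(true); // interrupt the stuck poll instead of waiting forever
                        return "polling timed out";
                    } finally {
                        ex.shutdownNow();
                    }
                }

                public static void main(String[] args) throws Exception {
                    // A "poll" that never returns, standing in for a wedged git fetch.
                    System.out.println(pollWithTimeout(() -> { Thread.sleep(60_000); return "no changes"; }, 200));
                    // prints: polling timed out
                }
            }
            ```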
            lacostej lacostej made changes -
            Attachment DUMP1.txt [ 23052 ]
            cmbasics Raja Aluri added a comment -

            For people who are on Windows and want to set up a scheduled task, here is a one-liner in PowerShell.

            PS C:\Users\jenkins> tasklist /FI "IMAGENAME eq ssh.exe" /FI "Status eq Unknown" /NH | %{ $_.Split(' *',[StringSplitOptions]"RemoveEmptyEntries")[1]}  |ForEach-Object {taskkill /F /PID $_}
            
            hx_unbanned Linards L added a comment - - edited

            Have not noticed this for a long time now. In the v1.48x series I had similar problems, but then they disappeared. We also did some scheduled reboots back then... now using v1.494.

            dwseiber Derek Seibert added a comment - - edited

            Our team encounters this issue almost daily using the Dimensions SCM plugin. We run a single-instance Jenkins server which polls a Stream every 30 minutes. I wanted to comment just to explain how we first noticed the issue was occurring, in case anybody searching for this issue starts in the same place.

            We select the Dimensions Polling Log for our job and see the following at maybe 9AM or 10AM. The polling has hung at this point and we need to restart our application server.

            Started on Mar 25, 2013 8:30:35 AM
            

            We expect to see something like.

            Started on Mar 25, 2013 8:30:35 AM
            Done. Took 19 sec
            No changes
            

            This is why this issue is so troubling. There is no notification trigger when "Started on..." has just been sitting there hung for a while, and no further polling can be done by that job without a restart of the application server.

            lacostej lacostej added a comment -

            Derek,

            Not sure if the Dimensions plugin is using a native call under the hood.

            Could you take a thread dump and/or a list of processes?

            J

            mdelapenya Manuel de la Peña added a comment -

            We are using "Github Pull Request Builder" plugin and we encounter this issue daily :S

            jglick Jesse Glick made changes -
            Link This issue is related to JENKINS-19055 [ JENKINS-19055 ]
            frozen_man Brian Smith added a comment -

            We haven't seen this issue in quite a while. Just recently I have seen it again.

            The only difference of note is that for the past several months we have been specifically not renaming jobs (we have instead been creating new jobs with the new name using the old job to copy from, then deleting the old job) as renaming jobs seemed to cause things to not be "stable".

            Could this be related? Maybe there is a race condition when the name is being changed and the polling activity is going on? Just a thought.

            dmaslakov Dmitry Maslakov added a comment - - edited

            Just got this error after upgrade from 1.556 to 1.558.

            Using the suggested scripts to kill hung threads did not help; they start again and hang.

            Using VisualVM I took a thread dump, and here is the thread that has been hung for more than 8 hours:

            "SCM polling for hudson.maven.MavenModuleSet@4f6ded0d[project-name]" - Thread t@357
               java.lang.Thread.State: WAITING
            	at sun.misc.Unsafe.park(Native Method)
            	- parking to wait for <28ff58dd> (a java.util.concurrent.FutureTask)
            	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
            	at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:425)
            	at java.util.concurrent.FutureTask.get(FutureTask.java:187)
            	at hudson.remoting.Request.call(Request.java:157)
            	- locked <2de9b3db> (a hudson.remoting.UserRequest)
            	at hudson.remoting.Channel.call(Channel.java:722)
            	at hudson.scm.SubversionSCM.compareRemoteRevisionWith(SubversionSCM.java:1451)
            	at hudson.scm.SCM._compareRemoteRevisionWith(SCM.java:356)
            	at hudson.scm.SCM.poll(SCM.java:373)
            	at hudson.model.AbstractProject._poll(AbstractProject.java:1490)
            	at hudson.model.AbstractProject.poll(AbstractProject.java:1399)
            	at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:462)
            	at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:491)
            	at hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:118)
            	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
            	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
            	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
            	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
            	at java.lang.Thread.run(Thread.java:744)
            
               Locked ownable synchronizers:
            	- locked <70cbc89f> (a java.util.concurrent.ThreadPoolExecutor$Worker)
            

            The project being polled is a Maven project. The Maven plug-in was also upgraded (probably the culprit) from 2.1 to 2.2.

            dmaslakov Dmitry Maslakov added a comment -

            Finally I found 3 projects with hung SCM polling; two of them are Maven projects.

            As a workaround I (a) renamed the job and created a new one as a copy of the old (as mentioned above); the job history was lost, but that was not critical in my case; (b) reverted the Maven plugin to version 2.1; (c) restarted Jenkins.

            ndirons Nathaniel Irons added a comment -

            I see this same bug, using a Jenkins 1.565 machine with 14 git jobs, about 2/3 of which use randomized ten-minute polling ("H/10 * * * *"). The machine sits behind an inflexible corporate firewall, and the git repo is hosted outside the firewall, so there is no opportunity to use per-commit notifications, much as I'd like to.

            I started noticing missed builds late last week, when we were using Jenkins 1.559. Now that I know what I'm looking for, I start seeing stuck jobs reported by hudson.triggers.SCMTrigger within an hour or two of restarting the daemon. We've been adding 1-2 jobs a week for a couple of months, and seem to have hit some kind of tipping point.

            The Groovy scripts that people have posted in the past appear to be ineffective. This one:

            Thread.getAllStackTraces().keySet().each { item ->
                if (item.getName().contains("SCM polling") && item.getName().contains("waiting for hudson.remoting")) {
                    println "Interrupting thread " + item.getId()
                    item.interrupt()
                }
            }
            

            ... claims to be interrupting the right SCM polling threads, and returns success, but the stuck threads persist, as reported by hudson.triggers.SCMTrigger. The longer script, starting with "Jenkins.instance.getTrigger" fails with "FATAL: No such property: Jenkins for class: Script1".

            Jenkins' warning message says, "Check if your polling is hanging, and/or increase the number of threads if necessary", but as far as I can determine there is no way to increase the number of threads in current versions of Jenkins. Is that really the case?

            Thanks for your time.

            yenchiugu YenChiu Ku added a comment -

            Nathaniel,

            Add "import jenkins.model.Jenkins" at the beginning. I think it will solve the "FATAL: No such property: Jenkins for class: Script1" issue.

            ndirons Nathaniel Irons added a comment -

            Thanks. Adding that line does fix the execution error. However, the full script, while it also reports successful-looking output, fails to interrupt any threads.

            [EnvInject] - Loading node environment variables.
            Building in workspace /Users/sbuxagent/Jenkins/Home/jobs/Zap Polling Threads/workspace
            android-malaysia
            1 day 16 hr
            1404809940627
            Interrupting thread SCM polling for hudson.matrix.MatrixProject@2e2cb699[android-malaysia]
            ios-hongkong
            1 day 16 hr
            1404810120578
            Interrupting thread SCM polling for hudson.matrix.MatrixProject@128e42d[ios-hongkong]
            ios-china
            1 day 16 hr
            1404809940628
            Interrupting thread SCM polling for hudson.matrix.MatrixProject@68f60dc8[ios-china]
            Script returned: [hudson.triggers.SCMTrigger$Runner@2e2cb699, hudson.triggers.SCMTrigger$Runner@128e42d, hudson.triggers.SCMTrigger$Runner@68f60dc8]
            Finished: SUCCESS
            

            I can run the script over and over, and it continues to report those same three thread IDs.

            I don't know why, but this stuck-thread problem disappeared for a couple of weeks. Now it's back. I'm going to update to the latest jenkins from 1.565 and see if anything's improved.

            ndirons Nathaniel Irons added a comment -

            About ten hours after updating to Jenkins 1.571 (and increasing the number of polling threads from 4 to 8), I now see four polling threads that have been stuck for a little over eight hours. The big difference is that I used to be able to see which jobs had gotten stuck, but now none of the stuck threads are named: http://cl.ly/image/0W461v33053f

            The same thread-interrupter script which was claiming success in 1.565 (but not actually cleaning up any threads) fails to run at all in 1.571:

            FATAL: No such property: name for class: jenkins.triggers.SCMTriggerItem$SCMTriggerItems$Bridge

            The full stack trace is available at https://gist.github.com/irons/1f804e69c0cd6d0b7f20, and the script, unchanged from last night, is at https://gist.github.com/irons/09090503150e119f7096

            The shorter script, posted above on May 29, continues to execute and return success, but doesn't result in a net reduction of stuck threads. Now that I can no longer tell which jobs are affected, this Jenkins upgrade appears to have deepened the problem.

            danielbeck Daniel Beck added a comment -

            The UI issue described by Nathaniel Irons was likely caused by this commit, when the type was changed without adjusting the polling page to make sure it calls asItem().

            danielbeck Daniel Beck added a comment -

            Proposed a possible fix for the SCMTrigger status page issue described by Nathaniel Irons: https://github.com/jenkinsci/jenkins/pull/1355

            sharon_xia sharon xia added a comment -

            We are also seeing this issue.

            sharon_xia sharon xia added a comment -

            02:36:16 Started by upstream project "echidna-patch-quality" build number 335
            02:36:16 originally caused by:
            02:36:16 Started by command line by xxx
            02:36:16 [EnvInject] - Loading node environment variables.
            02:36:17 Building remotely on ECHIDNA-QUALITY (6.1 windows-6.1 windows amd64-windows amd64-windows-6.1 amd64) in workspace c:\buildfarm-slave\workspace\echidna-patch-compile
            02:36:18 > git rev-parse --is-inside-work-tree
            02:36:19 Fetching changes from the remote Git repository
            02:36:19 > git config remote.origin.url ssh://*@...:*/ghts/ta
            02:36:20 Fetching upstream changes from ssh://*@...:*/ghts/ta
            02:36:20 > git --version
            02:36:20 > git fetch --tags --progress ssh://*@...:/ghts/ta +refs/heads/:refs/remotes/origin/*
            02:56:20 ERROR: Timeout after 20 minutes
            02:56:20 FATAL: Failed to fetch from ssh://*@...:*/ghts/ta
            02:56:20 hudson.plugins.git.GitException: Failed to fetch from ssh://bmcdiags@10.110.61.117:30000/ghts/ta
            02:56:20 at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:623)
            02:56:20 at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:855)
            02:56:20 at hudson.plugins.git.GitSCM.checkout(GitSCM.java:880)
            02:56:20 at hudson.model.AbstractProject.checkout(AbstractProject.java:1414)
            02:56:20 at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:671)
            02:56:20 at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:88)
            02:56:20 at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:580)
            02:56:20 at hudson.model.Run.execute(Run.java:1684)
            02:56:20 at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
            02:56:20 at hudson.model.ResourceController.execute(ResourceController.java:88)
            02:56:20 at hudson.model.Executor.run(Executor.java:231)
            02:56:20 Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --progress ssh://*@...:/ghts/ta +refs/heads/:refs/remotes/origin/*" returned status code -1:
            02:56:20 stdout:
            02:56:20 stderr: Could not create directory 'c/Users/Administrator/.ssh'.
            02:56:20
            02:56:20 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1325)
            02:56:20 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1186)
            02:56:20 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$200(CliGitAPIImpl.java:87)
            02:56:20 at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:257)
            02:56:20 at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:153)
            02:56:20 at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:146)
            02:56:20 at hudson.remoting.UserRequest.perform(UserRequest.java:118)
            02:56:20 at hudson.remoting.UserRequest.perform(UserRequest.java:48)
            02:56:20 at hudson.remoting.Request$2.run(Request.java:326)
            02:56:20 at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
            02:56:20 at java.util.concurrent.FutureTask.run(Unknown Source)
            02:56:20 at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
            02:56:20 at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
            02:56:20 at hudson.remoting.Engine$1$1.run(Engine.java:63)
            02:56:20 at java.lang.Thread.run(Unknown Source)

            danielbeck Daniel Beck added a comment -

            sharon xia: That's a completely different issue. This issue is about polling that NEVER finishes, yours aborts after 20 minutes. It even seems to tell you what the problem is: Could not create directory 'c/Users/Administrator/.ssh'.

            To request further assistance, please ask on the jenkinsci-users mailing list or in #jenkins on Freenode. This thread is long enough already.

            frozen_man Brian Smith added a comment -

            I haven't had this issue since we started doing weekly reboots of the whole system (master and nodes).

            mark3000 mark 3000 added a comment -

            We encountered this issue for the first time (that I'm aware of) after upgrading to 1.583 from 1.578.

            funeeldy marlene cote added a comment -

            We are seeing this too! It is having a huge impact on our productivity!! We too upgraded to 1.583.

            Please help.

            meolsen Morten Engelhardt Olsen added a comment - - edited

            At Atmel we're now managing this issue by running the following system Groovy script every couple of minutes to monitor the processor load:

            import java.lang.management.*;
            
            def threadBean = ManagementFactory.getThreadMXBean();
            def osBean     = ManagementFactory.getOperatingSystemMXBean();
            
            println "\n\n\n[Checking state of (master)]";
            
            println "Current CPU Time used by Jenkins: " + threadBean.getCurrentThreadCpuTime() + "ns";
            
            double processLoad = (osBean.getProcessCpuLoad() * 100).round(2);
            double cpuLoad = (osBean.getSystemCpuLoad() * 100).round(2);
            println "Process CPU Load: " + processLoad + "%";
            println "CPU Load: " + cpuLoad + "%";
            
            if (processLoad < 90) {
              println "\n\n\n === Load is less than 90%, nothing to do ===\n\n\n";
              println "\n\n\n[Done checking: CPU Load: " + cpuLoad + "%]\n\n\n";
              return;
            } else {
              println "\n\n\n === Load is more than 90%, checking for stuck threads! ===\n\n\n";
            }
            
            
            println "\n\n\n[Checking all threads]\n\n\n";
            def threadNum = 0;
            def killThreadNum = 0;
            
            def stacktraces = Thread.getAllStackTraces();
            stacktraces.each { thread, stack ->
              if (thread.getName().contains("trigger/TimerTrigger/check") ) {
                println "=== Interrupting thread " + thread.getName()+ " ===";
                thread.interrupt();
                killThreadNum++;
              }
              threadNum++;
            }
            
            println "\n\n\n[Done checking: " + threadNum + " threads, killed " + killThreadNum + "]\n\n\n";
            
            return; // Suppress groovy state dump

            Note that we had to check for TimerTrigger, not SCM Polling as the original code did. This is currently running on 1.580.2.

            ndirons Nathaniel Irons added a comment -

            The script provided on Jan 13 seems to be solving a different problem. On our instance, we see stuck SCM polling threads even when the CPU load is zero. With three SCM polling processes stuck as of this moment, the thread names reported by Thread.getAllStackTraces() are main, Finalizer, Signal Dispatcher, and Reference Handler.

            I'm pig-ignorant of Groovy, and have yet to figure out where its access to Jenkins thread internals is documented, but previous iterations of scripts that did identify a stuck thread to interrupt were ineffective for us; we've yet to find an effective workaround that doesn't rely on restarting the Jenkins daemon.

            We're using 1.590, and looking to switch to LTS releases as soon as they pass us by.
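
            A plausible reason interrupt() has no visible effect here (a sketch of JVM thread semantics, not a diagnosis of Jenkins internals): Thread.interrupt() can wake a thread that is sleeping or waiting, but it cannot unblock a thread that is waiting to acquire a synchronized monitor, which is exactly the state the issue description predicts when a channel object stays locked. The class and names below are illustrative only:

```java
// Illustrative only: a thread blocked on monitor entry (Thread.State.BLOCKED)
// ignores interrupt(), unlike a thread in the WAITING state.
public class MonitorInterruptDemo {
    private static final Object channelLock = new Object();

    static Thread.State blockedStateAfterInterrupt() throws InterruptedException {
        // "Holder" grabs the lock and never lets go, like a channel
        // stuck in a long synchronous remote call.
        Thread holder = new Thread(() -> {
            synchronized (channelLock) {
                try {
                    Thread.sleep(Long.MAX_VALUE);
                } catch (InterruptedException ignored) {
                }
            }
        });
        holder.setDaemon(true);
        holder.start();
        Thread.sleep(200); // give the holder time to acquire the lock

        // "Poller" blocks trying to enter the same monitor.
        Thread poller = new Thread(() -> {
            synchronized (channelLock) {
                // never reached while the holder keeps the lock
            }
        });
        poller.setDaemon(true);
        poller.start();
        Thread.sleep(200); // let the poller reach the BLOCKED state

        poller.interrupt();  // sets the interrupt flag...
        Thread.sleep(200);   // ...but the thread stays BLOCKED
        return poller.getState();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("poller state after interrupt: " + blockedStateAfterInterrupt());
    }
}
```

            This would be consistent with the original description: Channel.close() is itself synchronized, so even cleanup paths can end up stuck in the same uninterruptible BLOCKED state.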

            ahoffmann Andrew Hoffmann added a comment -

            We are experiencing git polling getting hung as well. We have ~15 jobs that poll every 5 minutes. It gets hung roughly 24 hours after a service restart. We also have the BitBucket pull request builder polling every 5 minutes for another ~15 jobs.

            Jenkins v1.622
            git plugin 2.4.0
            git-client plugin 1.18.0
            bitbucket-pullrequest-builder plugin 1.4.7

            0-30 minutes prior to being hung, I see this exception:

            WARNING: Process leaked file descriptors. See http://wiki.jenkins-ci.org/display/JENKINS/Spawning+processes+from+build for more information
            java.lang.Exception
            	at hudson.Proc$LocalProc.join(Proc.java:329)
            	at hudson.Proc.joinWithTimeout(Proc.java:168)
            	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1596)
            	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1576)
            	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1572)
            	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1233)
            	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$4.execute(CliGitAPIImpl.java:583)
            	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1310)
            	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1261)
            	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1252)
            	at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.getHeadRev(CliGitAPIImpl.java:2336)
            	at hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:583)
            	at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:527)
            	at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:381)
            	at hudson.scm.SCM.poll(SCM.java:398)
            	at hudson.model.AbstractProject._poll(AbstractProject.java:1461)
            	at hudson.model.AbstractProject.poll(AbstractProject.java:1364)
            	at jenkins.triggers.SCMTriggerItem$SCMTriggerItems$Bridge.poll(SCMTriggerItem.java:119)
            	at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:510)
            	at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:539)
            	at hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:118)
            	at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
            	at java.util.concurrent.FutureTask.run(Unknown Source)
            	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
            	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
            	at java.lang.Thread.run(Unknown Source)
            

            I will be happy to provide more configuration details and logs if requested.

            ymartin1040 Yves Martin added a comment - - edited

            While investigating a Subversion SCM polling issue (JENKINS-31192), I found that a global lock on hudson.scm.SubversionSCM$ModuleLocation prevents threads from working concurrently. Is that "big lock" really necessary? Maybe it is possible to reduce the section of code during which the lock is held.
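
            The lock-narrowing suggested above can be sketched generically; the names below are made up for illustration and are not the actual SubversionSCM code. The idea is to do the slow remote work with no lock held and take the lock only while touching shared state:

```java
// Illustrative sketch of shrinking a critical section; not the Jenkins
// implementation.
public class NarrowLockSketch {
    private final Object lock = new Object();
    private long lastSeenRevision;

    // Coarse version: the slow network call happens under the lock,
    // so every other poller queues up behind it.
    public long pollCoarse() {
        synchronized (lock) {
            long rev = fetchRemoteRevision(); // slow I/O while holding the lock
            lastSeenRevision = Math.max(lastSeenRevision, rev);
            return lastSeenRevision;
        }
    }

    // Narrow version: the slow I/O runs lock-free; the lock only guards
    // the shared field, so pollers can overlap their network calls.
    public long pollNarrow() {
        long rev = fetchRemoteRevision(); // slow I/O, no lock held
        synchronized (lock) {
            lastSeenRevision = Math.max(lastSeenRevision, rev);
            return lastSeenRevision;
        }
    }

    // Stand-in for a remote SCM query.
    long fetchRemoteRevision() {
        return 42L;
    }
}
```

            Whether this is safe for SubversionSCM depends on what the lock actually protects, which is exactly the review the comment above is asking for.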

            rtyler R. Tyler Croy made changes -
            Workflow JNJira [ 135502 ] JNJira + In-Review [ 174320 ]
            zxkane Meng Xin Zhu added a comment -

            Still happening on Jenkins LTS 2.19.4.

            My job polls a git repo periodically (every 5 minutes), but the SCM polling can hang indefinitely, with no timeout. Subsequent manual builds of the job are also blocked by the hung polling. It's definitely a critical issue that impacts the usability of Jenkins.

            cloudbees CloudBees Inc. made changes -
            Remote Link This issue links to "CloudBees Internal OSS-902 (Web Link)" [ 18805 ]

              People

              • Assignee:
                Unassigned
                Reporter:
                dty Dean Yu
              • Votes:
                140 Vote for this issue
                Watchers:
                145 Start watching this issue

                Dates

                • Created:
                  Updated: