Details

      Description

      Using Perforce plugin 1.3.0

Sometimes, when Perforce polling happens on a slave, it hangs and never finds changes.

      The job's polling log shows only this:

      Started on Aug 10, 2011 10:30:57 AM
      Looking for changes...
      Using node: <snip>
      Using remote perforce client: <snip>--1607756523

The thread dump of the slave in question is below; I don't see anything related to SCM polling:

      Channel reader thread: channel

      "Channel reader thread: channel" Id=9 Group=main RUNNABLE (in native)
      at java.io.FileInputStream.readBytes(Native Method)
      at java.io.FileInputStream.read(FileInputStream.java:199)
      at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
      at java.io.BufferedInputStream.read(BufferedInputStream.java:237)

      • locked java.io.BufferedInputStream@1262043
        at java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2248)
        at java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:2541)
        at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2551)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1296)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
        at hudson.remoting.Channel$ReaderThread.run(Channel.java:1008)

      main

      "main" Id=1 Group=main WAITING on hudson.remoting.Channel@10f6d3
      at java.lang.Object.wait(Native Method)

      • waiting on hudson.remoting.Channel@10f6d3
        at java.lang.Object.wait(Object.java:485)
        at hudson.remoting.Channel.join(Channel.java:758)
        at hudson.remoting.Launcher.main(Launcher.java:418)
        at hudson.remoting.Launcher.runWithStdinStdout(Launcher.java:364)
        at hudson.remoting.Launcher.run(Launcher.java:204)
        at hudson.remoting.Launcher.main(Launcher.java:166)

      Ping thread for channel hudson.remoting.Channel@10f6d3:channel

      "Ping thread for channel hudson.remoting.Channel@10f6d3:channel" Id=10 Group=main TIMED_WAITING
      at java.lang.Thread.sleep(Native Method)
      at hudson.remoting.PingThread.run(PingThread.java:86)

      Pipe writer thread: channel

      "Pipe writer thread: channel" Id=13 Group=main WAITING on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@12f4818
      at sun.misc.Unsafe.park(Native Method)

      • waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@12f4818
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
        at java.lang.Thread.run(Thread.java:662)

      pool-1-thread-545

      "pool-1-thread-545" Id=708 Group=main RUNNABLE
      at sun.management.ThreadImpl.dumpThreads0(Native Method)
      at sun.management.ThreadImpl.dumpAllThreads(ThreadImpl.java:374)
      at hudson.Functions.getThreadInfos(Functions.java:817)
      at hudson.util.RemotingDiagnostics$GetThreadDump.call(RemotingDiagnostics.java:93)
      at hudson.util.RemotingDiagnostics$GetThreadDump.call(RemotingDiagnostics.java:89)
      at hudson.remoting.UserRequest.perform(UserRequest.java:118)
      at hudson.remoting.UserRequest.perform(UserRequest.java:48)
      at hudson.remoting.Request$2.run(Request.java:270)
      at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
      at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
      at java.util.concurrent.FutureTask.run(FutureTask.java:138)
      at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
      at java.lang.Thread.run(Thread.java:662)

      Number of locked synchronizers = 1

      • java.util.concurrent.locks.ReentrantLock$NonfairSync@1e30132

      Finalizer

      "Finalizer" Id=3 Group=system WAITING on java.lang.ref.ReferenceQueue$Lock@103333
      at java.lang.Object.wait(Native Method)

      • waiting on java.lang.ref.ReferenceQueue$Lock@103333
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)

      Reference Handler

      "Reference Handler" Id=2 Group=system WAITING on java.lang.ref.Reference$Lock@191659c
      at java.lang.Object.wait(Native Method)

      • waiting on java.lang.ref.Reference$Lock@191659c
        at java.lang.Object.wait(Object.java:485)
        at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)

      Signal Dispatcher

      "Signal Dispatcher" Id=4 Group=system RUNNABLE

      The master's thread dump has a thread for this hung job showing the polling attempt:

      SCM polling for hudson.model.FreeStyleProject@11fa600[<snip>]

      "SCM polling for hudson.model.FreeStyleProject@11fa600[<snip>]" Id=2213 Group=main TIMED_WAITING on [B@1dbe391
      at java.lang.Object.wait(Native Method)

      • waiting on [B@1dbe391
        at hudson.remoting.FastPipedInputStream.read(FastPipedInputStream.java:173)
        at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
        at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
        at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
      • locked java.io.InputStreamReader@a4139e
        at java.io.InputStreamReader.read(InputStreamReader.java:167)
        at java.io.BufferedReader.fill(BufferedReader.java:136)
        at java.io.BufferedReader.readLine(BufferedReader.java:299)
      • locked java.io.InputStreamReader@a4139e
        at java.io.BufferedReader.readLine(BufferedReader.java:362)
        at com.tek42.perforce.parse.AbstractPerforceTemplate.getPerforceResponse(AbstractPerforceTemplate.java:330)
        at com.tek42.perforce.parse.AbstractPerforceTemplate.getPerforceResponse(AbstractPerforceTemplate.java:292)
        at com.tek42.perforce.parse.Workspaces.getWorkspace(Workspaces.java:54)
        at hudson.plugins.perforce.PerforceSCM.getPerforceWorkspace(PerforceSCM.java:1144)
        at hudson.plugins.perforce.PerforceSCM.compareRemoteRevisionWith(PerforceSCM.java:840)
        at hudson.scm.SCM._compareRemoteRevisionWith(SCM.java:354)
        at hudson.scm.SCM.poll(SCM.java:371)
        at hudson.model.AbstractProject.poll(AbstractProject.java:1305)
        at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:420)
        at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:449)
        at hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:118)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)

      Number of locked synchronizers = 1

      • java.util.concurrent.locks.ReentrantLock$NonfairSync@3ed79

As expected, going to Hudson > Manage shows this message:

      There are more SCM polling activities scheduled than handled, so the threads are not keeping up with the demands. Check if your polling is hanging, and/or increase the number of threads if necessary.

Many people had the same issue with the Subversion plugin, and a feature was added to that plugin to allow polling only on the master.
      https://issues.jenkins-ci.org/browse/JENKINS-5413

We should probably have the same thing in the Perforce plugin: an option to poll only on the master.


            Activity

Jay Spang added a comment -

            Can you clarify how to get around this then? I do have the latest version of the Perforce plugin (1.3.26)

            • The master is Windows, and uses c:\p4\p4.exe
            • The slave is OSX, and uses /usr/bin/p4
            • I have the job configured to run on the OSX slave and use /usr/bin/p4 as the executable.

            If I check "Poll only on master", the job tries to poll by running "/usr/bin/p4" on the Windows master, which obviously fails. If I change the Perforce executable in the job to c:\p4\p4.exe, Polling will work again, but the job immediately fails to sync the workspace (because it tries to run c:\p4\p4.exe on the OSX slave).

Rob Petti added a comment -

You can override the path to P4 in the Node configuration.

1. Make a new Perforce installation in the global Jenkins config, and set it to C:\p4\p4.exe.
2. In the node configuration for your slave, override the path of this installation to point to /usr/bin/p4.
3. In your job configuration, change it to use the new Perforce installation you just set up.

This is very much the same process as when setting up Java or some other utility.

Alexey Larsky added a comment -

Hi Rob,

I've had the same issue for years: after restarting a slave (which does its own polling), polling hangs.
I can't poll from the master because I use specially configured permanent workspaces on the slaves and don't want to create duplicates on the master.
The only way to fix the hang is restarting the master, which is not convenient.
Do you know another way to fix polling, or can this issue be fixed?

Rob Petti added a comment -

            Upgrade to the latest version, switch to the p4-plugin, or use the system groovy script I mentioned above to kill hung polling threads.
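The script itself is not reproduced in this thread, but the general approach is to enumerate live threads from the Jenkins script console and interrupt those whose names carry the "SCM polling for" prefix visible in the master's thread dump above. The following is a minimal, self-contained Java sketch of that idea only; the simulated hung thread and the class name are hypothetical, and a real system Groovy script would additionally need to deal with Jenkins' trigger bookkeeping:

```java
import java.util.ArrayList;
import java.util.List;

public class KillHungPolling {
    // Name prefix Jenkins gives SCM polling threads (see the thread dump above)
    static final String PREFIX = "SCM polling for";

    // Collect all live threads whose names mark them as SCM polling threads
    static List<Thread> findPollingThreads() {
        List<Thread> polling = new ArrayList<>();
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if (t.getName().startsWith(PREFIX)) {
                polling.add(t);
            }
        }
        return polling;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a polling thread hung on a blocking call that never returns
        Thread hung = new Thread(() -> {
            try {
                Thread.sleep(Long.MAX_VALUE);
            } catch (InterruptedException e) {
                // Interrupted: the hung "poll" is abandoned and the thread exits
            }
        }, "SCM polling for hudson.model.FreeStyleProject@11fa600[demo]");
        hung.start();
        Thread.sleep(100); // give the thread a moment to start

        // Interrupt every thread that looks like a stuck poll
        for (Thread t : findPollingThreads()) {
            System.out.println("Interrupting: " + t.getName());
            t.interrupt();
        }
        hung.join(5000);
        System.out.println("hung thread still alive: " + hung.isAlive());
    }
}
```

Interrupting is gentler than the deprecated Thread.stop(): the blocked read or sleep raises an exception in the hung thread, which then unwinds normally and frees the polling slot.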

Alexey Larsky added a comment -

I use version 1.3.33, the latest in LTS. Polling still periodically hangs on slaves.
PS: Thank you, the script is working.


People

• Assignee: Unassigned
• Reporter: brianharris