  Jenkins / JENKINS-27514

Core - Thread spikes in Computer.threadPoolForRemoting leading to eventual server OOM

    Details

    • Type: Epic
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Epic Name:
      Core - Thread spikes in Computer.threadPoolForRemoting

      Description

      This issue has been converted to an EPIC, because it aggregates reports of several independent issues.

      Issue:

      • The Remoting thread pool is widely used across Jenkins: https://github.com/search?q=org%3Ajenkinsci+threadPoolForRemoting&type=Code
      • For starters, not all usages of Computer.threadPoolForRemoting are valid
      • Computer.threadPoolForRemoting has downscaling logic: idle threads are terminated after a 60-second timeout
      • The pool has no thread limit by default, so it may grow without bound until the number of threads kills the JVM or causes an OOM (see the sketch after this list)
      • Some Jenkins use-cases cause burst Computer.threadPoolForRemoting load by design (e.g. Jenkins startup or agent reconnection after an outage)
      • Deadlocks or long waits in the thread pool may also make it grow without bound
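
      For context, the JDK documents Executors.newCachedThreadPool() as equivalent to the unbounded ThreadPoolExecutor below; a cached pool of this shape (which is, to my understanding, what backs Computer.threadPoolForRemoting) creates a new thread for every task that does not find an idle worker, so bursts translate directly into thread spikes:

      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.SynchronousQueue;
      import java.util.concurrent.ThreadPoolExecutor;
      import java.util.concurrent.TimeUnit;

      class CachedPoolShape {
          // Documented JDK equivalent of Executors.newCachedThreadPool():
          // core size 0, effectively no upper bound, idle threads reclaimed after 60 seconds.
          // The SynchronousQueue never holds tasks, so any submission that finds no idle
          // worker spawns a brand-new thread; that is the source of the unbounded spikes above.
          static final ExecutorService CACHED = new ThreadPoolExecutor(
                  0, Integer.MAX_VALUE,
                  60L, TimeUnit.SECONDS,
                  new SynchronousQueue<Runnable>());
      }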

      Proposed fixes:

      • Define usage policy for this thread pool in the documentation
      • Limit the number of threads being created depending on the system scale, and make the limit configurable (256 by default?); see the sketch after the original report below
      • Fix the most significant issues where the thread pool gets misused or blocked
         
        Original report (tracked as JENKINS-47012):

      > After some period of time the Jenkins master will have up to ten thousand or so threads, most of which are Computer.threadPoolForRemoting threads that have leaked. This forces us to restart the Jenkins master.

      > We do add and delete slave nodes frequently (thousands per day per master) which I think may be part of the problem.

      > I thought https://github.com/jenkinsci/ssh-slaves-plugin/commit/b5f26ae3c685496ba942a7c18fc9659167293e43 may be the fix, because stacktraces indicated threads are hanging in the plugin's afterDisconnect() method. I have updated half of our Jenkins masters to ssh-slaves plugin version 1.9, which includes that change, but earlier today we had a master with the ssh-slaves plugin fall over from this issue.

      > Unfortunately I don't have any stacktraces handy (we had to force reboot the master today), but will update this bug if we get another case of this problem. Hoping that by filing it with as much info as I can we can at least start to diagnose the problem.
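
      Regarding the proposed thread limit: a minimal sketch of what a bounded, configurable pool could look like is below. The property name jenkins.remoting.threadPoolMax and the 256 default are illustrative assumptions for this sketch, not the actual core implementation:

      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.LinkedBlockingQueue;
      import java.util.concurrent.ThreadPoolExecutor;
      import java.util.concurrent.TimeUnit;

      class BoundedRemotingPool {
          // Hypothetical system property and default; not the shipped Jenkins behavior.
          private static final int MAX_THREADS =
                  Integer.getInteger("jenkins.remoting.threadPoolMax", 256);

          static ExecutorService create() {
              ThreadPoolExecutor pool = new ThreadPoolExecutor(
                      MAX_THREADS, MAX_THREADS,             // hard cap on pool size
                      60L, TimeUnit.SECONDS,                // keep the existing 60-second downscaling
                      new LinkedBlockingQueue<Runnable>()); // queue bursts instead of spawning threads
              pool.allowCoreThreadTimeOut(true);            // let the pool shrink back to zero when idle
              return pool;
          }
      }

      Queuing bursts instead of spawning threads changes latency behavior, so the cap would need to scale with the size of the installation; that trade-off is part of why the limit should be configurable.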

        Attachments

        1. 20150904-jenkins03.txt
          2.08 MB
        2. file-leak-detector.log
          41 kB
        3. Jenkins_Dump_2017-06-12-10-52.zip
          1.58 MB
        4. jenkins_watchdog_report.txt
          267 kB
        5. jenkins_watchdog.sh
          2 kB
        6. jenkins02-thread-dump.txt
          1.49 MB
        7. support_2015-08-04_14.10.32.zip
          2.17 MB
        8. support_2016-06-29_13.17.36 (2).zip
          3.90 MB
        9. thread-dump.txt
          5.48 MB

          Issue Links

            Activity

            oleg_nenashev Oleg Nenashev added a comment -

            Actually I am wrong. The Cached Thread Pool implementation should be able to terminate threads if they are unused for more than 60 seconds. Probably the threads are being created so intensively that the executor service cannot keep up. I will see if I can at least create logging for that.
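
            One possible way to add that logging would be to wrap the pool's ThreadFactory so each new thread is counted and logged. This is only a sketch of the approach, not what was actually implemented in core:

            import java.util.concurrent.ThreadFactory;
            import java.util.concurrent.atomic.AtomicInteger;
            import java.util.logging.Logger;

            class LoggingThreadFactory implements ThreadFactory {
                private static final Logger LOGGER = Logger.getLogger(LoggingThreadFactory.class.getName());
                private final ThreadFactory delegate;
                private final AtomicInteger created = new AtomicInteger();

                LoggingThreadFactory(ThreadFactory delegate) {
                    this.delegate = delegate;
                }

                @Override
                public Thread newThread(Runnable r) {
                    // Log every thread the pool creates so bursts become visible in the system log.
                    LOGGER.info("Creating remoting pool thread #" + created.incrementAndGet());
                    return delegate.newThread(r);
                }
            }

            // Usage sketch: wrap whatever factory the pool already uses, e.g.
            //   Executors.newCachedThreadPool(new LoggingThreadFactory(Executors.defaultThreadFactory()))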

            integer Kanstantsin Shautsou added a comment - edited

            Stephen Connolly https://issues.jenkins-ci.org/browse/JENKINS-27514?focusedCommentId=306267&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-306267 ? I remember that connect/disconnect has always had issues in the UI, i.e. you try to disconnect, but it still tries to reconnect indefinitely. Maybe related?

            ecliptik Micheal Waltz added a comment -

            This still appears to be an issue, but I figured out what was triggering high CPU and thousands of Computer.threadPoolForRemoting threads with our setup.

            Architecture:

            • 1 Ubuntu 16.04 Master running Jenkins v2.76
            • 6 Ubuntu 16.04 Agents via swarm plugin v3.4
            • Agents are connected to the master via an ELB since they are in multiple AWS regions

            There were a few jobs that used containers to mount a reports volume within the job workspace. The container would generate reports as root:root, and they would appear within $WORKSPACE with those permissions. The Jenkins agent runs as user jenkins and could not remove these files when it tried to clean up $WORKSPACE after each run.

            jenkins@ip-10-0-0-5:~/workspace/automated-matador-pull_requests_ws-cleanup_1504807099908$ ls -l coverage/
            total 6352
            drwxr-xr-x 3 root root    4096 Sep  7 15:51 assets
            -rw-r--r-- 1 root root 6498213 Sep  7 15:51 index.htm
            

            The jobs that wrote these reports were run regularly, on every push and Pull Request to a repository, causing them to build up quickly. On the master thousands of files named atomic*.tmp would start to appear in /var/lib/jenkins

            ubuntu@jenkins:/var/lib/jenkins$ ls atomic*.tmp | wc -l
            6521
            

            and each file would contain hundreds of lines like,

            <detailMessage>Unable to delete &apos;/var/lib/jenkins/workspace/automated-matador-develop-build-on-push_ws-cleanup_1504728487305/coverage/.resultset.json.lock&apos;. Tried 3 times (of a maximum of 3) waiting 0.1 sec between attempts.</detailMessage>
            

            Eventually the Computer.threadPoolForRemoting threads would reach into the thousands and the master CPU would hit 100%. A reboot would temporarily fix it, but CPU would jump again until all the /var/lib/jenkins/atomic*.tmp files were removed.

            We resolved the issue by running chown jenkins:jenkins on the report directories created by the containers in a job, so there are no longer "Unable to delete" errors or atomic*.tmp files created. We haven't seen a CPU or Computer.threadPoolForRemoting spike in the two weeks since doing this.

            Hopefully this helps anyone else who may be experiencing this issue and provides some guidance on its root cause.

            oleg_nenashev Oleg Nenashev added a comment -

            Micheal Waltz Ideally I need a stacktrace to confirm what causes it, but I am pretty sure it happens due to the workspace cleanup. Jenkins-initiated workspace cleanup happens in bursts and it uses Remoting thread pool for sure, so it may cause such behavior.
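
            Until such a stacktrace is available, a quick way to quantify the spike is to count live threads by name prefix, for example with the small standalone check below. This is only a diagnostic sketch; it confirms the symptom, not the cause:

            import java.util.Map;

            class RemotingThreadCounter {
                public static void main(String[] args) {
                    int count = 0;
                    // Thread.getAllStackTraces() returns a snapshot of every live thread and its stack.
                    for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
                        if (e.getKey().getName().startsWith("Computer.threadPoolForRemoting")) {
                            count++;
                            StackTraceElement[] stack = e.getValue();
                            // The top frame shows where each thread is stuck (e.g. afterDisconnect()).
                            System.out.println(e.getKey().getName() + " -> "
                                    + (stack.length > 0 ? stack[0] : "<no frames>"));
                        }
                    }
                    System.out.println("Computer.threadPoolForRemoting threads: " + count);
                }
            }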

            Regarding this ticket, I am going to convert it to an EPIC. The Remoting thread pool is a shared resource in the system, and it may be consumed by various things. I ask everybody to re-report their cases under the EPIC.

            oleg_nenashev Oleg Nenashev added a comment -

            The original ticket has been cross-posted as JENKINS-47012. In this EPIC I will be handling only issues related to the Jenkins Core, SSH Slaves Plugin and Remoting. Issues related to Remoting thread pool use and misuse by other plugins are separate.


              People

              • Assignee:
                Unassigned
              • Reporter:
                cboylan Clark Boylan
              • Votes:
                13
              • Watchers:
                28
