Jenkins / JENKINS-38764

Nodes allocated inside of parallel() should have their workspaces removed immediately


    • Type: Bug
    • Resolution: Not A Defect
    • Priority: Minor
    • Component: pipeline
    • Labels: None
    • Environment: Jenkins 2.7.4

      I noticed some long-lived agents on ci.jenkins.io starting to reach capacity on their disks. The major culprit seems to be the just-in-time allocated workspaces created by the parallel step.
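
      For context, a minimal Scripted Pipeline sketch of the pattern that produces these directories (the branch names, node labels, and build commands are illustrative, not taken from the affected jobs): each branch that enters a node block inside parallel() gets its own uniquely named workspace on whichever agent it lands on, and that directory is left behind once the block exits.

      parallel(
          linux: {
              node('linux') {
                  // each branch acquires its own executor and its own
                  // workspace directory on the agent it lands on
                  checkout scm
                  sh './build.sh'
              }
          },
          windows: {
              node('windows') {
                  checkout scm
                  bat 'build.bat'
              }
          }
      )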

      From the file system of an agent in question:

      272M	Core_jenkins_PR-2560-SLUUKE4ANV5FD5D67BJ6QJXT7I6A5KK7OK5XKDDLLMGUC2SH3DNA
      4.0K	Core_jenkins_PR-2560-SLUUKE4ANV5FD5D67BJ6QJXT7I6A5KK7OK5XKDDLLMGUC2SH3DNA@tmp
      193M	cture_jenkins-infra_staging-USM6F6JS6HK2JGY2BJ5HZ5TWAVMAIHCFNV6IJD37YMEUECW3O3EQ
      4.0K	cture_jenkins-infra_staging-USM6F6JS6HK2JGY2BJ5HZ5TWAVMAIHCFNV6IJD37YMEUECW3O3EQ@tmp
      35G	fra_infra-statistics_master-BQS7QBCYM7MBZLAZ2RN2ZHFUGONNKIGZDAM3XOWNBMQGUZL7RBLA
      4.0K	fra_infra-statistics_master-BQS7QBCYM7MBZLAZ2RN2ZHFUGONNKIGZDAM3XOWNBMQGUZL7RBLA@tmp
      177M	Infra_jenkins-infra_staging-KTU3HUJ4E475OGVM7LIUJKPX5J7XVRNW3NLLJ3INKWD4D5MXWZZQ
      8.0M	Infra_jenkins-infra_staging-KTU3HUJ4E475OGVM7LIUJKPX5J7XVRNW3NLLJ3INKWD4D5MXWZZQ@2
      4.0K	Infra_jenkins-infra_staging-KTU3HUJ4E475OGVM7LIUJKPX5J7XVRNW3NLLJ3INKWD4D5MXWZZQ@2@tmp
      177M	Infra_jenkins-infra_staging-KTU3HUJ4E475OGVM7LIUJKPX5J7XVRNW3NLLJ3INKWD4D5MXWZZQ@3
      4.0K	Infra_jenkins-infra_staging-KTU3HUJ4E475OGVM7LIUJKPX5J7XVRNW3NLLJ3INKWD4D5MXWZZQ@3@tmp
      4.0K	Infra_jenkins-infra_staging-KTU3HUJ4E475OGVM7LIUJKPX5J7XVRNW3NLLJ3INKWD4D5MXWZZQ@tmp
      446M	Infra_jenkins.io_chapters-4T3WF77ZIQHDHWX7P37VQIP3GVGUSQFI72DJEHW3XB5KHU4YDMDA
      4.0K	Infra_jenkins.io_chapters-4T3WF77ZIQHDHWX7P37VQIP3GVGUSQFI72DJEHW3XB5KHU4YDMDA@tmp
      140M	Infra_jenkins.io_PR-206-DW3QHZOAZG46G4TMSHIMCYPSJILSN3YYNS5BDCFTX4M7IWLTIM6Q
      4.0K	Infra_jenkins.io_PR-206-DW3QHZOAZG46G4TMSHIMCYPSJILSN3YYNS5BDCFTX4M7IWLTIM6Q@tmp
      118M	Packaging_docker_2.19.1-BR2CJ4NUEEYYTSGGHCKZ3NNOYC2B2GVZFHVQL2UQGJQ5IFC4JTSA
      4.0K	Packaging_docker_2.19.1-BR2CJ4NUEEYYTSGGHCKZ3NNOYC2B2GVZFHVQL2UQGJQ5IFC4JTSA@tmp
      118M	Packaging_docker_windows-MKYB4XOWBW253352S4QEAGLPAEDKJ4RFT65WYQCMOT5PZGF5VLTA
      4.0K	Packaging_docker_windows-MKYB4XOWBW253352S4QEAGLPAEDKJ4RFT65WYQCMOT5PZGF5VLTA@tmp
      177M	ra_jenkins-infra_production-4TTENGJWW2V5USAIFAFZPTML4FH6S63TQKHWZDNDDNW4YIHE7VEA
      8.0M	ra_jenkins-infra_production-4TTENGJWW2V5USAIFAFZPTML4FH6S63TQKHWZDNDDNW4YIHE7VEA@2
      4.0K	ra_jenkins-infra_production-4TTENGJWW2V5USAIFAFZPTML4FH6S63TQKHWZDNDDNW4YIHE7VEA@2@tmp
      177M	ra_jenkins-infra_production-4TTENGJWW2V5USAIFAFZPTML4FH6S63TQKHWZDNDDNW4YIHE7VEA@3
      4.0K	ra_jenkins-infra_production-4TTENGJWW2V5USAIFAFZPTML4FH6S63TQKHWZDNDDNW4YIHE7VEA@3@tmp
      4.0K	ra_jenkins-infra_production-4TTENGJWW2V5USAIFAFZPTML4FH6S63TQKHWZDNDDNW4YIHE7VEA@tmp
      172M	re_jenkins-infra_production-QUBFKGKDXOGMFFLPJ3JDIDNVXKBB5QJWU5JSERSBRHCIGLTPB5YA
      172M	re_jenkins-infra_production-QUBFKGKDXOGMFFLPJ3JDIDNVXKBB5QJWU5JSERSBRHCIGLTPB5YA@2
      4.0K	re_jenkins-infra_production-QUBFKGKDXOGMFFLPJ3JDIDNVXKBB5QJWU5JSERSBRHCIGLTPB5YA@2@tmp
      3.1M	re_jenkins-infra_production-QUBFKGKDXOGMFFLPJ3JDIDNVXKBB5QJWU5JSERSBRHCIGLTPB5YA@3
      4.0K	re_jenkins-infra_production-QUBFKGKDXOGMFFLPJ3JDIDNVXKBB5QJWU5JSERSBRHCIGLTPB5YA@3@tmp
      4.0K	re_jenkins-infra_production-QUBFKGKDXOGMFFLPJ3JDIDNVXKBB5QJWU5JSERSBRHCIGLTPB5YA@tmp
      37G	total
      

      I don't think these workspaces serve any useful purpose after the parallel() step has completed, since you cannot browse them or do anything else with them.

      They should be removed to free up disk resources for other jobs on the agent.
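
      As a workaround under the current behaviour, each parallel branch can wipe its own workspace before it releases the executor. A sketch using the core deleteDir() step (node label and build command are illustrative):

      parallel(
          linux: {
              node('linux') {
                  try {
                      checkout scm
                      sh './build.sh'
                  } finally {
                      // deleteDir() recursively removes the current directory,
                      // i.e. this branch's workspace, so the agent's disk is
                      // reclaimed even when the build fails.
                      deleteDir()
                  }
              }
          }
      )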

      Potentially related to JENKINS-11046, JENKINS-34781, and JENKINS-26471.

            Assignee: cloudbees CloudBees Inc.
            Reporter: rtyler R. Tyler Croy
            Votes: 0
            Watchers: 3
