Jenkins / JENKINS-26008

Use of compression during copying of artifacts cut throughput by a factor of 3


    • Type: Improvement
    • Resolution: Fixed
    • Priority: Major
    • Component: core
    • Environment: Jenkins 1.565
      jdk-1.7.0_67 (master and slave)
      Custom Linux running 3.10.59 64 bit (master and slave)
    • Released As: jenkins-2.196

      We have 19GB of tarballs and isos that are built on a Linux slave. The pulling of artifacts to the master takes 33 minutes over a local network. I modified the Jenkins source to not use compression and reran the job; the job took 9 minutes. This mirrors what happens if you use tar and ssh from the command line with the same machines.

      My test method was a Jenkins job that did nothing but archive the artifacts. I pre-populated the workspace with the data that I wanted archived.

      To turn off compression I modified the function copyRecursiveTo in FilePath.java (core/src/main/java/hudson). For my test I changed the 4 lines in that function that referenced TarCompression.GZIP to use TarCompression.NONE.
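
      For illustration, a rough standalone sketch of the kind of comparison described above, using the public FilePath API to tar a directory with TarCompression.GZIP versus TarCompression.NONE. This is not the actual patch: it needs jenkins-core on the classpath, and the class name, glob, and workspace path are placeholders.

      import hudson.FilePath;
      import hudson.FilePath.TarCompression;

      import java.io.File;
      import java.io.OutputStream;
      import java.nio.file.Files;

      /** Rough benchmark of the tar step used when copying artifacts: stream the
          workspace as a tar archive with and without gzip and compare wall time. */
      public class TarCompressionBenchmark {
          public static void main(String[] args) throws Exception {
              // Point this at a directory full of isos/tarballs (placeholder path).
              FilePath workspace = new FilePath(new File(args[0]));

              for (TarCompression mode : new TarCompression[] { TarCompression.GZIP, TarCompression.NONE }) {
                  File out = File.createTempFile("artifacts", ".tar");
                  long start = System.nanoTime();
                  OutputStream os = mode.compress(Files.newOutputStream(out.toPath()));
                  try {
                      // The same kind of glob-driven tar step copyRecursiveTo performs.
                      workspace.tar(os, "**/*");
                  } finally {
                      os.close();
                  }
                  long seconds = (System.nanoTime() - start) / 1_000_000_000L;
                  System.out.printf("%s: %d s, %d bytes%n", mode, seconds, out.length());
                  out.delete();
              }
          }
      }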

      I don't suggest simply changing the TarCompression to NONE, but rather making it configurable. It should be configurable because compression may still be desirable for slaves connected over the internet, running remotely, in the cloud, etc.
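
      A minimal sketch of one way this could be made configurable, assuming a system property selects the mode and GZIP remains the default; the property name used here is hypothetical, not an existing Jenkins option.

      import hudson.FilePath.TarCompression;

      final class ArtifactCompression {
          /** Picks the tar compression from a (hypothetical) system property,
              falling back to today's behaviour (GZIP) for slaves on slow links. */
          static TarCompression fromSystemProperty() {
              String value = System.getProperty("hudson.FilePath.artifactCompression", "GZIP");
              try {
                  return TarCompression.valueOf(value.toUpperCase());
              } catch (IllegalArgumentException e) {
                  return TarCompression.GZIP;   // unknown value: keep the current default
              }
          }
      }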

      As a note, if I tar up the same data with compression, pipe it through ssh, and untar it on the destination, it takes 19 minutes. Doing the same thing without compression takes 5 minutes.
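
      The same effect can be reproduced with the JDK alone, since already-compressed data (isos, tarballs) gains almost nothing from gzip while still paying the CPU cost. A small sketch with a placeholder input path:

      import java.io.InputStream;
      import java.io.OutputStream;
      import java.nio.file.Files;
      import java.nio.file.Path;
      import java.nio.file.Paths;
      import java.util.zip.GZIPOutputStream;

      public class GzipOverheadDemo {
          public static void main(String[] args) throws Exception {
              Path input = Paths.get(args[0]);                  // e.g. a large .iso or .tar.gz
              Path plainCopy = Files.createTempFile("copy", ".bin");
              Path gzipCopy = Files.createTempFile("copy", ".gz");

              long plainMs = timeCopy(Files.newInputStream(input),
                                      Files.newOutputStream(plainCopy));
              long gzipMs = timeCopy(Files.newInputStream(input),
                                     new GZIPOutputStream(Files.newOutputStream(gzipCopy)));

              System.out.printf("plain: %d ms (%d bytes), gzip: %d ms (%d bytes)%n",
                                plainMs, Files.size(plainCopy), gzipMs, Files.size(gzipCopy));
          }

          /** Copies in to out with a 64 KiB buffer and returns the elapsed milliseconds. */
          static long timeCopy(InputStream in, OutputStream out) throws Exception {
              long start = System.nanoTime();
              byte[] buf = new byte[64 * 1024];
              int n;
              while ((n = in.read(buf)) != -1) {
                  out.write(buf, 0, n);
              }
              out.close();   // closing finishes the gzip trailer when out is a GZIPOutputStream
              in.close();
              return (System.nanoTime() - start) / 1_000_000L;
          }
      }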

            Assignee: Unassigned
            Reporter: carlg (Carl George)
            Votes: 5
            Watchers: 6
