Jenkins / JENKINS-40651

Kubernetes plugin 0.10 failing to start pipeline with message failed to mkdirs


      Description

      After upgrading the kubernetes plugin from 0.9 to 0.10, starting a pipeline fails while performing the checkout scm command with the error below.

      java.io.IOException: Failed to mkdirs: /home/jenkins/workspace/vendasta_CS_jenkins-build-TMVFST4Q5OYBVUZUBWZJEIQMXWFW7XCHYPMZOWVTMRKFHF6GRX3A
      	at hudson.FilePath.mkdirs(FilePath.java:1191)
      	at hudson.plugins.git.GitSCM.createClient(GitSCM.java:736)
      	at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1088)
      	at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:109)
      	at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:83)
      	at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:73)
      	at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1$1.call(AbstractSynchronousNonBlockingStepExecution.java:47)
      	at hudson.security.ACL.impersonate(ACL.java:221)
      	at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1.run(AbstractSynchronousNonBlockingStepExecution.java:44)
      	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
      	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      	at java.lang.Thread.run(Thread.java:745)
      Finished: FAILURE
      


            Activity

            csanchez Carlos Sanchez added a comment -

            What JNLP slave are you using? Are you mounting volumes in there? It seems it doesn't have permissions.
            jredl Jesse Redl added a comment -

            Thanks for following up.

            We're using the https://hub.docker.com/r/jenkinsci/jnlp-slave/ image for our podTemplate. Here is a capture of the volumes we are mounting. All secrets exist and work correctly on version 0.9.
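            (The attached capture of the volume configuration is not reproduced here. As a rough sketch only — the pod label, secret name, and mount path below are placeholders, not taken from the report — a podTemplate mounting secrets with this image would typically look like:)

            ```groovy
            podTemplate(label: 'build-pod',
                containers: [
                    containerTemplate(
                        name: 'jnlp',
                        image: 'jenkinsci/jnlp-slave',
                        args: '${computer.jnlpmac} ${computer.name}')
                ],
                volumes: [
                    // secretVolume mounts a Kubernetes secret into the agent container
                    secretVolume(secretName: 'my-secret', mountPath: '/etc/secrets/my-secret')
                ]) {
                node('build-pod') {
                    checkout scm
                }
            }
            ```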
            jredl Jesse Redl added a comment -

            So, after getting back from the holidays I figured I would take another run at this. Upgraded the plugin from 0.9 to 0.10 and everything works fine.

            No idea what was different this time vs. the five times I tried to install it last week. Closing the issue.
            dlozano David Lozano added a comment -

            I had the same problem. It's difficult to debug because it seems random (it depends on which container picks up the build task).

            After checking the scheduled pod and the sources, I realized that if you define a custom JNLP slave container for the pod it must be named 'jnlp', or an additional container is added with the default docker image (jenkinsci/jnlp-slave:alpine).
            I overrode the jnlp slave container with another image, but it was named 'default' :/
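
            (Illustrating the point above — a sketch using the plugin's scripted-pipeline podTemplate DSL, with a placeholder label and image: the custom agent container must be declared with name 'jnlp', otherwise the plugin adds the default agent container alongside it.)

            ```groovy
            podTemplate(label: 'my-pod', containers: [
                // Must be named 'jnlp' -- any other name (e.g. 'default') causes the
                // plugin to also add the default jenkinsci/jnlp-slave:alpine container.
                containerTemplate(
                    name: 'jnlp',
                    image: 'my-registry/my-jnlp-slave:latest',
                    args: '${computer.jnlpmac} ${computer.name}')
            ]) {
                node('my-pod') {
                    checkout scm
                }
            }
            ```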

            dlozano David Lozano added a comment -

            I am not sure if it would be better to add a checkbox to each container in the configuration to mark it as the JNLP worker of the pod.

              People

              • Assignee:
                csanchez Carlos Sanchez
              • Reporter:
                jredl Jesse Redl
              • Votes:
                0
              • Watchers:
                4