JENKINS-60264

Running a multibranch pipeline job results in missing workspace error


      Description

      I build my own Jenkins image and check its sanity by starting it in a Docker container and trying to log in to it. I achieve this with the following Jenkinsfile:

          stages {
              stage('Build Jenkins Master Image') {
                  steps {
                      sh(
                              script: """
                                  cd Jenkins-Master
                                  docker pull jenkins:latest
                                  docker build --rm -t ${IMAGE_TAG} .
                              """
                      )
                  }
              }
              stage('Image sanity check') {
                  steps {
                      withCredentials([string(credentialsId: 'CASC_VAULT_TOKEN', variable: 'CASC_VAULT_TOKEN'),
                                       usernamePassword(credentialsId: 'Forge_service_account', passwordVariable: 'JENKINS_PASSWORD', usernameVariable: 'JENKINS_LOGIN')]) {
                          sh(
                                  script: """
                                      docker run -e CASC_VAULT_TOKEN=${CASC_VAULT_TOKEN} \
                                                 --name jenkins \
                                                 -d \
                                                 -p 8080:8080 ${IMAGE_TAG}
                                      mvn -Djenkins.test.timeout=${GLOBAL_TEST_TIMEOUT} -B -f Jenkins-Master/pom.xml test
                                      """
                          )
                      }
                  }
              }
          }

      The test is successful, but the build fails with the following log:

      [2019-11-25T10:33:38.333Z] Nov 25, 2019 11:33:37 AM ch.ti8m.forge.jenkins.logintest.LocalhostJenkinsRule before
      [2019-11-25T10:33:38.333Z] INFO: Waiting for Jenkins instance... (response code 503)
      [2019-11-25T10:33:43.628Z] Nov 25, 2019 11:33:42 AM ch.ti8m.forge.jenkins.logintest.LocalhostJenkinsRule before
      [2019-11-25T10:33:43.628Z] INFO: Waiting for Jenkins instance... (response code 503)
      [Pipeline] }
      [Pipeline] // withCredentials
      [Pipeline] }
      [Pipeline] // withEnv
      [Pipeline] }
      [Pipeline] // stage
      [Pipeline] stage
      [Pipeline] { (Push Jenkins Master Image)
      Stage "Push Jenkins Master Image" skipped due to earlier failure(s)
      [Pipeline] }
      [Pipeline] // stage
      [Pipeline] }
      [Pipeline] // withEnv
      [Pipeline] }
      [Pipeline] // ansiColor
      [Pipeline] }
      [Pipeline] // timeout
      [Pipeline] }
      [Pipeline] // timestamps
      [Pipeline] }
      [Pipeline] // withEnv
      [Pipeline] }
      [Pipeline] // withEnv
      [Pipeline] }
      [Pipeline] // node
      [Pipeline] End of Pipeline
      ERROR: missing workspace /data/ci/workspace/orge_ti8m-ci-2.0_main-instance_8 on srvzh-jenkinsnode-tst-005
      Finished: FAILURE
      

      While debugging workflow-durable-task-step, I noticed strange behavior. My breakpoint is set at DurableTaskStep.java#L386; when execution halts there, it means ws.isDirectory() returned false. But during that break, when I evaluate ws.isDirectory() manually in the debugger, it returns true.
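      A minimal probe to cross-check this from the pipeline itself (a hypothetical addition, not part of the original report; WORKSPACE is the variable Pipeline provides):

          sh '''
              # Hypothetical probe: does the workspace directory still exist
              # on the node at this point in the build?
              ls -ld "$WORKSPACE" || echo "workspace directory is gone"
          '''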



      Any ideas what might cause this?


          Activity

          halkeye Gavin Mogan added a comment -

          Repeating from Gitter:

          Your bug essentially reads "I am building my own Docker image using secret steps. The secret tests fail, and my pipeline fails." Which seems right: when tests fail, the mvn exit code is > 0, and the pipeline exits.

          Based on your super-truncated error message/log, I'm pretty sure it's failing on the mvn test. I don't know what Jenkins or your pom file does for mvn test, but it doesn't feel like a pipeline issue to me.
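          A minimal sketch of that behavior (the step below is hypothetical, not from the original Jenkinsfile): the sh step fails the build as soon as the script exits non-zero, unless the status is captured explicitly:

              script {
                  // Hypothetical sketch: capture the exit status instead of failing immediately.
                  def status = sh(returnStatus: true, script: 'mvn -B -f Jenkins-Master/pom.xml test')
                  if (status != 0) {
                      error "mvn test exited with status ${status}"
                  }
              }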

          smasher Daniel Estermann added a comment -

          Thank you for pointing that out! Now I see something else that is also suspicious: Maven doesn't print its usual test report. Normally it outputs the number of tests run and how many failed or were skipped, regardless of whether the tests pass or fail.
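          A sketch of what one would normally expect (summary values are illustrative; the grep check is a hypothetical addition): Surefire ends a run with a summary line, so its absence suggests the JVM died before it could report:

              sh '''
                  # Hypothetical check: a healthy Surefire run ends with a line like
                  #   Tests run: 12, Failures: 0, Errors: 0, Skipped: 0
                  mvn -B -f Jenkins-Master/pom.xml test | tee mvn.log
                  grep -q "Tests run:" mvn.log || echo "no Surefire summary -- mvn was likely killed"
              '''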

          smasher Daniel Estermann added a comment -

          I still cannot resolve this, because I don't understand why the Maven process just quits. If the test failed, Maven should still output the test summary. It looks like the process gets killed for some inexplicable reason...
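          One way to tell a test failure apart from a killed process (a hypothetical diagnostic, not from the thread): a process terminated by SIGKILL conventionally exits with status 137 (128 + 9):

              script {
                  // Hypothetical diagnostic: exit status 137 = 128 + 9 usually means SIGKILL
                  // (e.g. the OOM killer, or the agent process being torn down).
                  def rc = sh(returnStatus: true, script: 'mvn -B -f Jenkins-Master/pom.xml test')
                  if (rc == 137) {
                      echo 'mvn was killed (SIGKILL), not a test failure'
                  }
              }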

          smasher Daniel Estermann added a comment -

          I fixed it... and it makes some sense now. The Jenkins image I started within the test was using the same build-slave configuration as the Jenkins instance itself. It seems that it somehow affected the connections to the build slaves, in particular to the node where the test was running. I could work around it like this:

                                  script: """
                                          mkdir /tmp/casc_configs/ && echo "" > /tmp/casc_configs/nodes.yaml && chown -R 1000:1000 /tmp/casc_configs/
                                          docker run -e CASC_VAULT_TOKEN=${CASC_VAULT_TOKEN} \
                                                     --name jenkins \
                                                     -d \
                                                     -p 8080:8080 \
                                                     -v /tmp/casc_configs/:/var/jenkins_home/casc_configs/ \
                                                     ${IMAGE_TAG}
                                          mvn -Djenkins.test.timeout=${GLOBAL_TEST_TIMEOUT} -B -f Jenkins-Master/pom.xml test
                                          """
          
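          The bind mount shadows the image's casc_configs directory, so the test instance loads the blanked-out nodes.yaml instead of the production node list and never connects to the real build slaves. To verify what the container actually picked up (a hypothetical check, assuming the JCasC path used above):

              sh '''
                  # Hypothetical verification: the container should see the blanked-out nodes.yaml.
                  docker exec jenkins cat /var/jenkins_home/casc_configs/nodes.yaml
              '''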

            People

            • Assignee:
              Unassigned
            • Reporter:
              smasher Daniel Estermann
            • Votes:
              0
            • Watchers:
              3
