Type: Bug
Resolution: Not A Defect
Priority: Major
Environment: Jenkins 2.110 running inside a Kubernetes cluster
I have the following declarative pipeline:

#!/usr/bin/env groovy

@Library('shared-library@1.1.0') _

pipeline {
  agent {
    kubernetes {
      label buildId()
      containerTemplate {
        name 'node'
        image 'node:8.9.4-alpine'
        ttyEnabled true
        command 'cat'
      }
    }
  }
  stages {
    stage('Build') {
      steps {
        sh "npm run build"
        // stash build directory
        stash includes: 'build/**', name: 'app'
      }
    }
    stage('Unit Test') {
      steps {
        sh "npm run test"
      }
    }
    stage('Package') {
      agent {
        node {
          label 'docker1'
        }
      }
      options {
        skipDefaultCheckout()
      }
      steps {
        sh "/bin/sleep 120"
      }
    }
  }
}
My use case stems from the fact that the declarative syntax of the Kubernetes plugin doesn't (as far as I know) let you specify multiple containers or volumes.
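For what it's worth, later versions of the Kubernetes plugin do let a declarative agent take a full pod spec via the `yaml` option, which covers both multiple containers and volumes. A minimal sketch, assuming a plugin version with `yaml` support; the second container, image tags, and the socket mount are illustrative assumptions, not part of my actual setup:

```groovy
// Sketch only: assumes Kubernetes plugin support for the declarative
// `yaml` agent option; container names and images are illustrative.
pipeline {
  agent {
    kubernetes {
      yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: node
    image: node:8.9.4-alpine
    command: ['cat']
    tty: true
  - name: docker                    # hypothetical second container
    image: docker:18.06
    command: ['cat']
    tty: true
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock   # mount the host Docker socket
'''
    }
  }
  stages {
    stage('Build') {
      steps {
        // run the step in a specific container of the pod
        container('node') {
          sh 'npm run build'
        }
      }
    }
  }
}
```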
The docker1 node is simply a JNLP pod running in my Kubernetes cluster that has the Docker socket mounted into it, along with the Docker client.
During execution, the Package stage connects to the docker1 node successfully but fails with the following:
[_location-service_PR-2-head-RP4JKIYE4QYJREC2YOQQGNKQG5UC56YLZET3VDC2IPW4C5FSH7EQ] Running shell script
sh: can't create /home/jenkins/workspace/_location-service_PR-2-head-RP4JKIYE4QYJREC2YOQQGNKQG5UC56YLZET3VDC2IPW4C5FSH7EQ@tmp/durable-6b3ecec4/jenkins-log.txt: nonexistent directory
sh: can't create /home/jenkins/workspace/_location-service_PR-2-head-RP4JKIYE4QYJREC2YOQQGNKQG5UC56YLZET3VDC2IPW4C5FSH7EQ@tmp/durable-6b3ecec4/jenkins-result.txt.tmp: nonexistent directory
mv: can't rename '/home/jenkins/workspace/_location-service_PR-2-head-RP4JKIYE4QYJREC2YOQQGNKQG5UC56YLZET3VDC2IPW4C5FSH7EQ@tmp/durable-6b3ecec4/jenkins-result.txt.tmp': No such file or directory
EXITCODE 0process apparently never started in /home/jenkins/workspace/_location-service_PR-2-head-RP4JKIYE4QYJREC2YOQQGNKQG5UC56YLZET3VDC2IPW4C5FSH7EQ@tmp/durable-6b3ecec4
The workspace directory location-service_PR-2-head-.....@tmp exists, but the durable-* subdirectory does not.
If I set the global agent to none and use per-stage agents instead, I don't get this error. The problem with that approach is that after the Build stage I would have to stash the whole workspace (including node_modules), which can get very large, so I would prefer not to go that route.
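If the per-stage-agents route turns out to be the only option, one way to keep the stash small would be to exclude the dependency tree and reinstall it on the other node. A sketch, assuming the project can rebuild node_modules from package.json; the stash name and stage bodies are illustrative:

```groovy
// Sketch: stash everything except node_modules, then restore it and
// reinstall dependencies on the per-stage agent. Names are illustrative.
stage('Build') {
  steps {
    sh 'npm run build'
    stash name: 'ws', includes: '**', excludes: 'node_modules/**'
  }
}
stage('Package') {
  agent { node { label 'docker1' } }
  options { skipDefaultCheckout() }
  steps {
    unstash 'ws'
    sh 'npm install'   // rebuild node_modules locally instead of stashing it
  }
}
```

The trade-off is an extra `npm install` on the second node versus transferring a potentially huge stash between agents.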
I don't believe this is caused by the Package stage's node actually being a container; I verified that by running the pipeline above with per-stage agents and the global agent set to none.
Any help would be appreciated.
Duplicates: JENKINS-46713 "script returned exit code -2 when trying to switch to another Jenkins node inside the container block" (Resolved)