The job is run on a physical Linux slave/node machine with:
- Username: jenkins-slave-1.
- User home: /home/jenkins-slave-1.
- Remote slave root: /home/jenkins-slave-1/slave-root.
- Docker available for the jenkins-slave-1 user.
I am trying to make Jenkins jobs that are built inside Docker containers (via the Custom Build Environment plugin) preserve the Gradle cache (the downloaded artifacts and the wrapper). For this, I am adding a custom volume mapping $HOME/.gradle -> $WORKSPACE/.gradle, similar to what the plugin help suggests. This is the job log (env vars skipped for brevity):
There seem to be a few issues:
- The additional volume is specified twice: once with the env vars as used in the configuration, and once expanded (as seen in the third 'docker run' statement).
- The additional volume is apparently not cleaned up. This can be checked by running 'docker volume ls -f dangling=true' before and after the job runs: each new run creates a new dangling volume. I'm not sure whether this is related to the first issue, the duplicated --volume specification.
- If /home/jenkins-slave-1/.gradle (the directory mounted from the host) doesn't exist, it is created, but owned by root. The reason is that the initial 'docker run' creating the container is not run as the jenkins-slave-1 user, but as whatever user the image sets (I'm using openjdk:8u102); only subsequent calls specify '--user 1003:1003' (1003 being the user id / group id of the jenkins-slave-1 user on the node). This prevents the job from running: the task launched by a later 'docker exec --user 1003:1003' cannot write to that directory, so the Gradle invocation fails.
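As a possible workaround for the ownership issue, I am considering pre-creating the cache directory on the host before the container starts. A minimal sketch, assuming a pre-build shell step that runs on the node as the jenkins-slave-1 user:

```shell
#!/bin/sh
# Workaround sketch: pre-create the Gradle cache directory on the host
# so that the initial 'docker run' does not create it as root.
# Assumption: this runs on the node as jenkins-slave-1, whose $HOME is
# /home/jenkins-slave-1.
mkdir -p "$HOME/.gradle"

# The directory now exists and is owned by the current user, so the
# processes started later with '--user 1003:1003' can write to it.
ls -ld "$HOME/.gradle"
```

This doesn't fix the root cause (the un-'--user'ed initial 'docker run'), it only avoids the root-owned directory.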
What I would expect instead:
- The custom volume directory is writable by the user the build runs as.
- After the build, custom volumes are cleaned up.
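Until the plugin cleans up after itself, the dangling volumes can be removed manually between builds. A sketch of such a cleanup step; note this is a blunt instrument that removes ALL dangling volumes on the node, not just the ones created by this job, and it silently does nothing if docker is not available:

```shell
#!/bin/sh
# cleanup_dangling_volumes: remove every dangling (unreferenced)
# Docker volume on this node. Deliberately a no-op when docker is
# not installed, so it is safe to run anywhere.
cleanup_dangling_volumes() {
  if command -v docker >/dev/null 2>&1; then
    # -q prints only volume names; xargs -r skips the rm when the
    # list is empty (GNU xargs, fine on a Linux node).
    docker volume ls -qf dangling=true | xargs -r docker volume rm
  else
    echo "docker not available, nothing to do"
  fi
}

cleanup_dangling_volumes
```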