Currently, because Jenkins runs Docker containers with `-u <userid of Jenkins>:<group id of Jenkins>`, none of the stock images I am using can be used without building a custom image whose /etc/passwd and /etc/group contain a valid entry for the host Jenkins userid, and which provides a home directory.
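For reference, the invocation that triggers this looks roughly as follows (a sketch inferred from the observed behaviour, not the exact Jenkins code path). The command is built and printed rather than executed, so the sketch doesn't need a Docker host:

```shell
# Sketch of how Jenkins appears to launch the build container: the host
# Jenkins UID/GID are forced onto the container with -u.
cmd="docker run --rm -u $(id -u):$(id -g) python:2.7 whoami"
echo "$cmd"
# Inside the container this fails with
# "whoami: cannot find name for user ID <uid>", because the host UID has
# no /etc/passwd entry in the image, and there is no home directory either.
```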
Various tools require that the UID they are run under resolves to a valid user and has the necessary permissions on various locations.
For example, the python:2.7 image runs as root by default; running it as a non-root user means we can't use pip or other tools to install directly into the provided Python's site-packages, forcing us to first create a virtual environment, which adds steps and work that shouldn't be necessary.
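Concretely, the extra steps forced on a non-root user inside python:2.7 look like this (a sketch; `requests` is just an example package). The sequence is collected into a variable and printed, since the commands only make sense inside the container:

```shell
# What a build has to do today inside python:2.7 when started as a
# non-root user via -u:
steps='pip install requests      # fails: no write access to the global site-packages
virtualenv /tmp/venv             # extra step: create a throwaway virtualenv
. /tmp/venv/bin/activate
pip install requests             # succeeds, into the virtualenv instead'
printf '%s\n' "$steps"
```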
I can't find the ticket, but the decision was made to have Jenkins perform SCM checkouts and similar operations on the host, so that the Docker container doesn't need those tools installed, and then pass the directory in as a volume mount. This, however, requires cooperation from the Docker container on userid/groupid so that files written inside the container can be modified by the host Jenkins.
This leads to other issues, though: files can be left behind that are owned by a user the host can't manipulate (specifically, when the container runs as root).
The other method, which I would hope works better and doesn't require the end user's Docker container to contain the Jenkins-specific SCM and other tools, is to create a Docker volume, run a Jenkins-specific SCM sidecar container that performs the checkout into that volume, and then run the user's container without passing the `-u` flag at all, using whatever default user the image defines.
At the end of a test run the volume can then be removed without worrying about file permissions, giving a clean build next time. Keeping the build directory (i.e. not cleaning) can be supported by keeping the volume around, though this is less useful to me personally: we spin up executors on demand through AWS, tear them down when they are no longer useful, and start from a fresh workspace anyway.
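The proposed per-build flow could be sketched as the following command sequence (the image name `jenkins/scm-sidecar` and the volume naming are hypothetical, as is the `make test` build step). The sequence is built and printed rather than executed, so it is readable without a Docker host:

```shell
# One volume per build; BUILD_NUMBER is set by Jenkins, 1 is a fallback here.
VOL="build-${BUILD_NUMBER:-1}-ws"

# Note: no -u flag on the user's container, so it runs as whatever user
# the image defines, and the volume is simply discarded afterwards.
flow="docker volume create ${VOL}
docker run --rm -v ${VOL}:/workspace jenkins/scm-sidecar checkout
docker run --rm -v ${VOL}:/workspace -w /workspace python:2.7 make test
docker volume rm ${VOL}"
printf '%s\n' "$flow"
```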