I have the ssh-agent-plugin configured for one of my Jenkins jobs.
The SSH connection uses a jump server to get to a target machine.
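For reference, the two-hop connection described here could be expressed in ~/.ssh/config with ProxyJump; the host names below are placeholders, not taken from the actual setup:

```
Host target
    HostName target.example.com        # hypothetical target host
    User dev                           # user selected via the build parameter
    IdentityFile seckeys/dev
    ProxyJump deploy@jump.example.com  # hypothetical jump server
```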
A) The key for the "deploy" user on the jump server has been added to the Jenkins configuration and is added to the SSH Agent as part of the job.
B) The user on the target server is selected from a drop-down box in the parameterized job form, which is displayed when clicking on the link "Build with Parameters".
For this to work, a Bash shell script is started which adds the private key for the selected environment to the SSH agent by running ssh-add seckeys/dev. The connection to the SSH agent is established via the Unix socket defined by SSH_AUTH_SOCK=/tmp/jenkins4211455000048058133.jnr.
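The relevant part of the shell step can be sketched as follows; TARGET_ENV stands in for the build parameter chosen under "Build with Parameters" (the parameter name is an assumption, not taken from the actual job):

```shell
#!/bin/bash
# Hypothetical sketch of the job's shell step.
TARGET_ENV="${TARGET_ENV:-dev}"       # value of the drop-down parameter (name assumed)
KEY_FILE="seckeys/${TARGET_ENV}"      # environment-specific private key in the repo

# SSH_AUTH_SOCK is exported by the ssh-agent-plugin before this step runs.
if [ -S "${SSH_AUTH_SOCK:-}" ] && [ -f "$KEY_FILE" ]; then
    ssh-add "$KEY_FILE"               # add the key to the plugin's agent
    ssh-add -l                        # list fingerprints to verify it was accepted
fi
```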
I run ssh-add -l afterwards to validate that the key has been added:
The first key is the deploy user's (A) key on the jump server, which is the same for all targets.
The second one is the private key for the environment specific user on the target server (B).
As you can see, the key added by ssh-add from the shell script seems to be broken: the key length, i.e. the first value, is only 17 bits. The consequence is that the server rejects the key and no connection is possible.
When I add both keys to the standard Linux ssh-agent, the key dump looks like this:
The workarounds are:
- Add all keys to Jenkins. The disadvantage is that the list of environments/keys cannot be extended by the maintainers of the deployment project in our Git repository; a Jenkins configuration change is always necessary (we have a restricted environment where users cannot access the configuration page of a job).
- Use the default Linux ssh-agent as part of the shell script and do not use the ssh-agent-plugin at all.
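The second workaround could be sketched like this (key paths are assumptions); the shell step starts a throwaway agent itself instead of relying on the one provided by the plugin:

```shell
#!/bin/bash
# Hypothetical sketch of the second workaround: a private ssh-agent per job step.
if command -v ssh-agent >/dev/null 2>&1; then
    eval "$(ssh-agent -s)" >/dev/null     # exports SSH_AUTH_SOCK and SSH_AGENT_PID
    trap 'ssh-agent -k >/dev/null' EXIT   # tear the agent down when the step ends

    for key in seckeys/jump seckeys/dev; do   # jump-server and target keys (paths assumed)
        if [ -f "$key" ]; then
            ssh-add "$key"
        fi
    done
    ssh-add -l || true                    # verify fingerprints; may report no identities in a dry run
fi
```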