JENKINS-57704

Run JNLP container as sidecar to default build container

      Description

      At the time of writing, the container that makes the JNLP connection to the Jenkins master and the default build container are one and the same. To customize my build container, I can currently pick from several non-ideal solutions:

      (1) Overwrite the container named `jnlp` with a custom container. This is bad because it requires me to put Java and the JNLP client into that image, e.g. when my builds need a different Java version than the one Jenkins requires. And it makes the images unnecessarily big.

      (2) Surround my build steps with a `container` block. This is bad because it couples the pipeline code to the container structure.

      I'd prefer a third solution, where the pipeline doesn't need to know anything about whether it runs in a pod:

      • The JNLP container should run nothing but the JNLP slave. That way the underlying image can be made as small as possible; it could even be a distroless image containing nothing but the JNLP slave.
      • There should be a default build container that runs as a sidecar. Its name should be something like `build`, or configurable via "Configure Jenkins". The pipeline code itself should not know about the container hierarchy, and the build container doesn't need to contain the JNLP slave.
      • Thus, an agent pod would contain at least two containers, `jnlp` and `build`; a sketch of such a pod follows below.
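
      For illustration, here is a rough sketch of what such a two-container pod could look like when declared explicitly with today's scripted `podTemplate` syntax; the image names are placeholders. The request is that this jnlp/build split becomes the default, so that neither the `podTemplate` nor the `container` block has to appear in the pipeline code:

      // Sketch only: an explicit two-container pod with the current scripted syntax.
      // 'jenkins/jnlp-slave:3.29-1' and 'centos:7' are placeholder images.
      podTemplate(label: 'centos', containers: [
          // Minimal agent container that runs nothing but the JNLP client.
          containerTemplate(name: 'jnlp', image: 'jenkins/jnlp-slave:3.29-1',
                            args: '${computer.jnlpmac} ${computer.name}'),
          // Default build container running as a sidecar, kept alive with 'cat'.
          containerTemplate(name: 'build', image: 'centos:7',
                            command: 'cat', ttyEnabled: true)
      ]) {
          node('centos') {
              // Today the build steps still have to select the container explicitly,
              // which is exactly the coupling this issue asks to remove.
              container('build') {
                  sh 'ls -la /opt'
              }
          }
      }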

            Activity

            csanchez Carlos Sanchez added a comment -

            you can do it in declarative with defaultContainer
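
            For reference, a minimal declarative sketch of that approach, assuming a plugin version that supports the `defaultContainer` and `containerTemplate` directives; the `maven` container name and image below are placeholders:

            pipeline {
              agent {
                kubernetes {
                  label 'maven-pod'
                  defaultContainer 'maven'
                  containerTemplate {
                    name 'maven'
                    image 'maven:3.6.1-jdk-11'
                    command 'cat'
                    ttyEnabled true
                  }
                }
              }
              stages {
                stage('Build') {
                  steps {
                    // Runs in the 'maven' container instead of the jnlp container.
                    sh 'mvn -version'
                  }
                }
              }
            }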

            hendrikhalkow Hendrik Halkow added a comment - edited

            you can do it in declarative with defaultContainer

            That's exactly the point: I think this shouldn't be in the pipeline code. The pipeline should be agnostic to the technology that runs the build. Besides that, it requires a `kubernetes` block inside the agent block, which prevents me from using a plain `label` block. And the labels inside the `kubernetes` block work differently from regular Jenkins labels.

            Edit: I just tried it with defaultContainer and it doesn't even work. This is my pipeline code:

            pipeline {
              agent none
              stages {
                stage('Test') {
                  agent {
                    kubernetes {
                      defaultContainer 'build'
                      label 'centos'
                    }
                  }
                  steps {
                    sh """
                      set -x
                      ls -la /opt
                      sleep 1000
                    """
                  }
                }
              }
            }
            

            And this is the result:

            21:05:59  [Pipeline] withEnv
            21:05:59  [Pipeline] {
            21:05:59  [Pipeline] container
            21:05:59  [Pipeline] {
            21:05:59  [Pipeline] sh
            21:05:59  /bin/sh: line 1: cd: /code/workspace/xxxxx_hello-pipeline_feat_k8s: No such file or directory
            21:05:59  sh: /code/workspace/xxxxx_hello-pipeline_feat_k8s@tmp/durable-2e2ff7d6/jenkins-log.txt: No such file or directory
            21:05:59  sh: /code/workspace/xxxxx_hello-pipeline_feat_k8s@tmp/durable-2e2ff7d6/jenkins-result.txt.tmp: No such file or directory
            21:05:59  mv: cannot stat ‘/code/workspace/xxxxx_hello-pipeline_feat_k8s@tmp/durable-2e2ff7d6/jenkins-result.txt.tmp’: No such file or directory
            21:05:58   > git rev-list --no-walk ef30fb54796f5bcad9f6f968f280a608eb46f1b7 # timeout=10
            21:11:07  process apparently never started in /code/workspace/xxxxx_hello-pipeline_feat_k8s@tmp/durable-2e2ff7d6
            21:11:07  [Pipeline] }
            21:11:07  [Pipeline] // container
            21:11:07  [Pipeline] }
            21:11:07  [Pipeline] // withEnv
            21:11:07  [Pipeline] }
            21:11:08  [Pipeline] // node
            21:11:08  [Pipeline] }
            21:11:08  [Pipeline] // podTemplate
            21:11:08  [Pipeline] }
            21:11:08  [Pipeline] // stage
            21:11:08  [Pipeline] End of Pipeline
            21:11:09  
            21:11:09  GitHub has been notified of this commit’s build result
            21:11:09  
            21:11:09  ERROR: script returned exit code -2
            21:11:09  Finished: FAILURE
            

            Edit 2: This is how I wish the agent block would look:

            agent {
              // Pod template with that label and default container
              // are configured inside Jenkins.
              // Nothing about Kubernetes in the pipeline here.
              // The build can even run on a VM.
              label 'centos'
            }
            
            jglick Jesse Glick added a comment -

            This issue seems to be specifically about the Declarative syntax, rather than how the plugin’s underlying steps work (and thus what the Scripted syntax would look like).

            The kubernetes block is required inside agent to activate this plugin. Different agent launching technologies work fundamentally differently and there is no use obscuring which has been selected.

            Given JENKINS-57830 (the use of Jenkins labels is an implementation detail that most Jenkinsfile authors should not need to pay attention to) and some related changes, we could probably boil down the syntax to something like

             agent {
               kubernetes {
                 containerTemplate {
                   image 'maven:3.6.1-jdk-11'
                   defaultContainer true
                 }
               }
             }
            

            without loss of backward compatibility and with minimal redundancy in syntactic options.
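
            Applied to a full Jenkinsfile, such a proposal might read as follows; this is only a sketch of the syntax suggested above, and `defaultContainer true` inside `containerTemplate` does not exist yet:

            pipeline {
              agent {
                kubernetes {
                  containerTemplate {
                    image 'maven:3.6.1-jdk-11'
                    defaultContainer true   // proposed flag, not implemented yet
                  }
                }
              }
              stages {
                stage('Build') {
                  steps {
                    // Steps would run directly in the default build container,
                    // with no explicit container() wrapper in the pipeline code.
                    sh 'mvn -B verify'
                  }
                }
              }
            }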


              People

              • Assignee:
                Unassigned
                Reporter:
                hendrikhalkow Hendrik Halkow
              • Votes:
                1
                Watchers:
                4
