Jenkins / JENKINS-50525

Error When Provisioning Slave: mountPath: Required value


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Component/s: kubernetes-plugin
    • Labels:
      None

      Description

      I have a Jenkins instance, deployed via Helm on Friday, that was spinning up pods just fine; I came back Monday to a system that couldn't start any. Configuration below:

      The logs look like lots of this:

       

      {code}
      Apr 02, 2018 10:52:26 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
      INFO: Terminating Kubernetes instance for agent standard-1-r0fd4
      Apr 02, 2018 10:52:26 PM okhttp3.internal.platform.Platform log
      INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
      Apr 02, 2018 10:52:26 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch
      WARNING: Error in provisioning; agent=KubernetesSlave name: standard-1-rxq2t, template=PodTemplate{inheritFrom='', name='standard-1', namespace='', instanceCap=20, idleMinutes=30, label='standard-1 worker', nodeSelector='', nodeUsageMode=NORMAL, customWorkspaceVolumeEnabled=true, workspaceVolume=org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume@565b8276, volumes=[org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume@20a0b5ee, org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume@4bc6165a, org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume@41046566, org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume@372e4c46, org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume@12049088, org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume@690b3054], containers=[ContainerTemplate{name='jnlp', image='gcr.io/myproj/cyrusbio-jnlp-minion:master', alwaysPullImage=true, workingDir='/home/jenkins', command='', args='', ttyEnabled=true, resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory='', envVars=[KeyValueEnvVar [getValue()=tcp://127.0.0.1:2375, getKey()=DOCKER_HOST], KeyValueEnvVar [getValue()=/gcloud/credentials.json, getKey()=GOOGLE_APPLICATION_CREDENTIALS], KeyValueEnvVar [getValue()=/gcloud/credentials.json, getKey()=CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE]], livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@3c38cba6}, ContainerTemplate{name='docker-engine', image='docker:dind', privileged=true, workingDir='', command='', args='', ttyEnabled=true, resourceRequestCpu='1', resourceRequestMemory='2Gi', resourceLimitCpu='2100m', resourceLimitMemory='3Gi', livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@b54246a}]}
      io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://kubernetes.default/api/v1/namespaces/default/pods. Message: Pod "standard-1-rxq2t" is invalid: spec.containers[0].volumeMounts[6].mountPath: Required value. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.containers[0].volumeMounts[6].mountPath, message=Required value, reason=FieldValueRequired, additionalProperties={})], group=null, kind=Pod, name=standard-1-rxq2t, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Pod "standard-1-rxq2t" is invalid: spec.containers[0].volumeMounts[6].mountPath: Required value, metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
          at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:472)
          at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:411)
          at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
          at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
          at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)
          at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:756)
          at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:334)
          at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:105)
          at hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:288)
          at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
          at java.util.concurrent.FutureTask.run(FutureTask.java:266)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
          at java.lang.Thread.run(Thread.java:748)
      Apr 02, 2018 10:52:26 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
      INFO: Terminating Kubernetes instance for agent standard-1-rxq2t
      Apr 02, 2018 10:52:26 PM okhttp3.internal.platform.Platform log
      INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?
      Apr 02, 2018 10:52:26 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch
      WARNING: Error in provisioning; agent=KubernetesSlave name: standard-1-2btdk, template=PodTemplate{inheritFrom='', name='standard-1', namespace='', instanceCap=20, idleMinutes=30, label='standard-1 worker', nodeSelector='', nodeUsageMode=NORMAL, customWorkspaceVolumeEnabled=true, workspaceVolume=org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume@565b8276, volumes=[org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume@20a0b5ee, org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume@4bc6165a, org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume@41046566, org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume@372e4c46, org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume@12049088, org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume@690b3054], containers=[ContainerTemplate{name='jnlp', image='gcr.io/myproj/cyrusbio-jnlp-minion:master', alwaysPullImage=true, workingDir='/home/jenkins', command='', args='', ttyEnabled=true, resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory='', envVars=[KeyValueEnvVar [getValue()=tcp://127.0.0.1:2375, getKey()=DOCKER_HOST], KeyValueEnvVar [getValue()=/gcloud/credentials.json, getKey()=GOOGLE_APPLICATION_CREDENTIALS], KeyValueEnvVar [getValue()=/gcloud/credentials.json, getKey()=CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE]], livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@3c38cba6}, ContainerTemplate{name='docker-engine', image='docker:dind', privileged=true, workingDir='', command='', args='', ttyEnabled=true, resourceRequestCpu='1', resourceRequestMemory='2Gi', resourceLimitCpu='2100m', resourceLimitMemory='3Gi', livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@b54246a}]}
      io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://kubernetes.default/api/v1/namespaces/default/pods. Message: Pod "standard-1-2btdk" is invalid: spec.containers[0].volumeMounts[6].mountPath: Required value. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.containers[0].volumeMounts[6].mountPath, message=Required value, reason=FieldValueRequired, additionalProperties={})], group=null, kind=Pod, name=standard-1-2btdk, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Pod "standard-1-2btdk" is invalid: spec.containers[0].volumeMounts[6].mountPath: Required value, metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
          at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:472)
          at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:411)
          at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
          at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
          at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)
          at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:756)
          at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:334)
          at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:105)
          at hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:288)
          at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
          at java.util.concurrent.FutureTask.run(FutureTask.java:266)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
          at java.lang.Thread.run(Thread.java:748)
      Apr 02, 2018 10:52:26 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
      {code}
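
      The 422 above comes from the API server's field validation: every volumeMount must carry a mountPath (reason=FieldValueRequired). The check can be sketched without a cluster; the pod spec below is a hypothetical, trimmed-down dict of what the plugin would POST for this template, with the volume names invented for illustration:

      ```python
      # Sketch of the server-side check that yields
      # "spec.containers[0].volumeMounts[6].mountPath: Required value".
      # Six explicit volumes carry a mountPath; the workspace volume
      # (the seventh mount, index 6) arrives without one.

      def missing_mount_paths(pod_spec):
          """Return Kubernetes-style field paths for volumeMounts missing a mountPath."""
          errors = []
          for c, container in enumerate(pod_spec["spec"]["containers"]):
              for m, mount in enumerate(container.get("volumeMounts", [])):
                  if not mount.get("mountPath"):
                      errors.append(
                          f"spec.containers[{c}].volumeMounts[{m}].mountPath: Required value"
                      )
          return errors

      pod_spec = {
          "spec": {
              "containers": [{
                  "name": "jnlp",
                  "volumeMounts": [
                      {"name": "volume-0", "mountPath": "/home/jenkins/XXXX"},
                      {"name": "volume-1", "mountPath": "/docker-cfg"},
                      {"name": "volume-2", "mountPath": "/gcloud"},
                      {"name": "volume-3", "mountPath": "/home/jenkins/workspace"},
                      {"name": "volume-4", "mountPath": "/tmp"},
                      {"name": "volume-5", "mountPath": "/var/lib/docker"},
                      {"name": "workspace-volume"},  # no mountPath -> index 6 rejected
                  ],
              }],
          },
      }

      print(missing_mount_paths(pod_spec))
      # -> ['spec.containers[0].volumeMounts[6].mountPath: Required value']
      ```

      This matches the failing field path in the log, which is why every retry fails the same way before the agent is even scheduled.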
      

       

      Here's the pod template from my config.xml:

       

      {code}
      <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
        <inheritFrom></inheritFrom>
        <name>standard-1</name>
        <namespace></namespace>
        <privileged>false</privileged>
        <alwaysPullImage>false</alwaysPullImage>
        <instanceCap>20</instanceCap>
        <slaveConnectTimeout>100</slaveConnectTimeout>
        <idleMinutes>30</idleMinutes>
        <activeDeadlineSeconds>0</activeDeadlineSeconds>
        <label>standard-1 worker</label>
        <nodeSelector></nodeSelector>
        <nodeUsageMode>NORMAL</nodeUsageMode>
        <customWorkspaceVolumeEnabled>true</customWorkspaceVolumeEnabled>
        <workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
          <memory>false</memory>
        </workspaceVolume>
        <volumes>
          <org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
            <mountPath>/home/jenkins/XXXX</mountPath>
            <secretName>XXXXXXXXX</secretName>
          </org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
          <org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
            <mountPath>/docker-cfg</mountPath>
            <secretName>XXXXXXXXX</secretName>
          </org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
          <org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
            <mountPath>/gcloud</mountPath>
            <secretName>XXXXXXXXX</secretName>
          </org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
          <org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume>
            <mountPath>/home/jenkins/workspace</mountPath>
            <memory>false</memory>
          </org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume>
          <org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume>
            <mountPath>/tmp</mountPath>
            <memory>false</memory>
          </org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume>
          <org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume>
            <mountPath>/var/lib/docker</mountPath>
            <memory>false</memory>
          </org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume>
        </volumes>
        <containers>
          <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
            <name>jnlp</name>
            <image>gcr.io/myproj/cyrusbio-jnlp-minion:master</image>
            <privileged>false</privileged>
            <alwaysPullImage>true</alwaysPullImage>
            <workingDir>/home/jenkins</workingDir>
            <command></command>
            <args></args>
            <ttyEnabled>true</ttyEnabled>
            <resourceRequestCpu></resourceRequestCpu>
            <resourceRequestMemory></resourceRequestMemory>
            <resourceLimitCpu></resourceLimitCpu>
            <resourceLimitMemory></resourceLimitMemory>
            <envVars>
              <org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>
                <key>DOCKER_HOST</key>
                <value>tcp://127.0.0.1:2375</value>
              </org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>
              <org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>
                <key>GOOGLE_APPLICATION_CREDENTIALS</key>
                <value>/gcloud/credentials.json</value>
              </org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>
              <org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>
                <key>CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE</key>
                <value>/gcloud/credentials.json</value>
              </org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>
            </envVars>
            <ports/>
            <livenessProbe>
              <execArgs></execArgs>
              <timeoutSeconds>0</timeoutSeconds>
              <initialDelaySeconds>0</initialDelaySeconds>
              <failureThreshold>0</failureThreshold>
              <periodSeconds>0</periodSeconds>
              <successThreshold>0</successThreshold>
            </livenessProbe>
          </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
          <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
            <name>docker-engine</name>
            <image>docker:dind</image>
            <privileged>true</privileged>
            <alwaysPullImage>false</alwaysPullImage>
            <workingDir></workingDir>
            <command></command>
            <args></args>
            <ttyEnabled>true</ttyEnabled>
            <resourceRequestCpu>1</resourceRequestCpu>
            <resourceRequestMemory>2Gi</resourceRequestMemory>
            <resourceLimitCpu>2100m</resourceLimitCpu>
            <resourceLimitMemory>3Gi</resourceLimitMemory>
            <envVars/>
            <ports/>
            <livenessProbe>
              <execArgs></execArgs>
              <timeoutSeconds>0</timeoutSeconds>
              <initialDelaySeconds>0</initialDelaySeconds>
              <failureThreshold>0</failureThreshold>
              <periodSeconds>0</periodSeconds>
              <successThreshold>0</successThreshold>
            </livenessProbe>
          </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
        </containers>
        <envVars/>
        <annotations/>
        <imagePullSecrets/>
      </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
      {code}
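
      Counting the mounts in this template explains the failing index: each of the six <volumes> entries declares a <mountPath>, so the seventh mount (index 6) is the workspaceVolume, which declares none. A quick sketch over a trimmed, hypothetical copy of the fragment (element names shortened for readability; this is not the plugin's own parser):

      ```python
      import xml.etree.ElementTree as ET

      # Trimmed copy of the pod template above: six explicit volumes with a
      # mountPath each, plus a workspaceVolume element that declares none.
      template_xml = """
      <PodTemplate>
        <customWorkspaceVolumeEnabled>true</customWorkspaceVolumeEnabled>
        <workspaceVolume class="EmptyDirWorkspaceVolume"><memory>false</memory></workspaceVolume>
        <volumes>
          <SecretVolume><mountPath>/home/jenkins/XXXX</mountPath></SecretVolume>
          <SecretVolume><mountPath>/docker-cfg</mountPath></SecretVolume>
          <SecretVolume><mountPath>/gcloud</mountPath></SecretVolume>
          <EmptyDirVolume><mountPath>/home/jenkins/workspace</mountPath></EmptyDirVolume>
          <EmptyDirVolume><mountPath>/tmp</mountPath></EmptyDirVolume>
          <EmptyDirVolume><mountPath>/var/lib/docker</mountPath></EmptyDirVolume>
        </volumes>
      </PodTemplate>
      """

      root = ET.fromstring(template_xml)
      explicit_paths = [v.findtext("mountPath") for v in root.find("volumes")]
      workspace_path = root.find("workspaceVolume").findtext("mountPath")

      print(explicit_paths)  # mounts 0..5, each with a path
      print(workspace_path)  # None: mount index 6 has no mountPath to send
      ```

      That missing path lines up with spec.containers[0].volumeMounts[6].mountPath in the rejection, which points at the customWorkspaceVolumeEnabled / EmptyDirWorkspaceVolume combination rather than at any of the explicitly configured volumes.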
      
      

        Attachments

            Screen Shot 2018-04-02 at 3.57.16 PM.png [ 42042 ]

            Activity

            pnovotnak Peter Novotnak created issue -
            pnovotnak Peter Novotnak made changes -
            Attachment: Screen Shot 2018-04-02 at 3.57.16 PM.png [ 42042 ]
            csanchez Carlos Sanchez made changes -
            Description: reformatted to wrap the logs and pod template in {code} blocks
            {{at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)}}
            {{at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)}}
            {{at java.lang.Thread.run(Thread.java:748)}}{{Apr 02, 2018 10:52:26 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate}}
            {{INFO: Terminating Kubernetes instance for agent standard-1-rxq2t}}
            {{Apr 02, 2018 10:52:26 PM okhttp3.internal.platform.Platform log}}
            {{INFO: ALPN callback dropped: HTTP/2 is disabled. Is alpn-boot on the boot class path?}}
            {{Apr 02, 2018 10:52:26 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch}}
            {{WARNING: Error in provisioning; agent=KubernetesSlave name: standard-1-2btdk, template=PodTemplate\{inheritFrom='', name='standard-1', namespace='', instanceCap=20, idleMinutes=30, label='standard-1 worker', nodeSelector='', nodeUsageMode=NORMAL, customWorkspaceVolumeEnabled=true, workspaceVolume=org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume@565b8276, volumes=[org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume@20a0b5ee, org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume@4bc6165a, org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume@41046566, org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume@372e4c46, org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume@12049088, org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume@690b3054], containers=[ContainerTemplate\{name='jnlp', image='gcr.io/myproj/cyrusbio-jnlp-minion:master', alwaysPullImage=true, workingDir='/home/jenkins', command='', args='', ttyEnabled=true, resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory='', envVars=[KeyValueEnvVar [getValue()=tcp://127.0.0.1:2375, getKey()=DOCKER_HOST], KeyValueEnvVar [getValue()=/gcloud/credentials.json, getKey()=GOOGLE_APPLICATION_CREDENTIALS], KeyValueEnvVar [getValue()=/gcloud/credentials.json, getKey()=CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE]], livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@3c38cba6}, ContainerTemplate\{name='docker-engine', image='docker:dind', privileged=true, workingDir='', command='', args='', ttyEnabled=true, resourceRequestCpu='1', resourceRequestMemory='2Gi', resourceLimitCpu='2100m', resourceLimitMemory='3Gi', livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@b54246a}]}}}
            {{io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://kubernetes.default/api/v1/namespaces/default/pods. Message: Pod "standard-1-2btdk" is invalid: spec.containers[0].volumeMounts[6].mountPath: Required value. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.containers[0].volumeMounts[6].mountPath, message=Required value, reason=FieldValueRequired, additionalProperties=\{})], group=null, kind=Pod, name=standard-1-2btdk, retryAfterSeconds=null, uid=null, additionalProperties=\{}), kind=Status, message=Pod "standard-1-2btdk" is invalid: spec.containers[0].volumeMounts[6].mountPath: Required value, metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties=\{}), reason=Invalid, status=Failure, additionalProperties=\{}).}}
            {{at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:472)}}
            {{at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:411)}}
            {{at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)}}
            {{at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)}}
            {{at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)}}
            {{at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:756)}}
            {{at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:334)}}
            {{at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:105)}}
            {{at hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:288)}}
            {{at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)}}
            {{at java.util.concurrent.FutureTask.run(FutureTask.java:266)}}
            {{at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)}}
            {{at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)}}
            {{at java.lang.Thread.run(Thread.java:748)}}{{Apr 02, 2018 10:52:26 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate}}
            {code}
             

            Here's the pod template from my config.xml:

             
            {code}
            <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
              <inheritFrom></inheritFrom>
              <name>standard-1</name>
              <namespace></namespace>
              <privileged>false</privileged>
              <alwaysPullImage>false</alwaysPullImage>
              <instanceCap>20</instanceCap>
              <slaveConnectTimeout>100</slaveConnectTimeout>
              <idleMinutes>30</idleMinutes>
              <activeDeadlineSeconds>0</activeDeadlineSeconds>
              <label>standard-1 worker</label>
              <nodeSelector></nodeSelector>
              <nodeUsageMode>NORMAL</nodeUsageMode>
              <customWorkspaceVolumeEnabled>true</customWorkspaceVolumeEnabled>
              <workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
                <memory>false</memory>
              </workspaceVolume>
              <volumes>
                <org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
                  <mountPath>/home/jenkins/XXXX</mountPath>
                  <secretName>XXXXXXXXX</secretName>
                </org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
                <org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
                  <mountPath>/docker-cfg</mountPath>
                  <secretName>XXXXXXXXX</secretName>
                </org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
                <org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
                  <mountPath>/gcloud</mountPath>
                  <secretName>XXXXXXXXX</secretName>
                </org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>
                <org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume>
                  <mountPath>/home/jenkins/workspace</mountPath>
                  <memory>false</memory>
                </org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume>
                <org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume>
                  <mountPath>/tmp</mountPath>
                  <memory>false</memory>
                </org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume>
                <org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume>
                  <mountPath>/var/lib/docker</mountPath>
                  <memory>false</memory>
                </org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume>
              </volumes>
              <containers>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
                  <name>jnlp</name>
                  <image>gcr.io/myproj/cyrusbio-jnlp-minion:master</image>
                  <privileged>false</privileged>
                  <alwaysPullImage>true</alwaysPullImage>
                  <workingDir>/home/jenkins</workingDir>
                  <command></command>
                  <args></args>
                  <ttyEnabled>true</ttyEnabled>
                  <resourceRequestCpu></resourceRequestCpu>
                  <resourceRequestMemory></resourceRequestMemory>
                  <resourceLimitCpu></resourceLimitCpu>
                  <resourceLimitMemory></resourceLimitMemory>
                  <envVars>
                    <org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>
                      <key>DOCKER_HOST</key>
                      <value>tcp://127.0.0.1:2375</value>
                    </org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>
                    <org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>
                      <key>GOOGLE_APPLICATION_CREDENTIALS</key>
                      <value>/gcloud/credentials.json</value>
                    </org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>
                    <org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>
                      <key>CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE</key>
                      <value>/gcloud/credentials.json</value>
                    </org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>
                  </envVars>
                  <ports/>
                  <livenessProbe>
                    <execArgs></execArgs>
                    <timeoutSeconds>0</timeoutSeconds>
                    <initialDelaySeconds>0</initialDelaySeconds>
                    <failureThreshold>0</failureThreshold>
                    <periodSeconds>0</periodSeconds>
                    <successThreshold>0</successThreshold>
                  </livenessProbe>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
                <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
                  <name>docker-engine</name>
                  <image>docker:dind</image>
                  <privileged>true</privileged>
                  <alwaysPullImage>false</alwaysPullImage>
                  <workingDir></workingDir>
                  <command></command>
                  <args></args>
                  <ttyEnabled>true</ttyEnabled>
                  <resourceRequestCpu>1</resourceRequestCpu>
                  <resourceRequestMemory>2Gi</resourceRequestMemory>
                  <resourceLimitCpu>2100m</resourceLimitCpu>
                  <resourceLimitMemory>3Gi</resourceLimitMemory>
                  <envVars/>
                  <ports/>
                  <livenessProbe>
                    <execArgs></execArgs>
                    <timeoutSeconds>0</timeoutSeconds>
                    <initialDelaySeconds>0</initialDelaySeconds>
                    <failureThreshold>0</failureThreshold>
                    <periodSeconds>0</periodSeconds>
                    <successThreshold>0</successThreshold>
                  </livenessProbe>
                </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
              </containers>
              <envVars/>
              <annotations/>
              <imagePullSecrets/>
            </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>

            {code}
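The 422 above is the API server rejecting the generated pod because `spec.containers[0].volumeMounts[6].mountPath` is empty. With six volumes configured in the template, the seventh mount (index 6) would be the implicit workspace mount, which suggests that mount was emitted without a path. As a hypothetical diagnostic (not part of the plugin), a few lines of Python can flag such entries in a pod manifest before it is POSTed; the `pod` dict below is a trimmed-down, made-up example:

```python
# Sketch: flag volumeMounts whose mountPath is missing or empty, mirroring
# the API server's "mountPath: Required value" 422 rejection seen above.
def missing_mount_paths(pod):
    """Return dotted field paths of volumeMounts lacking a mountPath."""
    problems = []
    for ci, container in enumerate(pod.get("spec", {}).get("containers", [])):
        for vi, vm in enumerate(container.get("volumeMounts", [])):
            if not vm.get("mountPath"):
                problems.append(f"spec.containers[{ci}].volumeMounts[{vi}].mountPath")
    return problems

# Hypothetical, heavily trimmed pod spec illustrating the failure mode.
pod = {
    "spec": {
        "containers": [
            {
                "name": "jnlp",
                "volumeMounts": [
                    {"name": "workspace-volume", "mountPath": "/home/jenkins"},
                    {"name": "volume-6", "mountPath": ""},  # empty -> rejected
                ],
            }
        ]
    }
}

print(missing_mount_paths(pod))  # ['spec.containers[0].volumeMounts[1].mountPath']
```

Running the same check against the manifest the plugin actually generates (e.g. captured from a debug log) should point at the exact volumeMounts entry the API server complains about.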
            Simon Wydooghe added a comment -

            I also had to roll back to 1.14 because of issues spinning up pods. The error was likewise related to mounted volumes (a local hostPath volume); it complained about a value not being unique. I will upgrade again when Jenkins is not in use and try to reproduce it so I can provide logs and the podTemplate.

            Adrian Menard added a comment - edited

            We also had this issue. After updating to 1.15 we started getting `Error in provisioning; agent=KubernetesSlave` in the Jenkins logs, with error messages very similar to those reported above. We had to roll back to 1.14. We are on Jenkins 2.114.

            Simon Wydooghe added a comment -
            WARNING: Error in provisioning; agent=KubernetesSlave name: jenkins-agent-github-drupal8-site-cms-nieuws-master-59-5b-cdvjx, template=PodTemplate{, name='jenkins-agent-github-drupal8-site-cms-nieuws-master-59-5bjkm', namespace='ops-prod', label='8a6f6637-d8e3-48e6-9c18-8a1f7082675d', serviceAccount='jenkins', nodeSelector='cloud.google.com/gke-preemptible=true,cloud.google.com/gke-local-ssd=true', nodeUsageMode=EXCLUSIVE, volumes=[org.csanchez.jenkins.plugins.kubernetes.volumes.HostPathVolume@63b25977], containers=[ContainerTemplate{name='jnlp', image='medialaan/jenkins-jnlp-slave:3.16-1', alwaysPullImage=true, workingDir='/home/jenkins', args='${computer.jnlpmac} ${computer.name}'}, ContainerTemplate{name='composer', image='medialaan/composer:1.0.0', alwaysPullImage=true, workingDir='/home/jenkins', command='cat', ttyEnabled=true}, ContainerTemplate{name='bazel', image='eu.gcr.io/medialaan-production/bazel:0.1.0', alwaysPullImage=true, workingDir='/home/jenkins', command='cat', ttyEnabled=true}, ContainerTemplate{name='gcp', image='eu.gcr.io/medialaan-production/gcp:0.1.0', alwaysPullImage=true, workingDir='/home/jenkins', command='cat', ttyEnabled=true, envVars=[KeyValueEnvVar [getValue()=$PWD/.kube/config, getKey()=KUBECONFIG], KeyValueEnvVar [getValue()=$PWD/.config, getKey()=CLOUDSDK_CONFIG], KeyValueEnvVar [getValue()=$PWD/.helm, getKey()=HELM_HOME]]}]}
            io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://kubernetes.default/api/v1/namespaces/ops-prod/pods. Message: Pod "jenkins-agent-github-drupal8-site-cms-nieuws-master-59-5b-cdvjx" is invalid: [spec.containers[0].volumeMounts[1].mountPath: Invalid value: "/home/jenkins": must be unique, spec.containers[1].volumeMounts[1].mountPath: Invalid value: "/home/jenkins": must be unique, spec.containers[2].volumeMounts[1].mountPath: Invalid value: "/home/jenkins": must be unique, spec.containers[3].volumeMounts[1].mountPath: Invalid value: "/home/jenkins": must be unique]. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.containers[0].volumeMounts[1].mountPath, message=Invalid value: "/home/jenkins": must be unique, reason=FieldValueInvalid, additionalProperties={}), StatusCause(field=spec.containers[1].volumeMounts[1].mountPath, message=Invalid value: "/home/jenkins": must be unique, reason=FieldValueInvalid, additionalProperties={}), StatusCause(field=spec.containers[2].volumeMounts[1].mountPath, message=Invalid value: "/home/jenkins": must be unique, reason=FieldValueInvalid, additionalProperties={}), StatusCause(field=spec.containers[3].volumeMounts[1].mountPath, message=Invalid value: "/home/jenkins": must be unique, reason=FieldValueInvalid, additionalProperties={})], group=null, kind=Pod, name=jenkins-agent-github-drupal8-site-cms-nieuws-master-59-5b-cdvjx, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Pod "jenkins-agent-github-drupal8-site-cms-nieuws-master-59-5b-cdvjx" is invalid: [spec.containers[0].volumeMounts[1].mountPath: Invalid value: "/home/jenkins": must be unique, spec.containers[1].volumeMounts[1].mountPath: Invalid value: "/home/jenkins": must be unique, spec.containers[2].volumeMounts[1].mountPath: Invalid value: "/home/jenkins": must be unique, spec.containers[3].volumeMounts[1].mountPath: Invalid value: 
"/home/jenkins": must be unique], metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
            

            This is the podTemplate:

            podTemplate(label: podTemplateLabel, name: podTemplateName, serviceAccount: 'jenkins',
                volumes: [hostPathVolume(hostPath: '/mnt/disks/ssd0', mountPath: '/home/jenkins')],
                nodeSelector: 'cloud.google.com/gke-preemptible=true,cloud.google.com/gke-local-ssd=true',
                nodeUsageMode: 'EXCLUSIVE',
                containers: [
                    containerTemplate(
                        name: 'jnlp',
                        image: 'medialaan/jenkins-jnlp-slave:3.16-1',
                        alwaysPullImage: true,
                        args: '${computer.jnlpmac} ${computer.name}'
                    ),
                    containerTemplate(
                        name: 'composer',
                        image: 'medialaan/composer:1.0.0',
                        alwaysPullImage: true,
                        ttyEnabled: true,
                        command: 'cat'
                    ),
                    containerTemplate(
                        name: 'bazel',
                        image: 'eu.gcr.io/medialaan-production/bazel:0.1.0',
                        alwaysPullImage: true,
                        ttyEnabled: true,
                        command: 'cat'
                    ),
                    containerTemplate(
                        name: 'gcp',
                        image: 'eu.gcr.io/medialaan-production/gcp:0.1.0',
                        alwaysPullImage: true,
                        ttyEnabled: true,
                        command: 'cat',
                        envVars: [
                            envVar(key: 'KUBECONFIG', value: '$PWD/.kube/config'),
                            envVar(key: 'CLOUDSDK_CONFIG', value: '$PWD/.config'),
                            envVar(key: 'HELM_HOME', value: '$PWD/.helm')
                        ]
                    )
                ])
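This second failure trips a different validation rule: every mountPath is present, but two mounts in the same container target the same path. The hostPathVolume here is mounted at /home/jenkins, which is also each container's workingDir, so the workspace mount lands on the same path. A small hypothetical check (plain Python over simplified pod-spec dicts, not plugin code) for that condition:

```python
from collections import Counter

# Sketch: detect duplicate mountPath values within a single container -- the
# condition behind the 'Invalid value: "/home/jenkins": must be unique' errors.
def duplicate_mount_paths(container):
    """Return the sorted list of mountPath values that appear more than once."""
    counts = Counter(vm["mountPath"] for vm in container.get("volumeMounts", []))
    return sorted(path for path, n in counts.items() if n > 1)

# Hypothetical simplified container entry reproducing the collision above.
jnlp = {
    "name": "jnlp",
    "volumeMounts": [
        {"name": "workspace-volume", "mountPath": "/home/jenkins"},
        {"name": "volume-0", "mountPath": "/home/jenkins"},  # hostPath collides
    ],
}

print(duplicate_mount_paths(jnlp))  # ['/home/jenkins']
```

If the collision is between the hostPath mount and the workspace mount, pointing one of them at a different path should avoid it.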
            
            csanchez Carlos Sanchez made changes -
            Status Open [ 1 ] In Progress [ 3 ]
            csanchez Carlos Sanchez made changes -
            Remote Link This issue links to "PR-303 (Web Link)" [ 20371 ]
            csanchez Carlos Sanchez made changes -
            Status In Progress [ 3 ] In Review [ 10005 ]
            Carlos Sanchez added a comment -

            Merged https://github.com/jenkinsci/kubernetes-plugin/pull/303
            csanchez Carlos Sanchez made changes -
            Status In Review [ 10005 ] Resolved [ 5 ]
            Resolution Fixed [ 1 ]
            Simon Wydooghe added a comment -

            Cool, cheers!

            Jae Gangemi added a comment -

            I'm now getting this error after upgrading, for a pod template that was working before and has no volumes or mount paths associated with it at all:

            Error in provisioning; agent=KubernetesSlave name: kubectl-8-debian-lv201, template=PodTemplate{inheritFrom='', name='kubectl-8-debian', namespace='', label='kubectl-8-debian', nodeSelector='', nodeUsageMode=NORMAL, workspaceVolume=org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume@2d52e6f4, containers=[ContainerTemplate{name='jnlp', image='registry.battery-park.conductor.com/jenkins-slave-8-kubectl-debian', alwaysPullImage=true, workingDir='', command='', args='${computer.jnlpmac} ${computer.name}', ttyEnabled=true, resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory='', livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@774c827b}]}
            io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: http://jenkins-kubernetes-master.infra.us-east-1.conductor.sh/api/v1/namespaces/beta/pods. Message: Pod "kubectl-8-debian-lv201" is invalid: spec.containers[0].volumeMounts[0].mountPath: Required value. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.containers[0].volumeMounts[0].mountPath, message=Required value, reason=FieldValueRequired, additionalProperties={})], group=null, kind=Pod, name=kubectl-8-debian-lv201, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Pod "kubectl-8-debian-lv201" is invalid: spec.containers[0].volumeMounts[0].mountPath: Required value, metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
            	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:472)
            	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:411)
            	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
            	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
            	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)
            	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:756)
            	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:334)
            	at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:105)
            	at hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:288)
            	at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
            	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
            	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
            	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
            	at java.lang.Thread.run(Thread.java:748)
            
            Carlos Sanchez added a comment -

            Jae Gangemi: 1.5.0 or 1.5.1?

            Jae Gangemi added a comment -

            both

            Hide
            jgangemi Jae Gangemi added a comment -

            I tried mounting a host volume just to see if that would populate the field, but I still got the same error.

            csanchez Carlos Sanchez added a comment -

            Do you have a minimal podTemplate example that shows the issue?

            jgangemi Jae Gangemi made changes -
            Attachment image-2018-04-10-10-48-36-384.png [ 42138 ]
            jgangemi Jae Gangemi added a comment -

            jgangemi Jae Gangemi added a comment -

            I have no other values set in the pod template other than the container definition. I am currently using version 0.11 - I had tried newer 1.x versions but had to roll back b/c of that issue where an additional container was spun up, which I believe was fixed in 1.3.1.

            omnamenard Adrian Menard added a comment -

            Tried updating to plugin version 1.5.2 this morning. Slave pods still fail to provision. Here is the error from the Jenkins master log:
            Apr 17, 2018 3:42:59 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher launch
            WARNING: Error in provisioning; agent=KubernetesSlave name: jnlp-tf989, template=PodTemplate{inheritFrom='', name='jnlp', namespace='jenkins', label='k8s', nodeSelector='', nodeUsageMode=NORMAL, workspaceVolume=org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume@2dc11a61, volumes=[HostPathVolume [mountPath=/usr/bin/docker, hostPath=/usr/bin/docker], HostPathVolume [mountPath=/var/run/docker.sock, hostPath=/var/run/docker.sock], org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume@227cf16a], containers=[ContainerTemplate{name='jnlp', image='gcr.io/ops-production/jenkins-slave:v8', alwaysPullImage=true, workingDir='/home/jenkins', command='', args='${computer.jnlpmac} ${computer.name}', resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory='', livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@10dce3c9}, ContainerTemplate{name='postgres', image='postgres', alwaysPullImage=true, workingDir='', command='', args='', resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory='', livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@33f0d515}, ContainerTemplate{name='protobuf-mocks', image='gcr.io/ops-production/protobuf-mocks:v2', workingDir='', command='', args='${computer.jnlpmac} ${computer.name}', resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory='', livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@d8cfaae}]}
            io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://kubernetes.default/api/v1/namespaces/jenkins/pods. Message: Pod "jnlp-tf989" is invalid: [spec.containers[0].volumeMounts[3].mountPath: Required value, spec.containers[2].volumeMounts[3].mountPath: Required value]. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.containers[0].volumeMounts[3].mountPath, message=Required value, reason=FieldValueRequired, additionalProperties={}), StatusCause(field=spec.containers[2].volumeMounts[3].mountPath, message=Required value, reason=FieldValueRequired, additionalProperties={})], group=null, kind=Pod, name=jnlp-tf989, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Pod "jnlp-tf989" is invalid: [spec.containers[0].volumeMounts[3].mountPath: Required value, spec.containers[2].volumeMounts[3].mountPath: Required value], metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
            at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:472)
            at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:411)
            at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
            at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
            at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)
            at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:756)
            at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:334)
            at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:105)
            at hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:288)
            at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
            at java.util.concurrent.FutureTask.run(FutureTask.java:266)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
            at java.lang.Thread.run(Thread.java:748)
            Apr 17, 2018 3:42:59 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave _terminate
            INFO: Terminating Kubernetes instance for agent jnlp-tf989
            Apr 17, 2018 3:42:59 PM okhttp3.internal.platform.Platform log

            csanchez Carlos Sanchez made changes -
            Resolution Fixed [ 1 ]
            Status Resolved [ 5 ] Reopened [ 4 ]
            csanchez Carlos Sanchez made changes -
            Assignee Carlos Sanchez [ csanchez ]
            csanchez Carlos Sanchez made changes -
            Link This issue is duplicated by JENKINS-50801 [ JENKINS-50801 ]
            deiwin Deiwin Sarjas added a comment -

            This is marked with a priority "Minor", which is defined as "Minor loss of function, or other problem where easy workaround is present.". Not being able to provision pods feels pretty major to me and as far as I can see there's no known workaround.

            pnovotnak Peter Novotnak added a comment - - edited

            Just wanted to say: I'm still getting this with version 1.6. Let me know if I can provide any additional debugging info. Deiwin Sarjas it does make the plugin unusable, and it was also miscategorized as a feature request (these are both my bad). I changed both fields to better reflect the state of affairs.

             

            Carlos Sanchez Is there a way to log the entire pod spec being sent to kubernetes? That might help narrow down the issue.

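            (Editorial aside on logging the pod spec: Jenkins can raise a plugin's log level via Manage Jenkins › System Log or the script console. A sketch using plain java.util.logging from the script console — assumes admin access; the logger name is the plugin's package, and whether the full pod spec appears at FINEST depends on the plugin version:)

```groovy
// Script console sketch: raise logging for the kubernetes plugin to FINEST
// so its provisioning details show up in the Jenkins system log.
import java.util.logging.Level
import java.util.logging.Logger

def log = Logger.getLogger('org.csanchez.jenkins.plugins.kubernetes')
log.setLevel(Level.FINEST)
```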
            pnovotnak Peter Novotnak made changes -
            Issue Type New Feature [ 2 ] Bug [ 1 ]
            pnovotnak Peter Novotnak made changes -
            Priority Minor [ 4 ] Major [ 3 ]
            jgangemi Jae Gangemi added a comment -

            I'm currently running 1.4.1 w/o any issue. If you don't have the option to roll back to that, you can download it directly from the Jenkins Maven repository and install it, which is what I did.

            pnovotnak Peter Novotnak added a comment -

            For reference, the kubernetes plugin repository is here: https://updates.jenkins-ci.org/download/plugins/kubernetes/

            twz123 Tom Wieczorek added a comment -

            I fixed this by removing the custom emptyDir volume for the Jenkins workspace from the Pod Template's configuration. Plugin versions starting with 1.5 apparently add their own volume mount for the Jenkins workspace automatically. This collides with the custom one, resulting in a broken volume mount config.

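            (Editorial sketch of that workaround using the plugin's Pipeline podTemplate step — label and image here are placeholders, not from the thread:)

```groovy
// Workaround sketch: let the plugin manage the workspace volume itself.
// Do NOT declare a custom emptyDir volume mounted at the workspace path —
// plugin versions from 1.5 on add their own workspace mount, and a
// duplicate produces the broken volumeMounts rejected above.
// 'workaround-demo' and the image are placeholder names.
podTemplate(label: 'workaround-demo', containers: [
    containerTemplate(name: 'jnlp',
                      image: 'jenkins/jnlp-slave',
                      args: '${computer.jnlpmac} ${computer.name}')
]) {
    node('workaround-demo') {
        sh 'echo the workspace volume is mounted by the plugin'
    }
}
```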
            korovkind Dmitriy Korovkin made changes -
            Priority Major [ 3 ] Blocker [ 1 ]
            korovkind Dmitriy Korovkin added a comment -

            This bug blocks upgrading from version 1.4 to the latest one.

            csanchez Carlos Sanchez made changes -
            Status Reopened [ 4 ] Open [ 1 ]
            csanchez Carlos Sanchez made changes -
            Assignee Carlos Sanchez [ csanchez ]
            csanchez Carlos Sanchez made changes -
            Status Open [ 1 ] In Progress [ 3 ]
            csanchez Carlos Sanchez made changes -
            Status In Progress [ 3 ] In Review [ 10005 ]
            csanchez Carlos Sanchez added a comment - https://github.com/jenkinsci/kubernetes-plugin/pull/346
            csanchez Carlos Sanchez made changes -
            Status In Review [ 10005 ] Resolved [ 5 ]
            Resolution Fixed [ 1 ]
            jgangemi Jae Gangemi added a comment -

            This is still broken. I just upgraded to 1.8.4 and got this error:

            Error in provisioning; agent=KubernetesSlave name: maven-8-debian-lms2j, template=PodTemplate{inheritFrom='', name='maven-8-debian', slaveConnectTimeout=0, label='maven-8-debian', nodeSelector='', workspaceVolume=EmptyDirWorkspaceVolume [memory=false], containers=[ContainerTemplate{name='jnlp', image='registry.battery-park.conductor.com/jenkins-slave-8-maven-debian', alwaysPullImage=true, command='', args='${computer.jnlpmac} ${computer.name}', ttyEnabled=true, resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory=''}]}
            java.lang.IllegalStateException: Containers are terminated with exit codes: {jnlp=127}
            	at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:156)
            	at hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:294)
            	at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
            	at jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:71)
            	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
            	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
            	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
            	at java.lang.Thread.run(Thread.java:748)

            everything works using 1.4.1

            jgangemi Jae Gangemi made changes -
            Resolution Fixed [ 1 ]
            Status Resolved [ 5 ] Reopened [ 4 ]
            csanchez Carlos Sanchez added a comment -

            That is a different error. What are the pod logs?

            jgangemi Jae Gangemi added a comment -

            Then something else has changed here, b/c my pod configurations aren't changing between plugin upgrades.

            korovkind Dmitriy Korovkin added a comment -

            I've upgraded to 1.8.4 version. It is working now, thanks!

            jgangemi Jae Gangemi added a comment -

            Is there some way to prevent the plugin from immediately destroying the container when it fails to provision? I can't catch it fast enough to look at the logs.

            jgangemi Jae Gangemi added a comment -

            So right off the bat, it seems I am now forced to have some value for the working directory, which defaults to /home/jenkins if nothing is specified; this was NOT required as part of 1.4.1.

             

            csanchez Carlos Sanchez added a comment -

            See https://github.com/jenkinsci/kubernetes-plugin/pull/347 for the pod retention proposal.
            For other issues please open a new JIRA.

            csanchez Carlos Sanchez made changes -
            Status Reopened [ 4 ] Resolved [ 5 ]
            Resolution Fixed [ 1 ]

              People

              • Assignee:
                csanchez Carlos Sanchez
                Reporter:
                pnovotnak Peter Novotnak
              • Votes:
                8
                Watchers:
                13
