JENKINS-50801: Cannot provision Slave Pod: mountPath: Required value

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Duplicate
    • Component/s: kubernetes-plugin
    • Labels: None

      Description

      We are running the standard Jenkins (LTS) container on Kubernetes, and it provisions build slaves on the same cluster. After upgrading the Kubernetes plugin from v1.3.2 to v1.5.1, Jenkins could no longer provision any build slaves, and the Jenkins logs showed many errors like:

       

{code:java}
Error in provisioning; agent=KubernetesSlave name: builder-lfc8x, template=PodTemplate{, name='builder', namespace='jenkins', label='builder', nodeSelector='', nodeUsageMode=NORMAL, workspaceVolume=org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume@68a57ce6, volumes=[org.csanchez.jenkins.plugins.kubernetes.volumes.EmptyDirVolume@8e436e0, HostPathVolume [mountPath=/var/run/docker.sock, hostPath=/var/run/docker.sock], HostPathVolume [mountPath=/usr/bin/docker, hostPath=/usr/bin/docker]], containers=[ContainerTemplate{name='git-auth', image='us.gcr.io/myproject/git-auth:v1', workingDir='', command='', args='', resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory='', livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@4f8c0bfd}, ContainerTemplate{name='jnlp', image='us.gcr.io/myproject/jnlp-linux-node:v1', workingDir='/workspace/', command='', args='', resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory='', livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@26954299}, ContainerTemplate{name='builder', image='us.gcr.io/myproject/builder:v8', workingDir='/workspace/', command='cat', args='', ttyEnabled=true, resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory='', livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@6d1df705}]}
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://kubernetes.default/api/v1/namespaces/jenkins/pods. Message: Pod "builder-lfc8x" is invalid: spec.containers[0].volumeMounts[3].mountPath: Required value. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec.containers[0].volumeMounts[3].mountPath, message=Required value, reason=FieldValueRequired, additionalProperties={})], group=null, kind=Pod, name=builder-lfc8x, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=Pod "builder-lfc8x" is invalid: spec.containers[0].volumeMounts[3].mountPath: Required value, metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:472)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:411)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
    at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:756)
    at io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:334)
    at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:105)
    at hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:288)
    at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
{code}

      This is probably related to https://issues.jenkins-ci.org/browse/JENKINS-50525, but since that fix is included in 1.5.1, perhaps we are experiencing something slightly different.
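      For reference, the template in the dump above corresponds roughly to the following pipeline configuration. This is a hypothetical reconstruction from the toString output, not our literal config; in particular, the empty mountPath on the emptyDirVolume is only a guess at which of the four mounts ends up with no path.

{code:groovy}
// Hypothetical reconstruction of the failing pod template. Names and images
// are taken from the log above; everything else is guessed.
podTemplate(label: 'builder', namespace: 'jenkins',
    workspaceVolume: emptyDirWorkspaceVolume(),
    volumes: [
        // Suspect: an emptyDirVolume whose mountPath ends up empty would
        // yield a volumeMount with no path -- matching the 422 the API
        // server returns for spec.containers[0].volumeMounts[3].mountPath.
        emptyDirVolume(mountPath: ''),
        hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock'),
        hostPathVolume(hostPath: '/usr/bin/docker', mountPath: '/usr/bin/docker')
    ],
    containers: [
        containerTemplate(name: 'git-auth', image: 'us.gcr.io/myproject/git-auth:v1'),
        containerTemplate(name: 'jnlp', image: 'us.gcr.io/myproject/jnlp-linux-node:v1', workingDir: '/workspace/'),
        containerTemplate(name: 'builder', image: 'us.gcr.io/myproject/builder:v8',
            workingDir: '/workspace/', command: 'cat', ttyEnabled: true)
    ]) {
    node('builder') {
        container('builder') {
            sh 'echo build steps run here'
        }
    }
}
{code}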

      We have reverted the Kubernetes plugin to 1.3.2 while we figure this out.
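      In case it helps anyone else holding the plugin back: one way to pin the version, assuming the official jenkins/jenkins Docker image and its bundled install-plugins.sh helper (a sketch, not our exact setup):

{code}
# Sketch: rebuild the controller image with the Kubernetes plugin pinned.
# install-plugins.sh in the official jenkins/jenkins image accepts
# "plugin-id:version" arguments and resolves plugin dependencies.
FROM jenkins/jenkins:lts
RUN /usr/local/bin/install-plugins.sh kubernetes:1.3.2
{code}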

       

    Issue Links

    • This issue duplicates JENKINS-50525

    Activity

    rwehner Robert Wehner created issue
    rwehner Robert Wehner made changes - Description (formatting)
    rwehner Robert Wehner made changes - Description (wrapped the log in a code block)
    csanchez Carlos Sanchez made changes - Link: This issue duplicates JENKINS-50525
    csanchez Carlos Sanchez made changes - Status: Open → Closed; Resolution: Duplicate

    People

    • Assignee: csanchez Carlos Sanchez
    • Reporter: rwehner Robert Wehner
    • Votes: 0
    • Watchers: 2
