Type: Bug
Status: Closed
Priority: Major
Resolution: Not A Defect
Component/s: kubernetes-plugin
Labels: None
When I try to override the jnlp container image, I still always get the alpine image and not the specified image. It seems that even though I specify a container with the name jnlp, it is just ignored and the default alpine image is used.
agent {
    kubernetes {
        label "human-review-ui-pipeline-${env.BUILD_ID}"
        defaultContainer 'jnlp'
        yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    pod-template: jenkins-slave-npm
spec:
  containers:
  - name: jnlp
    image: "redhat-cop/jenkins-slave-image-mgmt"
"""
    }
}
is duplicated by: JENKINS-56375 custom jnlp not working (Resolved)
Carlos Sanchez, I have the same problem:
I have a custom pod template with a jnlp container override created via the UI (custom image). When I use its label from my pipelines, everything works fine.
However, once I try to extend this pod with additional containers via the inheritFrom statement or using pod nesting, I get the default jnlp container with the default alpine image.
My plugin version is 1.13.8.
I've tried to enable the plugin logging using the Jenkins custom loggers feature, but it's always empty for some reason.
Update:
I've upgraded Jenkins to version 2.150.1 and the plugin to 1.14.3; now I am able to override the jnlp container image, but podTemplate inheritance is still broken. My settings are:
The pipeline code is:
podTemplate(label: 'myPod', inheritFrom: 'defaultpod', containers: [
    containerTemplate(name: 'jnlp', image: 'mycustomimage'),
    containerTemplate(name: 'golang', image: 'golang:1.8.0', ttyEnabled: true, command: 'cat')
]) {
    node('myPod') {
        ....
but the pod template I get in k8s doesn't have the TEST_VAR variable defined
you are overriding the jnlp container in your pod template, so that's the one that gets picked up, with no env vars
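For context, a minimal sketch of what that implies (the TEST_VAR value shown is hypothetical): since env vars from the parent template are not merged into an overriding jnlp container, they have to be declared again on that containerTemplate.

// Sketch only: re-declare env vars on the overriding jnlp container,
// because nothing is inherited from the parent template's jnlp container.
podTemplate(label: 'myPod', inheritFrom: 'defaultpod', containers: [
    containerTemplate(name: 'jnlp', image: 'mycustomimage',
        envVars: [envVar(key: 'TEST_VAR', value: 'some-value')]),   // hypothetical value
    containerTemplate(name: 'golang', image: 'golang:1.8.0', ttyEnabled: true, command: 'cat')
]) {
    node('myPod') {
        sh 'echo $TEST_VAR'
    }
}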
Carlos Sanchez I don't understand why you closed this?
> you are overriding the jnlp container in your pod template, so that's the one that gets picked up, with no env vars
the problem is that I am overriding the JNLP container in my pod template in the declarative pipeline, and that is not the one getting picked up; I always get the default JNLP container.
That was for Dmitry
yaml is not merged with the parent, and UI-defined containers take precedence over yaml-defined containers
You can't mix UI inheritance with overriding yaml fields; you need to override using containerTemplate
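A minimal sketch of that containerTemplate-based override (the template, label, and image names are placeholders): when inheriting from a UI-defined pod template, the jnlp image is overridden here rather than in a yaml block, since yaml is not merged with the parent.

// Sketch only: override the inherited jnlp image via containerTemplate,
// not via the yaml field, when using inheritFrom.
podTemplate(label: 'my-inherited-pod', inheritFrom: 'defaultpod', containers: [
    containerTemplate(name: 'jnlp', image: 'myregistry/my-custom-jnlp:latest')
]) {
    node('my-inherited-pod') {
        sh 'echo running in the overridden jnlp container'
    }
}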
I am facing the same issue.
Here are the logs:
Error in provisioning; agent=KubernetesSlave name: eks-cluster-zjkg8, template=PodTemplate{inheritFrom='', name='eks-cluster', namespace='default', slaveConnectTimeout=30, label='eks-cluster', nodeSelector='', nodeUsageMode=NORMAL, workspaceVolume=EmptyDirWorkspaceVolume [memory=false], volumes=[HostPathVolume [mountPath=/var/run/docker.sock, hostPath=/var/run/docker.sock]], containers=[ContainerTemplate{name='jnlp', image='jenkins/jnlp-slave:alpine', alwaysPullImage=true, workingDir='/home/jenkins', command='/bin/sh -c', args='cat', ttyEnabled=true, resourceRequestCpu='', resourceRequestMemory='', resourceLimitCpu='', resourceLimitMemory='', livenessProbe=org.csanchez.jenkins.plugins.kubernetes.ContainerLivenessProbe@1c48445b}], yaml=}
java.lang.IllegalStateException: Agent is not connected after 30 seconds, status: Running
    at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:224)
    at hudson.slaves.SlaveComputer$1.call(SlaveComputer.java:294)
    at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:46)
    at jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:71)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
I tried with a custom image and it didn't work, so I used the official jenkins-jnlp image, and even that didn't work.
I'm using this in a scripted pipeline:
node('eks-cluster') {
    stage('Check jnlp') {
        sh 'rm -rf *'
    }
}
I also have a similar issue. Regardless of whether I override via yaml or the UI, the plugin always pulls jnlp:alpine even though my custom jnlp container is named jnlp.
Jenkins: 2.150.3 and Kubernetes plugin: 1.14.3
Can we please reopen this ticket, or explain what configuration will make my custom jnlp work?
suryatej yaramada, your pod is failing; you need to check why:
Agent is not connected after 30 seconds
so it is probably using an old agent, or another one you have with the same label
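One way to rule that out, as a rough sketch (the label and image here are illustrative): give each build a unique pod template label, as the original report does with ${env.BUILD_ID}, so an older agent with the same label cannot be picked up.

// Sketch only: a per-build label avoids reusing an older agent
// that happens to share the same static label.
podTemplate(label: "eks-cluster-${env.BUILD_ID}", containers: [
    containerTemplate(name: 'jnlp', image: 'jenkins/jnlp-slave:latest')
]) {
    node("eks-cluster-${env.BUILD_ID}") {
        stage('Check jnlp') {
            sh 'hostname'
        }
    }
}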
Carlos Sanchez I am still having this issue. I don't have any pod templates defined in the `Kubernetes` configuration; I only define the pod / containers in the declarative pipelines, so there is no merging going on. But for some reason, no matter what I put in for `image` for the `jnlp` container in the declarative pipeline, it gets ignored and the default alpine container gets used.
Same issue here with a declarative pipeline.
Jenkins version: 2.190.1
Kubernetes plugin: 1.20.1
I've created a pod template under the Cloud section in the UI, with the following options:
Pod Template:
  Name: jenkins-builder
  ...
Container Template:
  Name: my-jnlp
  Docker image: jenkins/jnlp-slave:latest
  Working directory: /home/jenkins/agent
  ...
Workspace Volume: PVC
  Claim name: jenkins-slave-claim
Then I created this basic pipeline:
pipeline {
  agent {
    kubernetes {
      defaultContainer 'my-jnlp'
      yaml """
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-builder
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - cat
    tty: true
"""
    }
  }
  stages {
    stage('start') {
      steps {
        container('busybox') {
          sh "ls"
        }
      }
    }
  }
}
In the console I always get the default jnlp container:
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations:
    buildUrl: "http://jenkins.default.svc.k8s.si.net:8080/job/test/20/"
  labels:
    jenkins: "slave"
    jenkins/test_20-s37fr: "true"
  name: "test-20-s37fr-xq14x-z8rq5"
spec:
  containers:
  - command:
    - "cat"
    image: "busybox"
    name: "busybox"
    tty: true
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - command:
    - "cat"
    image: "maven:3-alpine"
    name: "builder-new"
    tty: true
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_TUNNEL"
      value: "jenkins-agent.default.svc.k8s.si.net:50000"
    - name: "JENKINS_AGENT_NAME"
      value: "test-20-s37fr-xq14x-z8rq5"
    - name: "JENKINS_NAME"
      value: "test-20-s37fr-xq14x-z8rq5"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://jenkins.default.svc.k8s.si.net:8080/"
    image: "jenkins/jnlp-slave:alpine"
    name: "jnlp"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  nodeSelector: {}
  restartPolicy: "Never"
  volumes:
  - emptyDir:
      medium: ""
    name: "workspace-volume"
So it's not what I'd like to see, and I cannot figure out how to use my pod with the PVC.
Unfortunately, I cannot work around this with JENKINS-56375.
that should work, do you have the debug logs?
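For reference, a minimal untested sketch of a yaml-only variant of the setup discussed above (it reuses the claim name and working directory from the UI settings, and assumes the workspaceVolume option is available to the declarative agent in this plugin version): the custom agent container is declared under the name jnlp and the workspace PVC is set in the pipeline, so no UI template is involved.

// Sketch only: name the custom agent container exactly "jnlp" and
// declare the workspace PVC directly in the declarative agent block.
pipeline {
  agent {
    kubernetes {
      workspaceVolume persistentVolumeClaimWorkspaceVolume(claimName: 'jenkins-slave-claim', readOnly: false)
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: jenkins/jnlp-slave:latest
  - name: busybox
    image: busybox
    command:
    - cat
    tty: true
"""
    }
  }
  stages {
    stage('start') {
      steps {
        container('busybox') {
          sh "ls"
        }
      }
    }
  }
}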