JENKINS-44609

Docker inspect failing on named multi-stage builds


    Details

    • Type: Bug
    • Status: Reopened
    • Priority: Minor
    • Resolution: Unresolved
    • Component/s: docker-workflow-plugin
    • Labels:
      None
    • Environment:
      Debian Jessie x64
      Docker Pipeline 1.11
      Jenkins ver. 2.46.3
      Docker version 17.05.0-ce, build 89658be

      Description

      When using named stages in a multi-stage build, as in the example below, the Jenkins pipeline fails with the following message right after the build has finished.

      <SNIP>
      Successfully built b59ee5bc6b07
      Successfully tagged bytesheep/odr-dabmux:latest
      [Pipeline] dockerFingerprintFrom
      [Pipeline] }
      [Pipeline] // stage
      [Pipeline] }
      [Pipeline] // node
      [Pipeline] End of Pipeline
      java.io.IOException: Cannot retrieve .Id from 'docker inspectalpine:3.6 AS builder'
          at org.jenkinsci.plugins.docker.workflow.client.DockerClient.inspectRequiredField(DockerClient.java:193)
          at org.jenkinsci.plugins.docker.workflow.FromFingerprintStep$Execution.run(FromFingerprintStep.java:119)
          at org.jenkinsci.plugins.docker.workflow.FromFingerprintStep$Execution.run(FromFingerprintStep.java:75)
          at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1$1.call(AbstractSynchronousNonBlockingStepExecution.java:47)
          at hudson.security.ACL.impersonate(ACL.java:260)
          at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1.run(AbstractSynchronousNonBlockingStepExecution.java:44)
          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
          at java.util.concurrent.FutureTask.run(FutureTask.java:266)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
          at java.lang.Thread.run(Thread.java:748)
      Finished: FAILURE

       

      Dockerfile

      #
      # Build environment
      #
      FROM alpine:3.6 AS builder
      
      <SNIP>
      
      #
      # Create final container
      #
      FROM alpine:3.6
      
      <SNIP>
      
      # Copy artifacts from builder
      COPY --from=builder /usr/local .
      

       

      There is a workaround for this issue: remove the stage names and reference the stages by index instead. Example:

      FROM alpine:3.6
      
      <SNIP>
      
      FROM alpine:3.6
      
      COPY --from=0 /usr/local .

       

      Looking at the related source at #100, it seems the code determines the image name by looking at FROM and taking everything until EOL, which would include 'AS buildname'.

      At a glance it also looks like the code will take the first stage for fingerprinting instead of the final stage (which is the resulting image).
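
      For illustration only, a minimal Groovy sketch (not the plugin's actual code) of parsing that would avoid passing the stage name to docker inspect; the helper name is made up:

      // Hypothetical helper: keep only the image reference from a FROM line,
      // dropping any " AS <stage>" suffix introduced by multi-stage builds.
      String baseImageOf(String fromLine) {
          def tokens = fromLine.trim().split(/\s+/)   // e.g. ["FROM", "alpine:3.6", "AS", "builder"]
          return tokens.size() >= 2 ? tokens[1] : null
      }

      assert baseImageOf('FROM alpine:3.6 AS builder') == 'alpine:3.6'
      assert baseImageOf('FROM alpine:3.6') == 'alpine:3.6'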

        Attachments

          Issue Links

            Activity

            m4rcu5 Marcus van Dam created issue -
            dduportal Damien Duportal added a comment -

            One clue that might be useful here: https://github.com/moby/moby/pull/33185/files

            There is a "--target" option to docker build that can help. But if you target only the build stage, as in your example:

            docker build -t myimage:1.0.0 --target=build ./
            

            then the later stages (the other FROM instructions) will be ignored.

             

            The user need the docker-workflow plugin addresses (expressing build instructions that run inside a Docker-provided build environment) is essentially the same need that multi-stage builds address.

            The plugin does not currently seem useful with multi-stage builds: moving to a simple sh 'docker build -t image ./' should be enough there, apart from the need for fingerprinting, maybe?

            => The big concern is: how can Jenkins access the intermediate stage, e.g. for publishing unit test results or reports?

            For now, I'm parsing the docker build output and then using docker cp to get the files into the workspace, which is portable in terms of UID and less painful.
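
            For illustration only, a rough scripted-pipeline sketch of that workaround; the image name, artifact path, and the reliance on the "Successfully built" log line are my assumptions:

            node {
                // Build with plain sh and capture the output instead of using docker.build().
                def buildLog = sh(script: 'docker build .', returnStdout: true)
                def imageId = (buildLog =~ /Successfully built (\S+)/)[0][1]

                // Copy artifacts out of a throwaway container created from that image,
                // so file ownership in the workspace is not tied to the container's UID.
                sh """
                    cid=\$(docker create ${imageId})
                    docker cp "\$cid":/usr/local/reports ./reports
                    docker rm "\$cid"
                """
            }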

             

            rootvm Oleksandr Korniienko added a comment - - edited

            Another way: you can add an extra FROM as the first line:

             

            #Build environment
            #
            FROM alpine:3.6

            FROM alpine:3.6 AS builder

            <SNIP>

            #Create final container
            #
            FROM alpine:3.6

            <SNIP>

            #Copy artifacts from builder
            COPY --from=builder /usr/local .

            philster_jenkins Phil Clay made changes -
            Link: This issue relates to JENKINS-44789
            jglick Jesse Glick added a comment -

            Just use sh 'docker build .' rather than Image.build DSL.
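
            For example, something like this instead of the DSL (note it skips the plugin's automatic fingerprinting; the image name is a placeholder):

            node {
                // Plain docker CLI build; the named stages in the Dockerfile are never parsed by the plugin.
                sh 'docker build -t myimage:latest .'
            }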

            esmalling Eric Smalling added a comment -

            Docker multi-stage builds throw away intermediate containers as they go, so the concern above cannot be fixed in Jenkins code. You can see this in the build output with lines like: `Removing intermediate container 66e6311b3971`

            (unless you were meaning to say image, instead of container)

            A quick fix to get the final "FROM" would be as simple as changing the "break" to a "continue" in the loop at 

            https://github.com/jenkinsci/docker-workflow-plugin/blob/111b78e2c110a77826a2d6c7607fd24db4c8e440/src/main/java/org/jenkinsci/plugins/docker/workflow/FromFingerprintStep.java#L100

            If, however, you want to get at the intermediate images, that's going to take a lot more effort.
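
            Roughly, the idea is to keep scanning and remember the last FROM instead of stopping at the first one; a Groovy sketch of the concept (the plugin itself is Java, so this is illustration only):

            // Sketch: return the argument of the last FROM line in a Dockerfile.
            String lastFrom(List<String> dockerfileLines) {
                String result = null
                for (line in dockerfileLines) {
                    if (line.startsWith('FROM ')) {
                        result = line.substring('FROM '.length()).trim()
                        // no break here: keep going so the final stage wins
                    }
                }
                return result
            }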

            dduportal Damien Duportal added a comment -

            Eric Smalling be careful: you only need the intermediate image, not the intermediate container (which is indeed deleted, as you said).

            Those intermediate images are used for caching the docker builds, and they contain the filesystem (so all the files).

            The containers are just instantiations of this immutable image. They are deleted by default to avoid duplicating things (not strictly true in terms of layers, but let's view it that way).

             

            You can access these intermediate images by adding the "-a" flag to "docker image ls" or to "docker images" (if you have an older docker version):

             

            docker image ls -a
            
            esmalling Eric Smalling added a comment -

            Damien Duportal - I think we are saying the same thing.  (I was quoting your original comment where you mentioned the intermediate containers.)

            What I am saying is that a simple change to loop until the last FROM statement would fix the parsing error and would make docker.build work like it does for non-multi-stage builds.

             

            The problem of obtaining the image IDs for the intermediate images is a bigger one to solve, and should probably be a separate feature enhancement, as opposed to the bug here of grabbing the first FROM and including the " AS ..." part of the line.

             

            esmalling Eric Smalling made changes -
            Assignee: Eric Smalling
            esmalling Eric Smalling added a comment -

            Opened tentative PR for this: https://github.com/jenkinsci/docker-workflow-plugin/pull/111

            As stated there, I am open to enhancements to the JUnit test.

            dduportal Damien Duportal added a comment -

            Eric Smalling Great!

            Sorry I misunderstood your comment.

             

            Thanks for this contribution, it's nice!

            jglick Jesse Glick made changes -
            Status: Open → In Progress
            jglick Jesse Glick made changes -
            Status: In Progress → In Review
            esmalling Eric Smalling added a comment -

            Fixed and released in v1.13

            esmalling Eric Smalling made changes -
            Status: In Review → Resolved
            Resolution: Fixed
            esmalling Eric Smalling made changes -
            Status: Resolved → Closed
            esmalling Eric Smalling added a comment -

            Marcus van Dam Please test with v1.13 of the plugin and re-open with comments if it is still an issue for you.

            kgorlick Kevin Gorlick added a comment -

            Eric Smalling, I have the same issue. I just updated to 1.13 of the Docker Pipeline plugin and am still getting the same error.

            esmalling Eric Smalling added a comment -

            Reviewing release build - will update shortly

            esmalling Eric Smalling made changes -
            Resolution: Fixed (cleared)
            Status: Closed → Reopened
            kgorlick Kevin Gorlick added a comment -

            In case it helps:

            Docker version 17.06.2-ce, build cec0b72

            Jenkins 2.76

             

            kgorlick Kevin Gorlick added a comment -

            More details. Here is a stub of my Dockerfile. If I understand the attempted fix in 1.13, we are now looking at the last FROM. Since I am aliasing the "Release" stage, that may be why this is still blowing up.

            # Base image
            FROM 12345/node-base:latest AS base
            WORKDIR /app
            
            # Dependencies
            FROM base AS dependencies
            *STUFF*
            
            # Test
            FROM dependencies AS test
            *STUFF*
            
            # Build
            FROM dependencies AS build
            *STUFF*
            
            # Release
            FROM base AS release
            COPY --from=dependencies /app/prod_node_modules ./node_modules
            COPY --from=build /app/dist ./dist
            *STUFF*
            esmalling Eric Smalling added a comment -

            Yes - I'm sure that's the issue. 

            I'm curious what is the purpose of using that label on the final stage?

            kgorlick Kevin Gorlick added a comment -

            Elegance

             

            Let me try removing it and seeing if that fixes the issue.

            kgorlick Kevin Gorlick added a comment -

            Eric Smalling, that did not fix the issue:

             
            Step 17/23 : FROM base
            ---> 94a0dbc48319
            Step 18/23 : COPY --from=dependencies /app/prod_node_modules ./node_modules
            ---> Using cache
            ---> ae78693390f4
            Step 19/23 : COPY --from=build /app/dist ./dist
            ---> Using cache
            ---> cc6134f1e1a6
            STUFF

            Step 23/23 : WORKDIR /app/dist
            ---> Using cache
            ---> dbbe53379363
            Successfully built dbbe53379363
            Successfully tagged a3-configuration:latest
            [Pipeline] dockerFingerprintFrom
            [Pipeline] }
            [Pipeline] // dir
            [Pipeline] }
            [Pipeline] // stage
            [Pipeline] }
            [Pipeline] // node
            [Pipeline] End of Pipeline
            java.io.IOException: Cannot retrieve .Id from 'docker inspectbase'

            kgorlick Kevin Gorlick added a comment -

            Eric Smalling, as a temporary workaround, I bet I can change the last FROM (instead of "FROM base") to:
            FROM 12345/node-base:latest AS base
            WORKDIR /app
             

            But at some point someone (maybe even me) will be doing something a bit more complex in the base image and not want to have to duplicate all the steps in the final build.

            kgorlick Kevin Gorlick added a comment -

            Eric Smalling, that fixed my problem. I still recommend we handle the case where the last FROM in a Dockerfile is built from an alias.

            esmalling Eric Smalling added a comment -

            Ah - I missed the fact that you were coming from a prior stage in your last FROM... not something I've heard of people doing, since they usually want to come from alpine or something. I can see why you're doing it though - I'll try to get a fix in this week and will post here when I have an hpi to test.

            bitgandtter Yasmany Cubela Medina added a comment -

            Same issue here with multi-stage builds. Any fix on the road? Or any way to disable traceability, or a workaround?

            kgorlick Kevin Gorlick added a comment -

            Yasmany Cubela Medina, my workaround was to change the last FROM to not use a named prior stage. Instead I re-used the original FROM.

             

            I changed this:

            # Base image
            FROM 12345/node-base:latest AS base
            WORKDIR /app
            
            # Dependencies
            FROM base AS dependencies
            *STUFF*
            
            # Test
            FROM dependencies AS test
            *STUFF*
            
            # Build
            FROM dependencies AS build
            *STUFF*
            
            # Release
            FROM base AS release
            COPY --from=dependencies /app/prod_node_modules ./node_modules
            COPY --from=build /app/dist ./dist
            *STUFF*

             

            TO (notice the last FROM is different):

             

            # Base image
            FROM 12345/node-base:latest AS base
            WORKDIR /app
            
            # Dependencies
            FROM base AS dependencies
            *STUFF*
            
            # Test
            FROM dependencies AS test
            *STUFF*
            
            # Build
            FROM dependencies AS build
            *STUFF*
            
            # Release
            FROM 12345/node-base:latest AS base
            WORKDIR /app
            COPY --from=dependencies /app/prod_node_modules ./node_modules
            COPY --from=build /app/dist ./dist
            *STUFF*
            esmalling Eric Smalling added a comment -

            Sorry, I've not had time to look at this further yet. Until I (or someone else) does, I recommend doing as Jesse Glick says and just running docker build via an "sh" step.

            davidiam David Ihnen added a comment - - edited

            I have been given the go-ahead to attempt to fix this using the --iidfile option, with a few hours of my paid work time, to fix our use of this feature. Jesse Glick, in an informal email exchange, agreed that this sounded like a reasonable mode of repair, for what that's worth.

            ....

            So, having determined that the problem was not with acquiring the specific ID of the just-created image but with the SOURCE image IDs, I decided that the proper way to fix it for multi-stage builds was to make the primitive, naive parser (which causes the 'space in my FROM line' exception bug, thanks to an unnamed constant (5)) smarter, so that it 1. knows about build args so it can substitute them in, and 2. understands whitespace the same way Docker does.

             

            Unfortunately, I ain't got time for that. What I DID notice in analyzing the code is that this exceedingly naive parser will ignore any FROM line that isn't pegged to the first column - so I can put 'FROM scratch\n FROM ${whatever}/thing:${whateverelse}' and it won't spot the line with the space in the first column. Fortunately, Docker 17.09 doesn't care about that space, allowing me to fool the plugin into thinking I sourced from scratch when I didn't.

            It's a workaround, but it works. I have no further action here at this time, but I hope to get to make this work properly sometime.

             

            WORKAROUND - MAKE IT LOOK MORE LIKE THIS:

            FROM scratch
             FROM buildimage:${POSSIBLY_ARGS}
            RUN build stuff
             FROM runimage:${whatever}
            COPY --from=1 /built/code /deploy/location
            CMD startup
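
            Separately, the --iidfile idea mentioned above could look roughly like this from a pipeline (the file name and image tag are arbitrary):

            node {
                // docker build --iidfile writes the ID of the final image to a file,
                // which sidesteps parsing the Dockerfile or the build log entirely.
                sh 'docker build --iidfile .imageid -t myimage:latest .'
                def imageId = readFile('.imageid').trim()
                echo "Built image ID: ${imageId}"
            }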
            
            andreaslutro Andreas Lutro added a comment -

            I'm not an experienced java developer but this has been annoying me for way too long. I've opened a PR here with more fixes for this issue, but could use some guidance. https://github.com/jenkinsci/docker-workflow-plugin/pull/149

            alex_dubrouski Alex Dubrouski added a comment - - edited

            Andreas Lutro et al.

            I feel that is rather a workaround. We faced this issue a couple of weeks ago and I tried to poke around. So far the only solution I see is to change:

            https://github.com/jenkinsci/pipeline-model-definition-plugin/blob/6cd7cb80203bd34cc0909e0e0a673c0d5d6a178b/pipeline-model-definition/src/main/resources/org/jenkinsci/plugins/pipeline/modeldefinition/agent/impl/DockerPipelineFromDockerfileScript.groovy

            workflow and add an additional DSL parameter like `iidfile="path/to/file"`, so that on the other side the Docker Workflow plugin can check whether it is set and use the ID from that file instead of relying on the naive parser.

            I could create PRs for both plugins, but would prefer to discuss this solution first.

            Theoretically `--iidfile` could be added as part of `buildArgs`, but then the Workflow plugin would have to parse the build args; with a separate DSL parameter it is just a simple isEmpty check plus logic like this to get the ID:

            // Read the image ID that `docker build --iidfile` wrote to the given file.
            FilePath dockeridfile = workspace.child(step.iidfile);
            String id;
            try (InputStream isid = dockeridfile.read()) {
                try (BufferedReader r = new BufferedReader(new InputStreamReader(isid, "ISO-8859-1"))) {
                    id = r.readLine(); // the file contains a single line, e.g. "sha256:..."
                }
            }
            

            Just to be clear, I mean a pipeline scenario like this:

                    stage('Build') {
                        agent {
                            dockerfile {
                                filename 'Dockerfile'
                                dir 'deployment'
                                additionalBuildArgs '--target base'
                            }
                        }
                        steps {
                            sh "echo TEST123"
                        }
                    }
            

            where the Dockerfile is multi-stage and contains aliases like "FROM base AS prod".

            andreaslutro Andreas Lutro added a comment -

            If you want to submit a "better" PR then don't let me stop you, but I'd rather have a working plugin with workarounds than a non-working one until someone (whoever that is) submits a "proper" solution.

            That being said, is this plugin even being maintained? Am I wasting my time commenting here and making a PR?

            jglick Jesse Glick added a comment - - edited

            is this plugin even being maintained?

            Not that I know of. IMO you should not use the docker DSL, nor the withDockerContainer step (including Declarative Pipeline’s agent {docker …} and agent {dockerfile …}), and at most use the withDockerRegistry and withDockerServer steps.

            antoinemvh Antoine Monnet added a comment -

            I've modified FromFingerprintStep, removed the Dockerfile parser, and used docker inspect to walk up the image history to the previous properly tagged image.

            This solves the multistage issue for me.

            https://github.com/jenkinsci/docker-workflow-plugin/pull/155
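
            Conceptually (an illustration of the approach, not the PR's code; the image tag is a placeholder), that walk looks something like:

            node {
                // Starting from the freshly built image, follow .Parent upwards until we
                // reach an image that still carries a repo tag (the base of the final stage).
                sh '''
                    id=$(docker inspect -f '{{.Parent}}' myimage:latest)
                    while [ -n "$id" ]; do
                        tags=$(docker inspect -f '{{.RepoTags}}' "$id")
                        if [ "$tags" != "[]" ]; then
                            echo "Tagged ancestor: $tags"
                            break
                        fi
                        id=$(docker inspect -f '{{.Parent}}' "$id")
                    done
                '''
            }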

            shiro Matic Gacar added a comment -

            We are also having trouble because of this, since we have to rewrite all of our Dockerfiles to be compatible with Jenkins.

            Any news on this is appreciated. 

            skullclown Steven Weathers added a comment -

            I just set up Jenkins and ran into this issue as well. I would love to see it resolved so I can use Jenkins; otherwise this is a blocker for me.

            jglick Jesse Glick added a comment -

            Steven Weathers simply run sh 'docker build…' and do not use this feature.

            skullclown Steven Weathers added a comment -

            Jesse Glick that's one thing I tried, but it makes things like deploying to a registry more of a hassle. Overall I've since abandoned trying to use Jenkins and have gone with another CI that worked flawlessly from the start, since it was docker-oriented.

            jglick Jesse Glick added a comment -

            makes things like deploying to registry more of a hassle

            For what it’s worth, my recommendation is

            withDockerRegistry(url: 'https://docker.corp/', credentialsId: 'docker-creds') {
              sh 'sh build-and-push'
            }
            

            with the script being something like

            docker build -t docker.corp/x/y:$TAG .
            docker push docker.corp/x/y:$TAG
            
            taylorp36 Taylor Patton added a comment - - edited

            Also seeing this same issue, but interestingly enough, it works in one job but not another, doing almost exactly the same thing.

            Working code in question:

            docker.build('mydockerimage', "--file ${DOCKERFILE} --pull --build-arg BUILD_NUMBER=${BUILD_NUMBER} .")

            Code that doesn't work:

            docker.build('mydockerimage', "--file ${myProperties.DOCKERFILE} --pull --build-arg BUILD_NUMBER=${params.BUILD_TO_DEPLOY} .")

            Where "myProperties" is read from a properties file using "readProperties" from stage utils plugin. The docker image seems to be built fine in both cases, but in the latter, we see the error:

            Successfully built 204ce2321dab
            Successfully tagged <redacted>
            [Pipeline] dockerFingerprintFrom
            [Pipeline] }
            [Pipeline] // withCredentials
            [Pipeline] }
            [Pipeline] // withDockerRegistry
            [Pipeline] }
            [Pipeline] // withEnv
            [Pipeline] }
            [Pipeline] // stage
            [Pipeline] }
            [Pipeline] // node
            [Pipeline] End of Pipeline
            java.io.IOException: Cannot retrieve .Id from 'docker inspect<redacted>'

            We do use the multi-stage docker build with "AS name" for the docker stages in the starting Dockerfile being built.

            dbensoussan david bensoussan added a comment -

            There is a PR solving this here: https://github.com/jenkinsci/docker-workflow-plugin/pull/162

            Could a maintainer take a look at it?


              People

              • Assignee:
                esmalling Eric Smalling
                Reporter:
                m4rcu5 Marcus van Dam
              • Votes:
                40
                Watchers:
                51
