Jenkins / JENKINS-43758

Parameters disappear from pipeline job after running the job

    Details

    • Type: Bug
    • Status: Reopened (View Workflow)
    • Priority: Major
    • Resolution: Unresolved
    • Component/s: job-dsl-plugin
    • Labels:
      None
    • Environment:
      Jenkins ver. 2.53,
      Build Pipeline Plugin 1.5.6
      Pipeline
      Description

      Steps to reproduce

      1. I created a Pipeline job.
      2. During creation I checked the "This project is parameterized" checkbox and added two Choice parameters.
      3. I ran the job and it failed.
      4. I checked the configuration of the job: the parameters are no longer there, and the "This project is parameterized" checkbox is no longer checked.

        Attachments

          Issue Links

            Activity

            dimakievua Dmytro Zhernosiekov added a comment -

            Hi,

            I have the same issue. When the job finishes successfully there is no such problem; after a failure, all defined parameters are gone.

            I tried adding import hudson.model.* at the beginning of the job - that didn't help. It only helps if I define the parameters in the pipeline code, but I cannot use the Active Choice Reactive Parameter there. Here is the pipeline code:

            import hudson.model.*
            import hudson.EnvVars
            
            pipeline {
                tools {
                    nodejs 'Node 8.1.1'
                }
                environment {
                    BUILD_DIR = "/var/www/new/${BUILD_TAG}"
                }
                agent {
                    label NODE
                }    
                stages {
                    stage('Create dir') {
                        steps {
                            sh 'mkdir -p ${BUILD_DIR}'
                        }
                    }
                    stage('Fetch code') {
                        steps{
                            retry(3) {
                                git credentialsId: 'jenkins', url: 'git@......', branch: '${BRANCH}'
                            }
                        }
                    }
                    stage('npm run build') {
                        steps {
                            sh '''
                            npm install
                            npm run build
                            '''
                        }
                    }
                }
                post {
                    always {
                        echo "Deployment of ${JOB_NAME} service finished with next result:"
                    }
                    success {
                        echo ' SUCCESSFULLY!'
                    }
                    failure {
                        echo ' with FAILURE!'
                    }
                    unstable {
                        echo 'UNSTABLE'
                    }
                    changed {
                        echo 'Previous build had different state. If you see it, please carefully check status.'
                    }
                }    
            }
            
            dimakievua Dmytro Zhernosiekov added a comment -

            Hi,

            Just found that all parameters defined in the Jenkins job (not in the pipeline) disappear when I use the following section in the pipeline:

                options {
                    buildDiscarder(logRotator(numToKeepStr: '5', artifactNumToKeepStr: '15'))
                    timeout(time: 20, unit: 'MINUTES')
                    timestamps()
                }
            

            Looks like some conflict in the logic. I don't mind having all options and parameters described in the pipeline code, but not all plugins are supported there.

            abayer Andrew Bayer added a comment -

            As of current versions of Declarative (1.1.6 or later), job properties (such as parameters) defined in the job config UI will not be nuked by use of options or triggers (and as of workflow-multibranch 2.16, the same thing is the case for the properties step). The first time you run a build with options/triggers/parameters in a Declarative Pipeline or properties in a Scripted Pipeline after upgrading, the job properties configured in the UI will still get wiped out, but every run after that (and any run of a new job or one that didn't already have job properties configured) will keep them.
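            For reference, a minimal sketch of what that looks like when the parameters are defined in the Jenkinsfile itself rather than in the job config UI (the parameter name, choices, and stage here are illustrative, and the list form of choices assumes a reasonably recent Declarative version):

            ```groovy
            pipeline {
                agent any
                parameters {
                    // Defined in the Jenkinsfile, so Declarative re-applies them every run
                    choice(name: 'ENV', choices: ['dev', 'prod'], description: 'Target environment')
                }
                stages {
                    stage('Deploy') {
                        steps {
                            echo "Deploying to ${params.ENV}"
                        }
                    }
                }
            }
            ```

            Parameters declared this way survive failed builds, at the cost of not being able to use parameter types (such as Active Choices) that have no Pipeline syntax.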

            akom Alexander Komarov added a comment - - edited

            How is this resolved?  

            1. I generate a pipeline job with Job DSL plugin, with parameters.
            2. In the job's pipeline script, I configure the build discarder using properties{} closure.
            3. Job runs.
            4. Parameters are gone.
            5. I regenerate the job (parameters are back).
            6. Job runs.
            7. Parameters are gone.

            Is the solution to only configure parameters in the script?  Should the build properties not be shown in the UI then?  Behavior seems to be misleading.

            What makes matters worse is that this doesn't always happen (or doesn't happen to all of my jobs), and I don't know why.

             

            akom Alexander Komarov added a comment -

            I'm reopening because this clearly still happens in my installation. I have core and all plugins from May 2018, well after the previous comment. I'm not using Declarative.

            As I mentioned, I don't know exactly under which conditions this occurs  - it's pretty random.

            vzshar Varun Shar added a comment -

            Thanks Alexander Komarov for reopening this.

            In my case, when my pipeline job is triggered from an upstream job the parameters remain intact even if the job fails, but when someone triggers the job manually it loses the parameters.

            displayName "Feature Tests"
            parameters {
                stringParam('tags', "@regression", 'Test tag to run')
                stringParam('timeout', "10", 'Test timeout in minutes')
                stringParam('environment', "ci", 'environment to run against')
            }
            akom Alexander Komarov added a comment - - edited

            I was managing to work around this issue by setting all the properties I need at once in the properties{} closure (I generate jobs with Job DSL, and would rather have them set there). This was working fine until I hit a total blocker: I cannot set the "Trigger builds remotely" token.

            In other words, I generate a job that has the token set, but after running the job the token disappears, and there is no pipeline DSL for setting it.

            Sorry, correction: even the Job DSL plugin no longer handles "Trigger builds remotely" in pipelineJobs (it seems to apply only to freestyleJob now), so it's not configurable either at generation time or at runtime by any means. I had to downgrade to Job DSL 1.69 because of this; see JENKINS-52743

            Jenkins: 2.141, Pipeline-API: 2.29

            BTW: I am not using Declarative.

            nfollett Nick Follett added a comment - - edited

            I am having this problem with declarative pipelines.

            pipeline {
              agent any
              options {
                ansiColor('xterm')
                // Prevent multiple pipelines from running concurrently and failing due to tfstate lock file
                disableConcurrentBuilds()
              }
              triggers {
                gitlab(
                  branchFilterType: 'All',
                  triggerOnPush: false,
                  triggerOnMergeRequest: true,
                  triggerOpenMergeRequestOnPush: "never",
                  triggerOnNoteRequest: true,
                  noteRegex: "jenkins rebuild",
                  skipWorkInProgressMergeRequest: true
                )
              }
              ...
            }

            All of my pipelines are auto-imported from a separate job using the Job DSL plugin. I need to manually run each job once in order for the config.xml to be populated with the settings from the Jenkinsfile.  When I check the configuration in the UI after this initial run I see that the pipeline is configured with the settings shown above.  After I trigger the pipeline with a test merge request in GitLab the pipeline will succeed, but the trigger settings will disappear in the UI.  If I trigger the job again manually the trigger settings for GitLab webhooks will re-appear.

            Jenkins 2.107.3

            Alexander Komarov I am having the same problem where I have 2 jobs which have their own version of the same pipeline, are both treated exactly the same, and one has this problem and the other does not.  It's a very frustrating issue because it's inconsistent.

            khalilj Khalil Jiries added a comment -

            I'm having the same issue; here is my use case:

            I have a seed job that loads all Jenkins Declarative Pipeline jobs into Jenkins.

            These pipelines have job parameters defined in their Jenkinsfile.

            Each time the seed job runs, it overrides the job parameters and removes them all.

            The next triggered build restores the parameters to the job.

             

            emailbob Bob Lee added a comment -

            I had the same issue, in which my parameters and GitHub Pull Request Builder settings would disappear after a job built. I had to run my seed job to recreate the project using Job DSL to get the settings back.

            What worked for me was to delete the project and then run the seed job. After that my settings stayed. There was probably an invalid config left around that caused the bug, and deleting the project got rid of it.

            dzizes972 Dzizes dzizes added a comment -

            Hello! Any update here or possible workaround?

            akom Alexander Komarov added a comment - - edited

            Dzizes dzizes I am using a workaround, but you may not like it.

            (The following applies to traditional pipelines.  For declarative, you may need to adjust a few things)

            The workaround:

            1. You need to bottleneck all of your property setting into a single call in the pipeline - you need to set all of them every time, not piecemeal. You can do it any way you like; I am including one approach below.
            2. In this call to the properties closure, you need to duplicate all the settings you set when you initially created the job (any that you omit will be lost).
            3. Make sure that the rest of the code does not set individual properties again (or if it does, it must set all of them in one call again).

            My example:

            1. All my jobs are generated via the Job DSL plugin, and the initial values for job properties (including parameters) are set there.  (The result is the same as creating the job by hand)
            2. In addition to the normal pipeline code, I insert a block that sets properties{} at the top, this block duplicates all initially configured options.  

            Since I'm using the Job DSL plugin, I have it prepend the pipeline code with an extra chunk that takes care of all that.

            Here is a utility method I use in pipeline code.  It covers all the properties that I ever set, and parameters (I only use string parameters) are supplied as an array in this format: ['NAME:DEFAULTVALUE:DESCRIPTION', etc] 

             

            /**
             * This exists primarily because of a bug in Jenkins pipeline that causes
             * any call to the "properties" closure to overwrite all job property settings,
             * not just the ones being set.  Therefore, we set all properties that
             * the generator may have set when it generated this job (or a human).
             *
             * @param settingsOverrides a map, see defaults below.
             * @return
             */
            def setJobProperties(Map settingsOverrides = [:]) {
                def settings = [discarder_builds_to_keep:'10', discarder_days_to_keep: '', cron: null, paramsList: [], upstreamTriggers: null, disableConcurrentBuilds: false] + settingsOverrides
            
            //    echo "Setting job properties.  discarder is '${settings.discarder_builds_to_keep}' and cron is '${settings.cron}' (${settings.cron?.getClass()})"
                def jobProperties = [
                        //these have to be strings:
                        buildDiscarder(logRotator(artifactDaysToKeepStr: '', artifactNumToKeepStr: '', daysToKeepStr: "${settings.discarder_days_to_keep}", numToKeepStr: "${settings.discarder_builds_to_keep}"))
                ]
            
                if (settings.cron) {
                    jobProperties << pipelineTriggers([cron(settings.cron)])
                }
            
                if (settings.upstreamTriggers) {
                    jobProperties << pipelineTriggers([upstream(settings.upstreamTriggers)])
                }
            
                if (settings.disableConcurrentBuilds) {
                    jobProperties << disableConcurrentBuilds()
                }
            
                if (settings.paramsList?.size() > 0) {
                    def generatedParams = []
                    settings.paramsList.each { //params are specified as name:default:description
                        def parts = it.split(':', 3).toList() //I need to honor all delimiters but I want a list
                        generatedParams << string(name: "${parts[0]}", defaultValue: "${parts[1] ?: ''}", description: "${parts[2] ?: ''}", trim: true)
                    }
                    jobProperties << parameters(generatedParams)
                }
            
                echo "Setting job properties: ${jobProperties}"
            
                properties(jobProperties)
            }
            

            So my job's pipeline definition looks like this:

            setJobProperties(
               //each of these is optional, you may simply need the paramsList and that's it.
               discarder_builds_to_keep: "30", 
               //cron: "", 
               paramsList: ['SAMPLE_PARAM:apple:Some description'], 
               //upstreamTriggers: 'some-job',
               //disableConcurrentBuilds: true
            )
            
            //now regular pipeline code...

             

            If this doesn't fit your situation, there are plenty of other ways, just make sure to follow the rules at the top.

            bspeagle Brian Speagle added a comment - - edited

            I have the same issue. I have a pipeline job that is pulled from a GitHub repo, and the script includes the following:

            parameters {
                string(name: 'S3_BUCKET', description: 'Which bucket should we store files in?')
            }

             
            After I save the pipeline initially, I am able to go back into the config and set the default value for my param. I then run the job once and everything runs great, but when I access my pipeline again the default value I set in the UI for the param is gone. This happens every time I set the default value in the UI and then run the job.
             
            Thanks in advance.

             

            *FIXED! - I had some other issues with pipelines not registering webhooks for 'GitHub hook trigger for GITScm polling'. I have one pipeline that does save parameters and works with webhook registration, so I compared the pipelines and noticed that I had written them differently. I then rewrote one of my pipelines to match the structure of the one that works, and now I am able to save parameters after a build!

            dzizes972 Dzizes dzizes added a comment - - edited

            Andrew Bayer Any update on this?

            mrysanek Michal Rysanek added a comment -

            Alexander Komarov Thank you for your comment - it cleared up why I was having the same issue: we use all-.groovy pipelines, and our parameters were being wiped out by a subsequent "parameters([disableConcurrentBuilds()])".

            I have not seen an official roadmap for bugfixes (is there one?). Either way I have upvoted this issue; it has caused (and I suspect will again cause) confusion and problems for me.

            mrysanek Michal Rysanek added a comment - - edited

            Last comment/question - is there any way to read the properties LinkedHashMap? I would like to expand on Alexander Komarov's workaround: read and store the live properties, add or modify one, then write them back into the actual live object. That way I wouldn't have to force others to use my processes in their pipelines (I develop libraries for other developers to use in their builds, and need to set SOME properties, but don't want to stomp on theirs). Alternatively, providing a "properties <<" or a "properties.append([somepropertylist])" might be less destructive to people already relying on the current behavior to intentionally wipe previous properties.

            aarondmarasco_vsi Aaron D. Marasco added a comment -

            Just stumbled on this bug as well, wondering why I lose my JobDSL parameters after a run. Alexander Komarov has a great workaround above, but I really need what Michal Rysanek is asking for - a way to get the current properties so I can add to them.

            akom Alexander Komarov added a comment - - edited

            Aaron D. Marasco, to my knowledge there is no way to get the current job properties in a format suitable for properties{}.

            The only way I can see of doing this would be to access the individual getters on currentBuild.rawBuild.parent (an instance of WorkflowJob) and then transform their current values into arguments to properties{}. This would certainly be brittle, and if you use Script Security, it will require approval.

             

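            A rough sketch of the getter-based approach Alexander describes, for anyone willing to accept the brittleness. This is an assumption-laden illustration, not a supported API: it uses hudson.model.Job#getProperty(Class), removeProperty(Class) and addProperty(JobProperty) from Jenkins core, the parameter name NEW_PARAM is made up, and every call below will need Script Security approval (or must run outside the sandbox):

            ```groovy
            // Hedged sketch: read the live parameter definitions off the WorkflowJob,
            // append one, and write the merged list back. Requires script approval.
            import hudson.model.ParametersDefinitionProperty
            import hudson.model.StringParameterDefinition

            def job = currentBuild.rawBuild.parent  // the WorkflowJob owning this run

            // Collect the currently defined parameters, if any.
            def paramsProp = job.getProperty(ParametersDefinitionProperty)
            def existingDefs = paramsProp ? paramsProp.parameterDefinitions : []

            // Append a new (hypothetical) parameter without discarding the existing ones.
            def merged = existingDefs +
                new StringParameterDefinition('NEW_PARAM', 'default', 'added by shared library')
            job.removeProperty(ParametersDefinitionProperty)
            job.addProperty(new ParametersDefinitionProperty(merged))
            ```

            Because this bypasses the properties step entirely, it sidesteps the "last writer wins" wipe, but it also means the pipeline and any Job DSL seed job can still overwrite the result on their next run.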
            Hide
            ss_vinoth22 vinoth SS added a comment -

            Andrew Bayer Any update on this issue? Was there a plugin upgrade that fixes it, or do we need to go with Alexander's workaround? I tried the latest Job DSL plugin (1.74) as well and it still has the issue. Please update the roadmap/fix status.

            Hide
            abayer Andrew Bayer added a comment -

            So if you're not using Job DSL, please open a separate JIRA. If you're using the properties step in Scripted Pipeline or the parameters directive in Declarative, those do try to preserve job properties and build parameters defined outside of the Pipeline, but Job DSL is still going to wipe out whatever is in the properties and parameters when it runs its seed job. Also, don't ever call the properties step more than once in a pipeline - you're going to run into a bunch of potential pitfalls there.

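            For the Declarative case Andrew mentions, defining the parameters once in the Jenkinsfile means they are re-registered on every run, so even if a seed job wipes them, the next build restores them. A minimal sketch (parameter names and stage are illustrative, not from this issue):

            ```groovy
            // Minimal Declarative sketch: the parameters directive re-applies these
            // definitions on each run. Caveat: the first build after a wipe still
            // runs before the parameters are re-registered, using defaults.
            pipeline {
                agent any
                parameters {
                    choice(name: 'ENVIRONMENT', choices: ['dev', 'staging', 'prod'],
                           description: 'Target environment')
                    string(name: 'VERSION', defaultValue: '1.0.0',
                           description: 'Version to deploy')
                }
                stages {
                    stage('Deploy') {
                        steps {
                            echo "Deploying ${params.VERSION} to ${params.ENVIRONMENT}"
                        }
                    }
                }
            }
            ```

            Note this does not help with plugins like Active Choices whose parameter types are not exposed in the Declarative parameters directive, which is the limitation reported earlier in this thread.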
            Hide
            famod Falko Modler added a comment -

            Another super-annoyed user here (sorry to say it, but that's just the truth).
            We set up most of our jobs via JCasC (which wraps Job DSL), and every single time we execute our JCasC YAML files, all properties defined by the respective pipeline scripts are lost: parameters, triggers, sidebar links, etc.
            Losing parameters on jobs that are triggered not by human project members but by other systems/scripts (e.g. Pull Request Notifier for Bitbucket Server) is especially painful.
            Jobs frequently triggered by humans will sooner or later get their parameters back, because someone will eventually click "Build Now", but jobs triggered from outside will simply never run (rejected because of "unknown" parameters?).
            Every single time we execute our JCasC scripts we have to go through a list of jobs and "fix" them by clicking "Build Now". Yes, we could write a script for that, but some jobs don't have parameters; instead they need their SCM polling re-initialized. Some of those jobs run for many hours, so we need to abort them right away. Writing a script for all these cases feels like investing too much time on the wrong end of the problem.

            I am willing to contribute a fix, but where do I start? What is the right approach? Should we start with an opt-in to preserve (instead of wipe) parameters, triggers, etc.?

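            One mitigation for the JCasC/Job DSL case is to declare the parameters in the seed script itself, so that re-running JCasC re-creates them rather than wiping them. A hedged sketch using the Job DSL pipelineJob API (job name, parameters, and Jenkinsfile path are illustrative):

            ```groovy
            // Job DSL seed sketch: parameters live in the seed definition, so every
            // seed run restores them instead of deleting them. This does not cover
            // properties that only the pipeline script sets (triggers, sidebar links).
            pipelineJob('example-job') {
                parameters {
                    choiceParam('ENVIRONMENT', ['dev', 'staging', 'prod'], 'Target environment')
                    stringParam('VERSION', '1.0.0', 'Version to deploy')
                }
                definition {
                    cps {
                        script(readFileFromWorkspace('Jenkinsfile'))
                        sandbox()
                    }
                }
            }
            ```

            The trade-off is duplicating the parameter definitions between the seed script and any in-pipeline properties/parameters blocks, which per Andrew Bayer's comment above will still overwrite each other on their respective runs.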

              People

              • Assignee:
                Unassigned
              • Reporter:
                dzieciou Maciej Gawinecki
              • Votes:
                18
              • Watchers:
                23