JENKINS-32690

Cannot manually provision new slave if one already exists

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Component/s: ec2-plugin
    • Labels:
      None
    • Environment:
      Jenkins: 1.634
      ec2-plugin: 1.31

      Description

      In version 1.29, I could manually provision a new slave if I was below the instance cap by clicking the button on the Manage Nodes page. In version 1.31, clicking the button takes me to the status page of an existing ec2 slave node.

            Activity

            rpilachowski Robert Pilachowski created issue -
            francisu Francis Upton added a comment -

            Yes, this is a side effect of the change to use previously provisioned stopped instances that are already there, so I would expect this behavior and in fact call it a feature. Can you explain why it's a problem?

            rpilachowski Robert Pilachowski added a comment -

            There are a couple of reasons why I would like to be able to create a new slave when I already have a slave, either active or stopped:

            1. I have a single active slave and none in the stopped state, but I know that soon there will be a need for several more, and creating a slave from scratch takes a significant amount of time. I want to start a few new slaves so I am ready for the work.
            2. I am making changes to an existing slave definition, and I want to test them out to see if they work, but still leave the existing slaves in case there is a problem with my changes. I could copy the existing definition, but that is not easy at the moment.
            ohadbasan Ohad Basan added a comment - edited

            I am also hitting this issue, and I have a different use case for it.
            I have a workflow job that does stress tests with several workers:
            when the job runs it starts 10 slaves concurrently and then runs another job once on each slave.
            I can't use this architecture since the latest change.
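            A minimal sketch of the kind of workflow job described above, assuming the workflow (Pipeline) plugin's parallel and node steps; the label 'ec2-stress' and the shell command are hypothetical, not from this issue:

            {code:groovy}
            // Hypothetical workflow script: fan stress tests out to 10 EC2 slaves at once.
            def branches = [:]
            for (int i = 0; i < 10; i++) {
                def idx = i  // capture the loop variable for the closure
                branches["worker-${idx}"] = {
                    node('ec2-stress') {          // each branch demands its own slave
                        sh './run-stress-test.sh' // the per-slave work
                    }
                }
            }
            parallel branches  // with this regression, the slaves come online one at a time
            {code}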

            ltagliamonte Luigi Tagliamonte added a comment -

            I'm also hitting the same problem.
            I need to have a fleet of slaves already configured, since it takes more than 10 minutes on first boot to initialise all the dependencies that I need.
            Can we add a + button that overrides this new behaviour?

            sureshjayapal Suresh Jayapal added a comment -

            We are also encountering the same issue and it slows down our jobs.

            We need the ability to launch at least 10 (or up to the instance cap) slaves in parallel.

            mihelich Patrick Mihelich added a comment -

            Similar problem here. It takes ~45 minutes to complete our init script on a brand new instance. When deploying an updated AMI, I'd like to spin up 10-15 instances ahead of time. Letting them spin up one by one to catch up with demand over a whole day is painful.

            ferrante Matt Ferrante added a comment -

            This causes an issue for me as well. The button says "provision", but it doesn't provision. Either the button text should say "do whatever i want to do" or it should actually provision an instance.

            ferrante Matt Ferrante added a comment - edited

            When I'm trying to remove a slave from use during a slave AMI upgrade, or if the slave has been corrupted somehow, I remove the labels so the current jobs running on it can finish, and spin up a new one so that new jobs go to the new slave. That's not possible now. This is not a feature, it is a bug.

            ferrante Matt Ferrante made changes -
            Priority: Minor → Major
            buckmeisterq Peter Buckley added a comment -

            Definitely a bug - if I've logged into Jenkins, clicked Manage Jenkins, clicked Manage Nodes, chosen Provision via [region], and picked a slave AMI, and it doesn't provision that slave AMI but takes me to an existing one, that is not expected behavior and is not doing what I as the user am requesting.

            ferrante Matt Ferrante added a comment -

            I recommend pinning to 1.29 until this bug is resolved; it causes multiple problems.

            francisu Francis Upton made changes -
            Link: This issue is duplicated by JENKINS-33879
            trbaker Trevor Baker added a comment -

            Another behavior introduced with the commits addressing JENKINS-23787 is that you cannot provision a new node while one is pending. In the use case where you are intentionally provisioning multiple nodes in anticipation of demand, you shouldn't have to wait for each newly provisioned node to be fully online before you can create another.

            I've rolled back to commit ad614dbbe2866a9b5ba9674d88b07184cbafd2a3 which is just before the stopped node additions but also includes the instance cap changes that I need.

            francisu Francis Upton added a comment -

            BTW - I agree this is a problem now and will fix it. When you manually provision, it should make a new node no matter what (unless it exceeds the caps). I will make this change.

            francisu Francis Upton added a comment -

            If I changed the code to ignore offline nodes when manually provisioning nodes would this be acceptable? Similar to what was done in https://github.com/jenkinsci/ec2-plugin/pull/187, but only in the case of a manual provision.

            If a node was unknown to the current Jenkins (but was previously started by Jenkins) and is stopped, it would be used to satisfy a manual provision request (this is the current behavior, which would be kept).

            Would this work for everyone?

            ferrante Matt Ferrante added a comment -

            I don't think that would do it. I was unable to provision nodes regardless of the online state of the other nodes. Only when all my online nodes were at full capacity could I create a new one. I need to be able to create a new one always, unless I'm at the cap.

            rpilachowski Robert Pilachowski added a comment -

            I agree with Matt; I don't think that will be acceptable.

            I guess I don't understand what problem this change was trying to solve. If I click the button to provision a new slave, what is the purpose of taking me to an existing slave? I can see the slaves right on the page, and if I want to go to a particular slave, I easily can.

            mikedougherty Mike Dougherty added a comment -

            > I guess I don't understand what the problem this change was trying to solve.

            My understanding is that the plugin is meant to be less greedy on the EC2 instances - instead of potentially provisioning a new node for every job in the queue, it would return a pending-online node instead. Unfortunately it seems this affects every provision request, not just the 'automated' ones made by the system scheduler.

            In the long run I'd like to see an option to control the max number of slaves being provisioned simultaneously. Even with (what I understand to be) the original intent of this change, starting 1 node at a time is much too slow.

            trbaker Trevor Baker added a comment - edited

            > starting 1 node at a time is much too slow.

            I agree. We've been toying with the idea of creating a CLI script using create-node, or some groovy snippet to loop over, to create nodes so we don't have to click through the UI repeatedly.
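            A script-console loop along those lines might look like the following; the EC2Cloud/SlaveTemplate calls here are assumptions, since the plugin's provision signature changed between releases, so verify against the installed ec2-plugin source:

            {code:groovy}
            // Hedged sketch of a Groovy script-console loop to provision several slaves.
            import jenkins.model.Jenkins
            import hudson.model.TaskListener
            import hudson.plugins.ec2.EC2Cloud

            def cloud = Jenkins.instance.clouds.find { it instanceof EC2Cloud }
            def template = cloud.templates[0]  // pick the slave template you want
            5.times {
                def slave = template.provision(TaskListener.NULL) // version-dependent signature
                Jenkins.instance.addNode(slave)
            }
            {code}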

            trbaker Trevor Baker added a comment -

            > If I changed the code to ignore offline nodes when manually provisioning nodes would this be acceptable? Similar to what was done in https://github.com/jenkinsci/ec2-plugin/pull/187, but only in the case of a manual provision.
            > If a node was unknown to the current jenkins (but was previously started by Jenkins) and is stopped, it would be used to satisfy a manual provision request (this is the current behavior which would be kept).
            > Would this work for everyone?

            Yes, I think that would work fine. In this manual provisioning use case, check to see if there is a stopped node of the same config, and if so, start it. If there is none, provision a new one. This path should not check idle capacity and should always add node capacity to the pool.
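            A self-contained toy model of the decision described above, using hypothetical names throughout (this shows the control flow only, not the plugin's actual code):

            {code:groovy}
            // Manual-provision path: reuse a stopped node of the same config if one
            // exists, otherwise launch a new one; never block on idle capacity.
            class Inst { String config; String state }  // state: 'running' or 'stopped'

            def instanceCap = 5
            def instances = [new Inst(config: 'ami-a', state: 'stopped')]

            def manualProvision = { String config ->
                assert instances.count { it.config == config } < instanceCap : 'instance cap reached'
                def stopped = instances.find { it.config == config && it.state == 'stopped' }
                if (stopped) {
                    stopped.state = 'running'  // reuse: just start the stopped node
                    return stopped
                }
                def fresh = new Inst(config: config, state: 'running')  // nothing to reuse: add capacity
                instances << fresh
                return fresh
            }

            manualProvision('ami-a')  // starts the existing stopped instance
            manualProvision('ami-a')  // launches a brand-new one
            {code}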

            rpilachowski Robert Pilachowski added a comment -

            Here is where the suggested change is cumbersome. Say I have modified the init script that runs when a slave is provisioned. Instead of just verifying it works by provisioning a new slave, I have to either start the existing stopped slaves or delete them.

            The proposed change is better than what exists now; I just don't like it when I try to provision a new slave and the system decides that is not what I want and does something else.

            trbaker Trevor Baker added a comment -

            > I have to either start the existing stopped slaves or delete them.

            Couldn't you modify the ec2 tags on your stopped instance? Then the master wouldn't see it or start it. At least I think this would work from reading the code.

            rpilachowski Robert Pilachowski added a comment -

            > Couldn't you modify the ec2 tags on your stopped instance? Then the master wouldn't see it or start it. At least I think this would work from reading the code.

            I am not sure it would work, but even if it did, it is another hoop to jump through.

            trbaker Trevor Baker added a comment - edited

            > it is another hoop to jump through

            Perhaps the cleanest option from a user-experience standpoint would be for manually provisioning a slave via the UI or CLI to always provision a new instance, regardless of the stop-vs-terminate idle configuration.

            For users with stop on idle, the automatic codepath based on queue demand could restart stopped nodes: you idle-stop, and you restart based on demand, automagically. The CLI has disconnect-node and reconnect-node commands, so that seems covered too.

            If Jenkins is configured with stop on idle and you want to hide an ec2 instance from Jenkins, manually updating the ec2 tags doesn't seem too onerous. You simply revert to the original tags to put it back under management. Wanting to temporarily assert control over the node and have Jenkins leave it alone is akin to temporarily detaching a running EC2 instance from an ELB or an ASG. It is a managed machine, and you need to take explicit action to remove it from management.

            With the above, the in-context help is pretty easy to explain.

            rpilachowski Robert Pilachowski added a comment -

            I can live with it. I don't agree with it, but it is better than what exists now.

            Now, if this plugin had more ASG-like features, such as being able to do a blue/green deploy when the init script or the AMI changed, that would be cool.

            As for the tags, please verify this. From personal experience, someone re-tagged one of our slaves, and because it was never used, no new slaves were provisioned even though the job queue was long.

            ferrante Matt Ferrante added a comment -

            Robert Pilachowski I experienced that as well: even though I changed the Jenkins slave labels, I was unable to spin up a new slave. It wasn't until that slave was at full capacity or (I think) the tags in AWS were changed that we were able to make a new slave.

            trbaker Trevor Baker added a comment - edited

            It worked for me.

            1. Provision a node with some tags that Jenkins sets.
            2. Stop the ec2 instance.
            3. Attempt to provision another node same as step 1; get returned the instance id from step 1
            4. Choose "Launch slave agent" and it will start and reconnect to the instance
            5. Stop the ec2 instance in the ec2 console
            6. Remove or modify the ec2 tags via the ec2 console so they don't match what is on the Jenkins slave template
            7. Attempt to provision another node same as step 1; a new instance is launched
            8. Delete the new second instance from the Jenkins UI
            9. Modify the ec2 tags on the original instance via the ec2 console back to match the Jenkins slave template
            10. Attempt to provision another node same as step 1; get returned the original instance id from step 1

            If the proposed "always provision a new node when asked for interactively" was implemented, deleting instance 2 in step 8 above wouldn't have been necessary.

            The code in question is here:
            https://github.com/jenkinsci/ec2-plugin/blob/master/src/main/java/hudson/plugins/ec2/SlaveTemplate.java#L525-L549
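            The linked code is essentially a tag-and-state lookup against EC2. A rough sketch of that shape using the AWS SDK for Java v1, which the plugin bundles; the tag key 'jenkins_slave_type' and its value are assumptions, not a quote of the plugin:

            {code:groovy}
            // Hedged sketch: find a reusable stopped instance whose tags match a template.
            import com.amazonaws.services.ec2.AmazonEC2Client
            import com.amazonaws.services.ec2.model.DescribeInstancesRequest
            import com.amazonaws.services.ec2.model.Filter

            def ec2 = new AmazonEC2Client()  // credentials from the default provider chain
            def req = new DescribeInstancesRequest().withFilters(
                    new Filter('instance-state-name').withValues('stopped'),
                    new Filter('tag:jenkins_slave_type').withValues('demand_my-template'))
            def reservations = ec2.describeInstances(req).reservations
            def candidate = reservations ? reservations[0].instances[0] : null
            println(candidate ? "reuse ${candidate.instanceId}" : 'nothing to reuse; launch a new instance')
            {code}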

            buckmeisterq Peter Buckley added a comment -

            It seems like the fix being proposed for the broken behavior of "provision via region" is to create all these extra steps outside of Jenkins, including logging into the AWS console, searching for an instance ID, and changing its tags. That seems like enormous workflow bloat forced on the user, instead of behaving in the least surprising manner - provisioning via region when that button is pressed. Or is it only when "stop on idle" is set/configured that we plan to booby-trap the "provision via region" button to require so large a song and dance?

            trbaker Trevor Baker added a comment -

            The problem case, IMO, is "stop on idle." The terminate case seems straightforward.

            I'm not a committer, nor do I use stop-on-idle. I am a heavy user of the plugin, though, and would rather use the head revision than my own fork, so I am trying to spitball solutions.

            francisu Francis Upton added a comment -

            I like Trevor's suggestion of always provisioning a new node (subject to the caps) when manual provisioning is requested; even if there is a stopped node that would already satisfy the request. I think this essentially gets us back to the old behavior, and resolves the concerns of reusing an existing node that has been stopped. It also leaves in place the behavior of reusing stopped nodes when demand requires it (not the manual provisioning case).

            I don't think we should require users to mess with the tags on the node for any normal operations just to make this stuff work. It should be clear and easy.

            Any objections to this approach?

            ferrante Matt Ferrante added a comment -

            Francis Upton, that approach sounds right. As long as we can manually provision a node, that satisfies my needs. Other enhancements around finding nodes to use are good, but sometimes we just need to spin up a brand new one. Thanks.

            mihelich Patrick Mihelich added a comment -

            Sounds good, thanks. I support always provisioning a new node on manual request.

            rpilachowski Robert Pilachowski added a comment -

            Francis Upton I agree with the approach. Taking from previous comments, and correct me if I am wrong, the behaviour should be as follows (see the sketch after the list):

            1. A manual provision request will always provision a new slave as long as instance caps are not exceeded. This provisioning happens regardless of any stopped nodes; i.e., even if there are stopped nodes, a new slave can be manually provisioned (subject to instance caps).
            2. Multiple manual provision requests can occur at the same time, again subject to instance caps. For example, I can manually provision 3 new slaves concurrently; I do not have to wait for one manual provision to complete before being allowed to start another.
            3. Automated provisioning will restart slaves that are in a stopped state before provisioning new nodes. This is existing behavior.
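            A minimal sketch of how an EnumSet can separate the manual and automatic paths, matching the "change to EnumSets" commit message below; the option names are modelled on later plugin sources and are assumptions, not the merged diff:

            {code:groovy}
            // Toy version of the fix's shape: manual provisioning forces a new
            // instance, automatic provisioning may reuse a stopped one.
            enum ProvisionOptions { ALLOW_CREATE, FORCE_CREATE }

            def findStopped = { 'i-stopped-123' }          // stub: a reusable stopped node
            def restart     = { id -> "restarted ${id}" }  // stub
            def launchNew   = { 'launched i-new-456' }     // stub (caps checked elsewhere)

            def provision = { EnumSet<ProvisionOptions> opts ->
                def stopped = findStopped()
                if (stopped && !opts.contains(ProvisionOptions.FORCE_CREATE)) {
                    return restart(stopped)  // automatic path: reuse stopped capacity first
                }
                return launchNew()           // manual path: always a brand-new instance
            }

            println provision(EnumSet.of(ProvisionOptions.ALLOW_CREATE)) // restarted i-stopped-123
            println provision(EnumSet.of(ProvisionOptions.FORCE_CREATE)) // launched i-new-456
            {code}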
            scm_issue_link SCM/JIRA link daemon added a comment -

            Code changed in jenkins
            User: Francis Upton IV
            Path:
            src/main/java/hudson/plugins/ec2/EC2Cloud.java
            src/main/java/hudson/plugins/ec2/SlaveTemplate.java
            http://jenkins-ci.org/commit/ec2-plugin/5949378da15a2fcb0f6e688b0fa7458b6ed4190a
            Log:
            JENKINS-32690 Make manually provision really work

            francisu Francis Upton added a comment -

            Can people test with this PR https://github.com/jenkinsci/ec2-plugin/pull/195 and see if this meets your needs?

            francisu Francis Upton made changes -
            Status: Open → In Progress
            scm_issue_link SCM/JIRA link daemon added a comment -

            Code changed in jenkins
            User: Francis Upton IV
            Path:
            src/main/java/hudson/plugins/ec2/EC2Cloud.java
            src/main/java/hudson/plugins/ec2/SlaveTemplate.java
            http://jenkins-ci.org/commit/ec2-plugin/a5ab74708841a8828ece8136d70c82f9911b83d3
            Log:
            JENKINS-32690 Make manually provision really work (change to EnumSets)

            rpilachowski Robert Pilachowski added a comment -

            I grabbed the PR, built it locally, and am testing now. Everything appears to be working as expected. Being able to manually provision is nice.

            francisu Francis Upton added a comment -

            Excellent, thanks for testing. I will merge this and make a release within 24 hours.

            scm_issue_link SCM/JIRA link daemon added a comment -

            Code changed in jenkins
            User: Francis Upton IV
            Path:
            src/main/java/hudson/plugins/ec2/EC2Cloud.java
            src/main/java/hudson/plugins/ec2/SlaveTemplate.java
            http://jenkins-ci.org/commit/ec2-plugin/e52a2eae39445157eef79b45da51f435ee5adfaa
            Log:
            JENKINS-32690 Make manually provision really work (#195)

            • JENKINS-32690 Make manually provision really work (change to EnumSets)
            francisu Francis Upton made changes -
            Status: In Progress → Resolved
            Resolution: Fixed
            francisu Francis Upton made changes -
            Status: Resolved → Closed
            francisu Francis Upton made changes -
            Link: This issue is duplicated by JENKINS-33945
            rtyler R. Tyler Croy made changes -
            Workflow: JNJira → JNJira + In-Review

              People

              • Assignee: francisu Francis Upton
              • Reporter: rpilachowski Robert Pilachowski
              • Votes: 7
              • Watchers: 13