JENKINS-57795

Orphaned EC2 instances after Jenkins restart

    Details

    • Type: Bug
    • Status: Open
    • Priority: Critical
    • Resolution: Unresolved
    • Component/s: ec2-plugin
    • Labels:
      None
    • Environment:
      Jenkins ver. 2.176.1, 2.204.2
      ec2 plugin 1.43, 1.44, 1.45, 1.49.1

      Description

      Sometimes after a Jenkins restart the plugin won't be able to spawn more agents.

      The plugin will just loop on this:

      SlaveTemplate{ami='ami-0efbb291c6e8cc847', labels='docker'}. Attempting to provision slave needed by excess workload of 1 units
      May 31, 2019 2:23:53 PM INFO hudson.plugins.ec2.EC2Cloud getNewOrExistingAvailableSlave
      SlaveTemplate{ami='ami-0efbb291c6e8cc847', labels='docker'}. Cannot provision - no capacity for instances: 0
      May 31, 2019 2:23:53 PM WARNING hudson.plugins.ec2.EC2Cloud provision
      Can't raise nodes for SlaveTemplate{ami='ami-0efbb291c6e8cc847', labels='docker'}
      

      If I go to the EC2 console and terminate the instance manually, the plugin will spawn a new one and use it.

      It seems there is a mismatch in the plugin logic: the part responsible for counting instances and checking the cap sees the EC2 instance, but the part responsible for picking up running EC2 instances doesn't seem to be able to find it.
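
      One way to see the mismatch from the Jenkins script console is to compare the agents Jenkins knows about with the templates the cloud is counting against. A minimal Groovy sketch (assuming EC2Cloud's getTemplates() accessor; the template toString is the SlaveTemplate{ami=..., labels=...} form seen in the log above):

          import jenkins.model.Jenkins
          import hudson.plugins.ec2.EC2Cloud

          // Agents currently registered with Jenkins.
          println 'Nodes registered with Jenkins:'
          Jenkins.instance.nodes.each { node ->
              println "  ${node.nodeName}"
          }

          // Templates the EC2 cloud counts against its instance cap.
          Jenkins.instance.clouds.each { cloud ->
              if (cloud instanceof EC2Cloud) {
                  cloud.templates.each { t -> println "  ${t}" }
              }
          }

      An instance visible in the EC2 console but absent from the node list above is exactly the orphan described here.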

      We use a single subnet, security group and VPC (I've seen some reports about this causing problems).

      We use the instanceCap = 1 setting as we are testing the plugin; this might make the problem more visible than with a higher cap.


          Activity

          Pierson Yieh added a comment (edited)

          We've identified the cause of our issue. The orphan re-attachment logic is tied to EC2Cloud's provision method, but the issue occurs when the actual number of existing AWS nodes has hit an instance cap (i.e. no more nodes can be provisioned). Because we've hit the instance cap, provisioning isn't even attempted and the orphan re-attachment logic never runs. Submitted a PR here: https://github.com/jenkinsci/ec2-plugin/pull/448
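
          Until that PR is merged, one possible stopgap is to invoke provisioning by hand from the script console so the re-attachment path gets a chance to run. An untested sketch, relying on the core Cloud.provision(Label, int) entry point; the 'docker' label matches the template in the log above:

              import jenkins.model.Jenkins
              import hudson.model.labels.LabelAtom
              import hudson.plugins.ec2.EC2Cloud

              // Invoke provisioning manually; the cloud walks its templates
              // and may pick up a matching orphaned instance along the way.
              def cloud = Jenkins.instance.clouds.find { it instanceof EC2Cloud }
              if (cloud != null) {
                  def planned = cloud.provision(new LabelAtom('docker'), 1)
                  println "Planned nodes: ${planned*.displayName}"
              }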

          Pierson Yieh added a comment

          Jakub Bochenski: was the problem you solved the orphan nodes not getting reconnected, or the agents dying during launch? Our issue is orphan nodes not getting re-attached to their respective Jenkins masters.

          Jakub Bochenski added a comment

          I was able to resolve my problem in the same way as described in https://issues.jenkins-ci.org/browse/JENKINS-61370?focusedCommentId=388247&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-388247

          The swallowing of useful error output is a big issue in itself and should be improved.

          Pierson Yieh added a comment (edited)

          We've also seen this behavior before, though we're not sure how to reproduce the problem. We saw it when we'd hit our max AWS request limit: Jenkins started losing track of nodes and couldn't spin up new ones, because the orphaned nodes were still being counted towards the max instance count but weren't showing up in the Jenkins UI.

          I'm able to "simulate" the "losing track of nodes" state by running a Groovy script on the Jenkins master that manually removes the node from the Jenkins object (see the sketch below). And we're looking into implementing a feature to automatically re-attach these orphaned nodes to Jenkins.
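
          Something along these lines (a sketch; NODE_NAME is a placeholder for the agent to orphan, and Jenkins.removeNode only forgets the agent on the Jenkins side, leaving the EC2 instance running):

              import jenkins.model.Jenkins

              // Forget the agent on the Jenkins side while its EC2 instance
              // keeps running -- this reproduces the orphaned-node state.
              def node = Jenkins.instance.getNode('NODE_NAME')
              if (node != null) {
                  Jenkins.instance.removeNode(node)
                  println "Removed ${node.nodeName}; its EC2 instance is now orphaned."
              }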

          Update: it seems SlaveTemplate.checkInstance() finds our orphan nodes, and we were able to re-attach them to the Jenkins master. Not sure why they weren't getting re-attached in the past.

          Jakub Bochenski added a comment

          Raihaan Shouhell I have filed a separate issue about the agents dying during launch, as it happens independently of this issue.


            People

            • Assignee: FABRIZIO MANFREDI
            • Reporter: Jakub Bochenski
            • Votes: 0
            • Watchers: 7

              Dates

              • Created:
              • Updated: