Jenkins / JENKINS-57649

Lockable Resources plugin should not lock a resource when it is offline

    Details

      Description

      The Lockable Resources plugin should not lock a resource when it is offline.


      For example:

      lock(label: <label>, quantity: 3, variable: 'RESOURCES')

      will lock 3 resources without checking whether the corresponding nodes are offline.
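      In Jenkinsfile form, the step above might look like this minimal declarative sketch (the label `my-test-vms` is a hypothetical name; the step acquires three matching resources regardless of whether the underlying machines are reachable):

      ```groovy
      pipeline {
          agent any
          stages {
              stage('Test') {
                  steps {
                      // Acquires 3 resources carrying the given label, offline or not.
                      lock(label: 'my-test-vms', quantity: 3, variable: 'RESOURCES') {
                          // RESOURCES holds the comma-separated names of the locked
                          // resources; RESOURCES0, RESOURCES1, ... the individual names.
                          echo "Locked: ${env.RESOURCES}"
                      }
                  }
              }
          }
      }
      ```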


      When the servers are taken out for maintenance or require troubleshooting, there is currently no way to prevent a lock on the offline resource without ADMIN access (which basically amounts to removing the resource from the Lockable Resources Manager).


      Normal users (without ADMIN access) cannot control locks on an offline resource for maintenance/troubleshooting.


        Attachments

          Activity

          jimklimov Jim Klimov added a comment - - edited

          While there is a good purpose in this request, I believe it is destined to be an offtopic/WontFix here. The plugin manages the generic concept of resources and does not even care whether they are physically represented or not (you can use it, e.g., to throttle the maximum number of jobs holding a limited amount of tokens at a time).

          That said, you can use the Groovy script option to implement your use-case-dependent logic and return `true` or `false` indicating whether a proposed resource is eligible for your job. The plugin calls such a script in a loop for each resource (your script should then also test that it matches the label expression you want) and makes a list of items with a "true" verdict, to offer one (or the first?) of those to be locked by a job.

          I am not sure if it levels the load on physical resources (giving first vs random eligible).

          As part of this logic you can do anything; for some of our in-house tests, we indeed have a script that probes SSH availability of the remote VM we'd be setting up with our product, so broken VMs are not offered to jobs. A trimmed-down example would be:

          // Returns true if a TCP connection to host:port succeeds (here: SSH reachability)
          public static boolean serverListening(String host, int port) {
              Socket s = null
              try {
                  s = new Socket(host, port)
                  return true
              } catch (Exception e) {
                  return false
              } finally {
                  if (s != null) {
                      try { s.close() } catch (Exception ignore) {}
                  }
              }
          }

          //println "Inspecting the resource to lock for requested CONTROLLER='" + CONTROLLER + "' (looking at resourceName='" + resourceName + "' resourceDescription='" + resourceDescription + "' resourceLabels='" + resourceLabels + "')"
          // + "' in build number " + build.getNumber() +

          if (serverListening(resourceName, 22)) {
              println "ACCEPTED '" + resourceName + "'"
              return true
          }

          println "Resource '" + resourceName + "' is not suitable for this job"
          return false // Tested resource is not appropriate for this build


          With this, we do however miss another ability: to quickly see which resources were last diagnosed dead. Arguably, this is out of LockableResources' scope as well (it rather belongs in Zabbix or a similar monitoring tool, or maybe a custom job that inspects the list of resources and "locks" broken ones by a specific holder, unlocking fixed ones... maybe along the lines of this fine example: https://stackoverflow.com/a/52744986/4715872).

          But it would be helpful to have such a status anyway, and have it all displayed in the same list of Available/Reserved/Locked/Broken Jenkins resources.
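
          The "custom job that reserves broken resources" idea could be sketched as a system-Groovy snippet along these lines. This is an assumption-heavy sketch: the class and method names (`LockableResourcesManager.get()`, `fromName`, `setReservedBy`) are taken from the plugin's internals and should be verified against the installed plugin version; the resource and holder names are hypothetical.

          ```groovy
          // System-Groovy sketch (runs on the controller with full Jenkins access).
          import org.jenkins.plugins.lockableresources.LockableResourcesManager

          def mgr = LockableResourcesManager.get()
          def res = mgr.fromName('vm-under-maintenance')  // hypothetical resource name
          if (res != null && !res.isReserved()) {
              // Mark the broken resource as held, so regular jobs skip it.
              res.setReservedBy('maintenance-bot')        // hypothetical holder name
              mgr.save()
          }
          ```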

          tgr Tobias Gruetzmacher added a comment -

          As Jim Klimov already explained: this plugin has no concept of "offline" nodes, or even of "nodes" at all. Each lockable resource is just a string. There are several ways to achieve what you propose; the most basic is to create jobs that take the lock and wait forever, and can be killed when maintenance is over. This is probably not the prettiest solution, just the first that came to mind.
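
          The hold-the-lock-during-maintenance workaround could look like this minimal sketch (the resource name `vm-under-maintenance` is hypothetical; aborting the build releases the lock):

          ```groovy
          pipeline {
              agent any
              stages {
                  stage('Hold for maintenance') {
                      steps {
                          lock(resource: 'vm-under-maintenance') {
                              // Sleep "forever"; abort this build when maintenance
                              // is over to release the lock.
                              sleep(time: 365, unit: 'DAYS')
                          }
                      }
                  }
              }
          }
          ```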


          I would agree that the permission concept for the plugin is currently pretty coarse. If you want to see that changed, please open a new ticket and be ready to "bring your own code"...


            People

            • Assignee: Unassigned
            • Reporter: smohanram sundararaman mohanram
            • Votes: 0
            • Watchers: 3

            Dates

            • Created:
            • Updated:
            • Resolved: