Jenkins / JENKINS-38906

Remove lock resource name from global configuration when lock is released

Details

Description

      The 'Lockable Resources' list in the Jenkins global configuration can grow quickly when the selected lock names are dynamically created and transient in nature.

      It would be nice to clean up the global configuration when a lock is released by removing the named lock entry.

        Activity

            Aaron D. Marasco added a comment (edited)

            Using code from the comments above, I have a Jenkins job that runs weekly to remove locks whose names start with certain phrases. This may have race conditions (see later comments):

            stage("JENKINS-38906") {
              def manager = org.jenkins.plugins.lockableresources.LockableResourcesManager.get()
              // Collect resources that are not locked and match the transient name prefixes
              def resources = manager.getResources().findAll {
                  (!it.locked) && (
                      it.name.startsWith("docker_rpminstalled") ||
                      it.name.startsWith("docker-rpmbuild") ||
                      it.name.startsWith("rpm-deploy")
                  )
              }
              currentBuild.description = "${resources.size()} locks"
              resources.each {
                  println "Removing ${it.name}"
                  manager.getResources().remove(it)
              }
              // Persist the trimmed list back to the global configuration
              manager.save()
            } // stage
            Sami Korhonen added a comment

            Aaron D. Marasco You should synchronize access to LockableResourcesManager; otherwise you might cause a race condition. Considering that LockableResourcesManager relies on synchronization, it should actually be a trivial task to delete a lock after it has been released.

            Sami Korhonen added a comment (edited)

            We're using this to delete locks after our Ansible plays:

            @NonCPS
            def deleteLocks(lockNames) {
              def manager = org.jenkins.plugins.lockableresources.LockableResourcesManager.get()
              // Lock on the manager itself so removal and save happen atomically
              synchronized (manager) {
                // Only remove resources that are named, unlocked, and unreserved
                manager.getResources().removeAll { r -> lockNames.contains(r.name) && !r.locked && !r.reserved }
                manager.save()
              }
            }

             Edit: I had time to study the problem further. While this does resolve the race condition when deleting an item from the list, it still isn't sufficient. The current lock allocation algorithm relies on locks never being deleted. I think that's something I can fix. However, the algorithm needs major rework: everything related to lock management has to be done with atomic operations, and to do that, management must be done in a single class. There might also be some scalability issues when allocating hundreds of locks, which could be resolved as well.
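
            As a rough illustration of the single-class, atomic approach described above, a delete operation could live inside the manager itself so it shares the same monitor as lock allocation. This is only a sketch with stand-in classes (`Manager`, `Resource`, `removeUnused` are hypothetical names, not the plugin's actual API):

            ```groovy
            // Sketch only: a minimal stand-in for the plugin's manager, showing
            // how a delete that shares the manager's monitor stays atomic with
            // respect to other synchronized operations such as lock allocation.
            class Resource {
                String name
                boolean locked
                boolean reserved
            }

            class Manager {
                private final List<Resource> resources = []

                synchronized void add(Resource r) { resources << r }

                // Remove only if the resource is neither locked nor reserved.
                // Being synchronized on the same monitor, this cannot
                // interleave mid-operation with allocation or release.
                synchronized boolean removeUnused(String name) {
                    def r = resources.find { it.name == name }
                    if (r == null || r.locked || r.reserved) {
                        return false // in use, or already gone: leave it alone
                    }
                    resources.remove(r)
                    return true
                }
            }
            ```

            In the real plugin the equivalent method would also need to persist the change (as `manager.save()` does in the snippets above).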

            Aaron D. Marasco added a comment

            Thanks Sami Korhonen for the heads-up. I don't need to worry about race conditions in my particular situation. However, I'll make a note in case others just see it and copypasta.

            Tobias Gruetzmacher added a comment

            This should be fixed with the ephemeral lock support in release 2.6: everything that is created automatically is now removed automatically.
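
            For reference, with the ephemeral behavior described above, a resource that the `lock` step creates on the fly (i.e. one not pre-declared in the global configuration) should disappear again once released, so no cleanup job is needed. A minimal pipeline sketch (the resource name here is illustrative):

            ```groovy
            // Sketch of a pipeline taking a dynamically named lock. With
            // ephemeral lock support, the resource is created when the lock
            // is acquired and removed from the global configuration when
            // the block exits and the lock is released.
            node {
                lock(resource: "deploy-${env.BUILD_NUMBER}") {
                    echo "holding transient lock"
                    // ... deployment steps ...
                }
            }
            ```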


              People

              • Assignee: Tobias Gruetzmacher
              • Reporter: Ted Lifset
              • Votes: 41
              • Watchers: 52

              Dates

              • Created:
              • Updated:
              • Resolved: