JENKINS-4646: Ability to wipe out workspace on only selected node

    Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Component/s: core
    • Labels:
    • Environment:
      Platform: All, OS: All

      Description

      I recently created a new job that I thought I had tied to a particular slave.
      Later I realized that I had in fact forgotten to tie it to that slave, and the
      first few builds ran on master. I corrected this, and the job now runs on the
      slave as desired.

      However, the master still has an old copy of the job's workspace. Since the
      workspace is ~1 GB, I would like to clean it up. But I want to leave the
      actively used workspace on the slave intact, since recreating it is much slower
      than running an incremental build: it needs a full SCM checkout followed by
      some big downloads.

      Unfortunately there seems to be no way through Hudson's GUI to delete the
      workspace on a particular node (master, in my case). I presume "Wipe Out
      Workspace" would delete all copies.

      Not sure what the best GUI would be, but perhaps the .../wipeOutWorkspace page,
      which displays a confirmation button, could show a list of checkboxes, all
      initially checked, listing the nodes on which a copy of the workspace currently
      resides (if there is more than one such node). You could uncheck some of them
      if you wished. The node on which the last build ran should be highlighted.
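
      In the meantime, something along these lines in the script console
      (<Jenkins host>/script) can remove the stale copy from a single node. This is
      only a sketch: "MyJob" and "node-name" are placeholder names, and it relies on
      the fact that the Hudson instance itself acts as the master node.

      import hudson.model.*

      // Placeholder job name - adjust for the job whose stale copy should go
      def item = Hudson.instance.getItem("MyJob")
      // The master is the Hudson instance itself; for a slave, use
      // Hudson.instance.getNode("node-name") instead (placeholder name)
      def node = Hudson.instance
      if (item != null)
      {
        def ws = node.getWorkspaceFor(item)
        if (ws != null && ws.exists())
        {
          ws.deleteRecursive()
          println("Deleted " + ws.getRemote())
        }
        else
        {
          println("Nothing to delete on " + node.getDisplayName())
        }
      }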

            Activity

            bklarson added a comment -

            I've run into a slightly similar problem. I have several slaves, and due to a repository bug the repository on a few of them became corrupt. I need to clear the workspace on all slaves, but the 'Wipe Out Workspace' button just deletes the copy on the most recently built slave.

            I agree with the proposed interface - it'd be nice to see a list of checkboxes, 1 for each slave.

            mrpotes added a comment -

            I have the same issue as bklarson. The suggested fix in the description would be great.

            pmv added a comment -

            At a minimum, could someone change the wording from 'Wipe Out Workspace' to 'Wipe Out Current Workspace' until this is looked at? I think that would better describe the current functionality.

            Ideally for us 'Wipe Out Workspace' would delete the workspace from all slaves, since we don't tie jobs to slaves and the broken job may not be the most recent one. The proposed checkbox solution would work well.

            Atis Straujums added a comment (edited) -

            The following Groovy script wipes workspaces of certain jobs on all nodes. Execute it from <Jenkins host>/computer/(master)/script

            Something like this could be implemented as a command "Wipe Out All Workspaces".

            import hudson.model.*
            // For each job
            for (item in Hudson.instance.items)
            {
              jobName = item.getFullDisplayName()
              // check that job is not building
              if (!item.isBuilding())
              {
                // TODO: Modify the following condition to select which jobs to affect
                if (jobName == "MyJob")
                {
                  println("Wiping out workspaces of job " + jobName)
                  customWorkspace = item.getCustomWorkspace()
                  println("Custom workspace = " + customWorkspace)
                  
                  for (node in Hudson.getInstance().getNodes())
                  {
                    println("  Node: " + node.getDisplayName())
                    workspacePath = node.getWorkspaceFor(item)
                    if (workspacePath == null)
                    {
                      println("    Could not get workspace path")
                    }
                    else
                    {
                      if (customWorkspace != null)
                      {
                        workspacePath = node.getRootPath().child(customWorkspace)
                      }
            
                      pathAsString = workspacePath.getRemote()
                      if (workspacePath.exists())
                      {
                        workspacePath.deleteRecursive()
                        println("    Deleted from location " + pathAsString)
                      }
                      else
                      {
                        println("    Nothing to delete at " + pathAsString)
                      }
                    }
                  }
                }
              }
              else
              {
                println("Skipping job " + jobName + ", currently building")
              }
            }
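
            One caveat, assuming the script above is used as-is: Hudson.getInstance().getNodes()
            returns only the configured slaves, so the master's copy of the workspace is never
            touched. A minimal sketch for covering the master as well (same placeholder job name
            "MyJob", and ignoring the custom-workspace case handled above) - the Hudson instance
            is itself a Node, so the same calls work on it:

            import hudson.model.*

            def item = Hudson.instance.getItem("MyJob")
            if (item != null && !item.isBuilding())
            {
              // The master's workspace for this job, resolved via the Hudson instance itself
              def masterWs = Hudson.instance.getWorkspaceFor(item)
              if (masterWs != null && masterWs.exists())
              {
                masterWs.deleteRecursive()
                println("Deleted master copy at " + masterWs.getRemote())
              }
              else
              {
                println("Nothing to delete on master")
              }
            }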
            
            Stefan Drissen added a comment -

            Our use case is similar: the repository workspace (TFS) has become corrupt on one (or more) agents due to manual deletion of workspace folders. You think you've cleaned everything up, and then months later a job runs on a corrupted agent.


              People

              • Assignee: Unassigned
              • Reporter: Jesse Glick (jglick)
              • Votes: 25
              • Watchers: 23
