  Jenkins / JENKINS-7827

Workspace cleared randomly causing jobs to fail

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Cannot Reproduce
    • Component/s: core
    • Labels:
      None
    • Environment:
      RHEL5
      Hudson 1.372
      Perforce Plugin 1.1.10
      Master/Slave (4 nodes, all RHEL5)

      Description

      I have several Maven2 jobs that run on a schedule, and on occasion, seemingly at random, one or more of those jobs will fail because the workspace is mysteriously empty. I have to go into the Perforce configuration, force the workspace to be synced, and run the build again, and it'll be fine for a day or two. The Perforce plugin flag to clear the workspace is NOT set, and the flag to force a sync is also not automatically enabled (when I check after a failed build), so it seems that maybe it's not the plugin clearing the workspace. I also can't tell when the workspace is cleared, whether before or after a build, but for whatever reason the build thinks everything is kosher and then fails because it can't find the POM. This began happening some time ago after upgrading to 1.372; it never happened on the prior version (1.342, I think).


            Activity

            Rob Petti added a comment -

            I'll put in my 2 cents. There's a hook on the "Wipe out workspace" function that calls out to the SCM to notify it that it's being deleted. When that happens, the perforce plugin will force sync on the next build, which will pull all the code from the SCM again. Since this is not happening, I'm guessing that something else is deleting the workspace. Another plugin, or a bad build script could be at fault, and perhaps even a stray crontab or obnoxious co-worker...
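
            As a rough, standalone sketch of the hook being described (the interface and class names below are hypothetical; in Jenkins core the actual extension point is SCM.processWorkspaceBeforeDeletion, which an SCM plugin can override):

{code:java}
import java.nio.file.Path;

// Standalone illustration of the "notify the SCM before wiping the workspace"
// pattern described above. All names here are made up; in Jenkins the real
// hook is SCM.processWorkspaceBeforeDeletion(AbstractProject, FilePath, Node).
interface WorkspaceDeletionListener {
    // Return true to allow the deletion to proceed.
    boolean beforeWorkspaceDeletion(Path workspace);
}

class ForceSyncingScm implements WorkspaceDeletionListener {

    private volatile boolean forceSyncOnNextBuild;

    @Override
    public boolean beforeWorkspaceDeletion(Path workspace) {
        // The workspace is about to disappear, so the next checkout must not
        // assume any files are still on disk: remember to do a full sync.
        forceSyncOnNextBuild = true;
        return true;
    }

    public boolean shouldForceSync() {
        return forceSyncOnNextBuild;
    }
}
{code}

            If the workspace is removed by something that bypasses this hook, the flag is never set, which would match the symptom of the build assuming the files are still there.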

            rshelley added a comment -

            I've upgraded to 1.382 and I'm still experiencing the issue with the workspace disappearing. The build apparently still thinks the files are there, but fails when it can't find the POM. I'm going to have to enable the option to always clear the workspace to force it to sync every build.

            Daniel Kirkdorffer added a comment -

            We've experienced this odd, seemingly random behavior as well. In our case the workspaces are on a Windows slave with a UNIX master. We've seen the behavior even when we don't have SCM polling turned on at all, so there should be no updating of the workspace.

            Like the reporter of this issue, when this happens we have to reestablish the files before the job can run successfully again.

            We're using Jenkins 1.408 on master and slave boxes, running the slave as a service.

            Rob Petti added a comment -

            Come to think of it, are you guys using custom workspace locations? I've seen other people have the same problem. Basically, Jenkins automatically clears out directories under the Jenkins workspace root on the machine (for example c:\jenkins\workspace). If you have a job (say JobA) with its workspace configured to be a directory other than the default but still inside that root (say C:\jenkins\workspace\SomeName), Jenkins will clear it out periodically, since it thinks the directory belongs to some other job that doesn't exist anymore. Hopefully that makes sense...
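
            As a rough, standalone sketch of the cleanup behavior being described (class and method names below are hypothetical; in Jenkins this is done by a periodic workspace-cleanup task): any directory under the workspace root whose name does not match an existing job is treated as orphaned and deleted.

{code:java}
import java.io.File;
import java.util.Set;

// Hypothetical illustration of the periodic sweep described above: directories
// under the workspace root that don't correspond to a known job are removed.
class OrphanedWorkspaceSweeper {

    static void sweep(File workspaceRoot, Set<String> existingJobNames) {
        File[] dirs = workspaceRoot.listFiles(File::isDirectory);
        if (dirs == null) {
            return;
        }
        for (File dir : dirs) {
            if (!existingJobNames.contains(dir.getName())) {
                // Looks like a leftover from a job that no longer exists: delete it.
                deleteRecursively(dir);
            }
        }
    }

    private static void deleteRecursively(File f) {
        File[] children = f.listFiles();
        if (children != null) {
            for (File child : children) {
                deleteRecursively(child);
            }
        }
        f.delete();
    }
}
{code}

            Under that logic, a custom workspace placed inside the workspace root under a name that isn't a current job name would periodically be swept away.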

            Daniel Kirkdorffer added a comment -

            We're not overriding the locations. We're letting Jenkins determine the job's location from the job name.

            Gardner Bickford added a comment -

            I experienced this behavior when the slave node was configured to use /tmp/jenkins as its workspace. How are your slaves launched and configured?
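
            One way to check where each slave keeps its workspaces (a hedged sketch: Jenkins.getInstance(), getNodes() and Slave.getRemoteFS() are core APIs, but the wrapper class below is made up, and you would normally adapt this into a script run on the master):

{code:java}
import hudson.model.Node;
import hudson.model.Slave;
import jenkins.model.Jenkins;

// Hypothetical helper: print each slave's remote FS root so workspaces under
// volatile locations such as /tmp are easy to spot.
public class PrintSlaveRoots {
    public static void print() {
        for (Node node : Jenkins.getInstance().getNodes()) {
            if (node instanceof Slave) {
                Slave slave = (Slave) node;
                System.out.println(slave.getNodeName() + " -> " + slave.getRemoteFS());
            }
        }
    }
}
{code}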

            Mandeep Rai added a comment -

            Does your build have "Execute concurrent builds if necessary" option turned on?

            Daniel Kirkdorffer added a comment -

            Mandeep - I don't think so. I can't even find where that option lives.

            Daniel Beck added a comment -

            Does this issue still occur on recent Jenkins versions?

            Daniel Beck added a comment -

            Resolving: No response to comment asking for updated information in over two weeks.

            Due to the age of this issue, please file a new issue if this still occurs on recent Jenkins/Perforce plugin versions.


              People

              • Assignee:
                Unassigned
              • Reporter:
                rshelley
              • Votes:
                2
              • Watchers:
                4
