JENKINS-7723

Occasional unlogged "ERROR: Expected to get an id but got instead" failure from the 'hg log --rev . --template {node}' command on an hourly job

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Component/s: mercurial-plugin
    • Labels:
      None
    • Environment:
      Hudson 1.379 / Mercurial plugin version 1.33

      Description

      I have a job that's kicked off hourly. On 5 of the last 24 runs, I've seen the failure below. This looks a lot like JENKINS-7111, but I'm not using the Timestamper plugin mentioned in that ticket.

      This has just started happening in the past day or so, after upgrading to 1.379. Prior to that I was running either 1.374 or 1.375.

      The project involved is currently pretty quiet commit-wise, especially over the last couple of days, so the 'hg log --rev .' command should be returning the same result for recent runs.

      I also see that these failures are not being logged in the hudson.log file.

      The console output is below.

      ==================================================================================

      Started by timer
      Building remotely on buildslave.fake
      [fake-job] $ hg incoming --quiet --bundle hg.bundle --template "<changeset node='{node}' author='{author|xmlescape}' rev='{rev}' date='{date}'><msg>{desc|xmlescape}</msg><added>{file_adds|stringify|xmlescape}</added><deleted>{file_dels|stringify|xmlescape}</deleted><files>{files|stringify|xmlescape}</files><parents>{parents}</parents></changeset>\n" --rev trunk
      [fake-job] $ hg log --rev . --template {node}

      ERROR: Expected to get an id but got instead.
      Sending e-mails to: me@buildslave.fake
      Finished: FAILURE


          Activity

          jpabloae jpabloae added a comment -

          I can reproduce this too, using Hudson 1.381 and the Mercurial plugin 1.33. I'm not using the Timestamper plugin. Example:

          Started by upstream project "int-ctl-0" build number 423
          Building remotely on slave1
          [int-inc-pgsql] $ hg incoming --quiet --bundle hg.bundle --template "<changeset node='{node}' author='{author|xmlescape}' rev='{rev}' date='{date}'><msg>{desc|xmlescape}</msg><added>{file_adds|stringify|xmlescape}</added><deleted>{file_dels|stringify|xmlescape}</deleted><files>{files|stringify|xmlescape}</files><parents>{parents}</parents></changeset>\n" --rev default
          [int-inc-pgsql] $ hg unbundle hg.bundle
          adding changesets
          adding manifests
          adding file changes
          added 130 changesets with 339 changes to 79 files
          (run 'hg update' to get a working copy)
          [int-inc-pgsql] $ hg update --clean --rev default
          68 files updated, 0 files merged, 1 files removed, 0 files unresolved
          [int-inc-pgsql] $ hg log --rev . --template {node}
          ERROR: Expected to get an id but got  instead.
          Email was triggered for: Failure
          Sending email for trigger: Failure
          Sending email to: some@mail.net
          Notifying upstream build int-ctl-0 #423 of job completion
          Project int-ctl-0 still waiting for 1 builds to complete
          Notifying upstream projects of job completion
          Notifying upstream of completion: int-ctl-0 #423
          Finished: FAILURE
          
          jglick Jesse Glick added a comment -

          Any steps to reproduce from scratch? What does running

          hg log --rev . --template {node}
          

          in fact do if run from your workspace? It ought to print a single changeset ID, corresponding to the revision of the workspace.
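
          For illustration only (this is not the plugin's actual source), the error message suggests a check roughly like the following, which rejects anything that does not look like a full 40-character hexadecimal node id:

          import java.util.regex.Pattern;

          // Hypothetical sketch of the validation implied by the error message;
          // the real plugin code may differ.
          public class NodeIdCheck {
              // A full Mercurial node id is 40 lowercase hex characters. An empty or
              // truncated string would produce "Expected to get an id but got  instead."
              private static final Pattern NODE_ID = Pattern.compile("[0-9a-f]{40}");

              public static String requireNodeId(String hgOutput) {
                  String id = hgOutput == null ? "" : hgOutput.trim();
                  if (!NODE_ID.matcher(id).matches()) {
                      throw new IllegalStateException("Expected to get an id but got " + id + " instead.");
                  }
                  return id;
              }

              public static void main(String[] args) {
                  // A well-formed dummy id; in the plugin the input would be the stdout of
                  // 'hg log --rev . --template {node}'.
                  System.out.println(requireNodeId("0123456789abcdef0123456789abcdef01234567"));
              }
          }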

          pwan pwan added a comment -

          I couldn't find a way to recreate it from scratch.

          The 'hg log --rev . --template {node}' command is working as expected from the workspace.

          For what it's worth, I haven't seen the issue since I changed my job from running hourly to running daily. I am still on Hudson 1.379 / Mercurial plugin 1.33

          jglick Jesse Glick added a comment -

          Well, I have to close it until I can get some clue as to why it was happening. Maybe a broken Mercurial installation? Or some unusual config with the repo location inside the workspace...?

          g7r g7r added a comment -

          On my Hudson installation this happens only on a Linux SSH slave. It looks like the master node occasionally doesn't read the slave's stdout in time, skipping a bit of the command output.
          I can't reproduce this problem consistently, but about 1 in 10 of my builds fail because of it.
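
          As an illustration of that failure mode (not the plugin's actual remoting code), losing part of a child process's stdout is what happens when the stream is not drained to end-of-file before the result is used. A minimal local sketch, which does not reproduce the Jenkins master/slave remoting path:

          import java.io.ByteArrayOutputStream;
          import java.io.InputStream;

          // Minimal local sketch, assuming the failure mode described above.
          public class DrainStdout {
              public static String run(String... cmd) throws Exception {
                  Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
                  ByteArrayOutputStream buf = new ByteArrayOutputStream();
                  try (InputStream in = p.getInputStream()) {
                      byte[] chunk = new byte[8192];
                      int n;
                      // Read until EOF rather than until the process exits, so no trailing
                      // output is dropped.
                      while ((n = in.read(chunk)) != -1) {
                          buf.write(chunk, 0, n);
                      }
                  }
                  int code = p.waitFor();
                  if (code != 0) {
                      throw new IllegalStateException("command failed with exit code " + code);
                  }
                  return buf.toString("UTF-8");
              }

              public static void main(String[] args) throws Exception {
                  System.out.println(run("hg", "log", "--rev", ".", "--template", "{node}"));
              }
          }

          Whether the remoting layer was really dropping output in this way is only a guess; the point is that the id check fails the same way whenever stdout arrives incomplete.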

          jglick Jesse Glick added a comment -

          Revision: 37300
          Author: jglick
          Date: Dec 9, 2010 9:53:48 AM
          [FIXED JENKINS-7723] Occasionally finding the rev of tip fails, for unknown reasons.
          Try to behave more robustly, though without a way to reproduce I cannot really confirm that this does not just delay problems.
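
          The commit itself is not quoted here, so as a rough idea of what "behave more robustly" could look like, here is a hypothetical retry wrapper around the id lookup (reusing the DrainStdout.run helper sketched above); the actual change in revision 37300 may do something different:

          // Hypothetical sketch only; not the code committed in revision 37300.
          public class RobustNodeId {
              public static String getNodeIdWithRetry(int attempts) throws Exception {
                  String last = "";
                  for (int i = 0; i < attempts; i++) {
                      last = DrainStdout.run("hg", "log", "--rev", ".", "--template", "{node}").trim();
                      if (last.matches("[0-9a-f]{40}")) {
                          return last;                   // got a plausible node id
                      }
                      Thread.sleep(1000L * (i + 1));     // simple backoff before retrying
                  }
                  throw new IllegalStateException("Expected to get an id but got " + last + " instead.");
              }

              public static void main(String[] args) throws Exception {
                  System.out.println(getNodeIdWithRetry(3));
              }
          }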

          jpabloae jpabloae added a comment -

          Thank you for the fix; this issue happens very frequently on our instances. Do you have a planned date for the 1.35 release of the plugin?

          jglick Jesse Glick added a comment -

          No planned release date yet.

          As mentioned in the commit log, I have no real way of knowing whether the change improved matters. If you are seeing the issue frequently, you are a valuable tester. Please consider building your own copy of 1.35-SNAPSHOT and running it on your server. (I can also supply you with a mercurial.hpi if you are not comfortable building from source.)

          g7r g7r added a comment -

          It seems that the problem is fixed in the trunk version of the plugin.

          jglick Jesse Glick added a comment -

          I've tried to release 1.35 using new site infrastructure. So far so good; we'll see if it actually appears on the update center.

          jpabloae jpabloae added a comment - edited

          jglick,

          After a week we haven't seen this issue any more. We previously hit it at least once a day, so the fix appears to be working very well. Thank you!


            People

             • Assignee: jglick Jesse Glick
             • Reporter: pwan pwan
             • Votes: 1
             • Watchers: 1
