Jenkins / JENKINS-25340

lost trend history after skipping build

    Details

    • Type: Bug
    • Status: Reopened
    • Priority: Major
    • Resolution: Unresolved
    • Component/s: junit-plugin
    • Labels:
      None
    • Environment:
      core: 1.565.3 LTS
      plugin: 0.6.0

      Description

      I had trend history, then aborted two builds (ran the job and pressed abort); after that the trend history graph became empty.
      After 2 more builds the graph filled back in with 2 results (but the results from before the aborts still did not appear).


            Activity

            teilo James Nord added a comment -

            Hardly surprising... https://github.com/jenkinsci/jenkins/commit/8d8102036f4e52aada39db37c05025b9bb31516d
            jglick Jesse Glick added a comment -

            Yup. No plans to fix until core provides a safer way of looking for builds by number.

            integer Kanstantsin Shautsou added a comment -

            This is a blocker for me. I'm limiting the number of builds to what I want to see in the trend - it's about 30; I'm not sure what performance issues that may cause or what you expect from core. Also, we can't use Jenkins test results now.

            Is there a core issue that could be linked to this one?

            teilo James Nord added a comment -

            The code that is broken in core (in 1.565) no longer lives in core - it now lives in the junit plugin.

            jglick Jesse Glick added a comment -

            If you are already aggressively deleting jobs, then fine. But people have a reasonable expectation to not have to delete build records they might care about, just in order to not have their master thrash its disk by a mere view of the job index page.

            jglick Jesse Glick added a comment -

            A fix of JENKINS-24380 would probably make this solvable without a performance regression.

            teilo James Nord added a comment -

            So it is worth fixing the issue by forcing loading of the first 10 builds.

            Or at least make it configurable for those that have a decent setup.

            integer Kanstantsin Shautsou added a comment -

            Hm... AFAIR AbstractProject.getNewBuilds() is limited to 100 builds... isn't this limit enough for showing the default trend without performance degradation?

            jglick Jesse Glick added a comment -

            isn't this limit enough for showing default trend without performance degradation?

            Afraid not. 30 perhaps, since that many are shown in the build history widget (if you are showing this widget).

            jglick Jesse Glick added a comment -

            As an aside, as to why JENKINS-23945 took so long to be reported: in part because there were other even more serious lazy-loading bugs masking it for a while (such as JENKINS-18065). After that, just because you were suffering from that bug does not mean you knew that a particular graph in the corner of the job index page was thrashing the system. You may just have concluded that Jenkins did not scale well to large systems and thrown more hardware at it, or put up with five-minute page loads.

            That is the problem with performance/scalability bugs: people cannot clamor for fixes of problems they cannot even identify without training and tools. JENKINS-25078 is another example: once the administrator was told what the source of the problem was, and how to opt out of what was for that installation an unnecessary feature, massive performance problems suddenly vanished. So we need to strike a balance of unconditionally showing useful data only when it is cheap to compute, and somehow allowing the admin (or a browsing user) to opt in to potentially more expensive but thorough reports in a way that makes the tradeoff clear.

            (And of course in the longer term we need to find a way of storing build records that makes display of routine metadata actually be cheap.)

            jglick Jesse Glick added a comment -

            By the way I just tried to reproduce this bug with a freestyle project and could not. The aborted builds are still shown in Build History, and are simply skipped over in Test Result Trend (like any other build which exists on disk but has no test result action).

            Are you using another job type? Or is LogRotator configured to delete these aborted builds?

            AbstractTestResultAction.getPreviousResult is going to need a test using RunLoadCounter to avoid accidental regressions.

            integer Kanstantsin Shautsou added a comment - edited

            FreeStyle. When new successful/unstable builds appeared, only the new builds appeared in the trend. Then I deleted the skipped builds and the graph is still not full.

            jglick Jesse Glick added a comment -

            If you delete the aborted builds then definitely the graph will stop there; that is the limitation of the current algorithm. But builds which are displayed in Build History, including aborted ones, ought not be a barrier, because displaying the history widget should be forcing them to be in memory.
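
            (A toy file-system illustration of that limitation, assuming nothing about the plugin's actual Java code: walking strictly backwards by build number stops at the first gap.)

```shell
#!/bin/sh
# Toy model of the current algorithm: the trend walks previous builds by
# number - 1 and stops at the first missing one, so builds older than a
# deleted build never appear in the graph.
demo=$(mktemp -d)
mkdir -p "$demo/1" "$demo/2" "$demo/3" "$demo/5" "$demo/6" "$demo/7"   # build 4 was deleted

shown=""
n=7                                   # start from the newest build
while [ -d "$demo/$n" ]; do           # stop as soon as a number is missing
    shown="$shown $n"
    n=$((n - 1))
done
echo "trend shows only:$shown"        # builds 1-3 are cut off by the gap at 4
rm -rf "$demo"
```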

            mbadran mohamed badran added a comment - edited

            I'm not sure what happened, but suddenly my trend results graph stopped showing after updating Jenkins to v1.591.

            Here is the log message I get in the system logs:

            hudson.model.FreeStyleProject@33544533[<project_name>] did not contain <project_name> #720 to begin with

            danielbeck Daniel Beck added a comment -

            mohamed badran: If the graph doesn't show up at all, that's a completely unrelated issue. Ask on the jenkinsci-users mailing list for advice (maybe it's a known issue); or file a bug after reading the advice and instructions on https://wiki.jenkins-ci.org/display/JENKINS/How+to+report+an+issue

            The log entry is very likely unrelated; it is tracked as JENKINS-25788.

            peterwinkler Peter Winkler added a comment -

            In our opinion, the assumption that there are no gaps in the numbering of history builds is a bad one.
            It amounts to assuming that the offered feature to delete builds is no longer a valid feature.
            Besides building our firmware, we use Jenkins to test it in several test suites overnight. In this context, FAILED (red) means that a firmware error prevented the complete test suite from running, UNSTABLE (yellow) means that the suite found firmware errors, and STABLE (green) means that the suite passed without finding firmware errors. To keep this meaning we delete test builds under the following conditions:

            • Something other than our firmware caused a FAILED build (e.g. network problems or GitHub server problems)
            • The test build was interrupted for some reason (a test developer wanted to change the versions to be tested, forgotten configuration, etc.)
            • For development purposes (to test the test in the Jenkins environment) only a subset of test cases was started in the suite (because the whole suite would take hours)

            There may be more reasons to use the “delete build” feature. In earlier Jenkins versions this procedure was possible without affecting the history graph. With the change due to JENKINS-23945, deleting a build disturbs the history graph to the point that deleting builds is no longer usable.
            We hope there is a way to respect JENKINS-23945 on the one hand and keep the history graph independent of gaps in the build list caused by deletion on the other.

            jglick Jesse Glick added a comment -

            There is no argument about the desirability of showing a trend past a few deleted builds, only the best way to implement that without regressing performance. Someone (perhaps myself) needs to spend a couple hours creating a test asserting that the number of additional build records loaded by virtue of displaying the trend graph (relative to what is shown in the Build History widget anyway) is zero or small, so that an alternate algorithm can be picked to decide which builds to include.

            (JENKINS-24380 may make it possible to introduce a core API for cheaply determining whether a build of a given number is supposed to exist without actually loading it; if so, this would be an attractive choice, though it would require a dependency on a post-LTS core release. TBD.)

            As an aside, Peter Winkler I think your use case would be better served by using the “skipped” status of test cases/suites. This records the fact that the build did run (at a certain time, with a certain changelog, etc.), and records what tests did run to completion—passing or failing—as well as recording which tests were omitted from the suite, or started to run but were killed due to some transient problem with the environment.

            peterwinkler Peter Winkler added a comment -

            The suggestion to use “skipped” would only solve a small part of our problems. A problem which leads to a build failure before the first test case is started distorts the statistics, because the firmware under test wasn't responsible for it. Moreover, “skipped” is already used for test cases which encounter hardware on which they have to skip. We couldn't differentiate these two things.

            Nevertheless, I am surprised that a well-established behavior of Jenkins was changed so restrictively without concern that customers could have a problem with the change.
            I expected at least two things:

            • a configuration item giving the customer control to switch back to the old behavior
            • a warning in the release notes about the new behavior (this would have spared the analysis time needed to find the real reason for the new behavior; maybe I'm wrong, but I haven't seen such a warning)

            Given that the change in https://github.com/jenkinsci/jenkins/commit/8d8102036f4e52aada39db37c05025b9bb31516d seems more like a workaround than a solution to JENKINS-23945 (maybe JENKINS-24380 is one), it is questionable whether the loss of well-established functionality is acceptable.

            jglick Jesse Glick added a comment -

            Release notes for 1.576 mention the fix of JENKINS-23945.

            A new configuration option is not desirable; rather better tuning of the standard behavior.

            peterwinkler Peter Winkler added a comment -

            So I have two last questions:
            1. Is there a timeline for when work on JENKINS-24380 is planned to start?
            2. Is it guaranteed that after JENKINS-24380 is done, the trend history will return to the old behavior?

            jglick Jesse Glick added a comment -

            1. Not definite, but I hope soon. 2. No, matching changes would also be needed in the JUnit plugin. Anyway this is just one possible avenue to a fix.

            peterwinkler Peter Winkler added a comment -

            Thanks for the information.
            I've made an interesting discovery. It seems that the complete trend history information is still there,
            to be found at <url-head>/<jenkins-job-name>/lastCompletedBuild/testReport/history/countGraph/png?start=0&end=100.
            (There I can see a history of 100 items although there were gaps in the history due to deleted builds.)
            For the moment I can put this URL in an img HTML tag in the job description and have the old trend graph back.
            That would mean that the history graph shown by the junit plugin isn't the same as before, and that it would be enough to change something in the junit plugin only.
            Is that right?

            jglick Jesse Glick added a comment -

            Of course the issue is in the JUnit plugin.

            brantone Brantone added a comment -

            If a workaround is to hit `<url-head>/<jenkins-job-name>/lastCompletedBuild/testReport/history/countGraph/png?start=0&end=100` ... why not just update the project job page to use that?

            Also, now that JENKINS-24380 is done and out the door, is it worth revisiting this issue? Even if it's just a matter of swapping in the workaround PNG?

            danielbeck Daniel Beck added a comment -

            Builds still need to be loaded from disk to show the tests, so nothing changed AFAICT.

            jglick Jesse Glick added a comment -

            JENKINS-24380 makes it feasible to add an API by which a plugin could ask about the list of builds which exist, which previously would have been very hard; it does not actually include such an API.

            ashu3112 Ashish Rathi added a comment -

            I am using the xUnit plugin and JUnit.
            I am assigning build numbers to my jobs via a Groovy script (so that all my jobs can share unique build numbers across Jenkins), and I believe this causes the graph to contain only the latest build number, and thus a blank "Test Result Trend" graph.
            The test results, however, can easily be used/consumed by other plugins like "test results analyser".
            If Job 1 has build numbers 5, 7, 10, 12, then the trend history will be blank and will show build number 12 only.

            jl74 Jean-Luc Raffin added a comment -

            I am using matrix (multi configuration) projects with filters to build only a subset of all the possible configurations.
            These filters are dynamic and change from one build to another, so some configurations are sometimes built and sometimes not.
            This causes "holes" in the history of the configurations, preventing the test result trend to display the entire history for a given configuration.

            For example:
            Matrix can build CONF1 and CONF2
            Build #1 is CONF1
            Build #2 is CONF1
            Build #3 is CONF2
            Build #4 is CONF2
            Build #5 is CONF1
            Build #6 is CONF1
            => CONF1 history is #1 #2 #5 #6, CONF2 history is #3 #4, and CONF1 test result trend only shows #5 #6

            I would appreciate any suggestions to get all the builds for the configuration in the test result trend...

            witokondoria Javier Delgado added a comment -

            In my use case, I perform job cleanups so that unneeded/unwanted builds (self-aborted ones) won't pollute the history. This impacts the JUnit trend.

            david_potts David Potts added a comment -

            I had assumed in the past that I was just seeing a glitch in my setup when the 'test result trend' graph "went funny".

            However, I have had a spate of build failures more recently that caused me to 'tidy up' and delete failed builds for which the testing simply was not relevant - and now can only see 2 or 3 builds in the trend chart. Not at all useful.

            And yes, I have also seen memory issues in the past, and traced it back to the way the trendChart 'recursively' read the Junit XML logs (or rather, it looked that way from the stack trace), rather than linearly counting back the jobs, and simply culling out the necessary data. From a data processing point of view, the same amount of file reading is performed; but one is locking up data on the stack, the other has the opportunity of getting rid of data not pertinent to the job in hand [ oh, and feel free to tell me I'm nowhere near correct here; I didn't look at the code in detail, merely filled in the blanks from what I saw in the stack trace combined with a very cursory glance at the code ]

            I find the 'test result trend' graph to be incredibly useful for what I do, and can't believe that it has been so compromised for so long.

            divanov Daniil Ivanov added a comment - edited

            This is caused by having non-sequential build numbers in the build directory for your job, jenkins/jobs/${JOB_NAME}/builds.
            For example, if you have builds 1 2 3 5 6 7, then the trend will show only builds 5 6 7 because build 4 is missing.
            To work around this I've created a script:

            #!/bin/sh

            max=3700
            last_dir=1
            for cur_dir in $(seq 1 $max)
            do
                if [ -d "$cur_dir" ]; then
                    if [ $(expr $cur_dir - $last_dir) -gt 1 ]; then
                        new_dir=$(expr $last_dir + 1)
                        echo moving $cur_dir to $new_dir
                        mv $cur_dir $new_dir
                        last_dir=$new_dir
                    else
                        last_dir=$cur_dir
                    fi
                fi
            done

            which moves builds down to fill the gaps. After running the script you still need to edit jenkins/jobs/${JOB_NAME}/nextBuildNumber to contain the next build number and restart Jenkins. Then, after the next successful build, the trend will be complete again.
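
            (For that last step, a minimal sketch, assuming only the standard jobs/<job>/builds layout; the throwaway directory below stands in for a real job directory.)

```shell
#!/bin/sh
# After compacting the build directories, nextBuildNumber must point one past
# the highest remaining build. Demonstrated on a throwaway job directory.
job=$(mktemp -d)                      # stands in for jenkins/jobs/${JOB_NAME}
mkdir -p "$job/builds/1" "$job/builds/2" "$job/builds/3"

highest=0
for d in "$job/builds"/*; do
    n=$(basename "$d")
    case "$n" in
        *[!0-9]*) ;;                  # ignore non-numeric entries (symlinks etc.)
        *) [ "$n" -gt "$highest" ] && highest=$n ;;
    esac
done
echo $((highest + 1)) > "$job/nextBuildNumber"
echo "nextBuildNumber is now $(cat "$job/nextBuildNumber")"
```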

            Show
            divanov Daniil Ivanov added a comment - - edited This is caused by having non-sequential build numbers in build directory for your job jenkins/jobs/${JOB_NAME}/builds. For example, you have builds 1 2 3 5 6 7, then trend results will show only builds 5 6 7 because build 4 is missing. To workaround this I've created a script: #!/bin/sh max=3700 last_dir=1 for cur_dir in $(seq 1 $max) do if [ -d "$cur_dir" ]; then if [ $(expr $cur_dir - $last_dir) -gt 1 ]; then new_dir=$(expr $last_dir + 1) echo moving $cur_dir to $new_dir mv $cur_dir $new_dir last_dir=$new_dir else last_dir=$cur_dir fi fi done which moves builds down to fill the gaps. After running the script you still need to edit jenkins/jobs/${JOB_NAME}/nextBuildNumber to place a new number and restart Jenkins. Then after next successful build trend result will be complete again.
            andyiii Andrew Martignoni III added a comment -

            The best fix would be to use information from Jenkins core to iterate rather than the current method involving consecutive numbers. If there really is no alternative to using the file system directly, then I suggest iterating over all numbers from 1 to (the contents of nextBuildNumber), or perhaps in reverse, stopping after the max number of builds to graph is reached. Is there anyone working on the plugin who could take a look at this? I haven't seen the code, so I can't be more specific or supply a patch.
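            To make that suggestion concrete, here is a hedged shell sketch of the reverse iteration (everything is simulated in a temporary directory; max_graph and the build layout are made up for the demo): builds 1 2 3 5 6 7 exist with nextBuildNumber 8, and walking backwards skips the gap at 4 instead of letting it truncate the trend.

            ```shell
            #!/bin/sh
            # Simulated job: builds 1 2 3 5 6 7 exist, build 4 was deleted.
            builds_dir=$(mktemp -d)
            for b in 1 2 3 5 6 7; do
                mkdir "$builds_dir/$b"
            done
            next=8          # contents of nextBuildNumber (simulated)
            max_graph=5     # stop once this many builds are collected
            found=0
            n=$((next - 1))
            # Walk backwards from nextBuildNumber-1, skipping missing numbers.
            while [ "$n" -ge 1 ] && [ "$found" -lt "$max_graph" ]; do
                if [ -d "$builds_dir/$n" ]; then
                    echo "include build $n"
                    found=$((found + 1))
                fi
                n=$((n - 1))
            done
            rm -rf "$builds_dir"
            ```

            The loop collects builds 7, 6, 5, 3 and 2: the missing build 4 is simply skipped, which is the behaviour the trend graph would need.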

            jglick Jesse Glick added a comment -

            Pointless to add comments here unless you understand the implementation of RunList and are able to evaluate whether the current core API suffices to implement this here or whether the core API needs to be extended.

            david_potts David Potts added a comment -

            With respect, Jesse, I'm hoping that it's not pointless to add comments indicating that this is still an issue - and that some of us are not happy with having to make use of workarounds for functionality that should be fixed within Jenkins itself.
            Not having this fixed is not enough to force me to move from Jenkins, obviously; but it does make me question how much testing is performed (because this did work at one point in time), and therefore how much reliance I can place on my use of Jenkins.

            jglick Jesse Glick added a comment -

            add comments indicating that this is still an issue

            Use the Vote for this issue link.

            il__ya Ilya Ilba added a comment -

            Could somebody please try my fix (incremental build here)? I cannot verify it myself, as I'm stuck with an old version of Jenkins which is incompatible with the latest junit.

            Show
            il__ya Ilya Ilba added a comment - Could somebody please try my fix (incremental build here ). Cannot verify it myself as I'm stuck with the old version of jenkins which is incompatible with the latest junit.

              People

              • Assignee:
                Unassigned
              • Reporter:
                integer Kanstantsin Shautsou
              • Votes:
                30
              • Watchers:
                32