Jenkins / JENKINS-46166

Distinguish tests by stage and parallel

    Details

    • Sprint:
      Blue Ocean 1.2, Blue Ocean 1.3, Blue Ocean 1.4 - beta 1, Blue Ocean 1.4 - beta 3, Blue Ocean 1.4 - beta 2

      Description

      Scope

      • Prefix the test name with the path to the stage or parallel branch the test was run in (see the sketch after the example below)
      • This should have a unit test (at least)
      • Check that the sort works correctly and that failures on the same "path" are grouped

      Note
      There are other ways of displaying this data, but they would all need some design work. We don't have the capacity for that, so we will do the minimum here.

      Example

      Jenkinsfile

      stage('Browser Tests') {
        parallel {
          stage('Firefox') {
            steps {
              sh 'mvn test'
            }
          }
          stage('Chrome') {
            steps {
              sh 'mvn test'
            }
          }
          stage('Safari') {
            steps {
              sh 'mvn test'
            }
          }
          stage('Internet Explorer') {
            steps {
              sh 'mvn test'
            }
          }
        }
      }
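
      For illustration only, here is a minimal sketch of the idea in plain Java. The TestDisplayName class and its prefixed/sortGrouped methods are hypothetical names, not the Blue Ocean API: the test name is prefixed with the enclosing stage and parallel branch names, and natural string ordering then keeps failures on the same path together.

      import java.util.Comparator;
      import java.util.List;

      /**
       * Illustrative only: builds a display name such as
       * "Browser Tests / Firefox / appstore.TestLogin" from the enclosing
       * stage/parallel block names, and sorts so that failures sharing the
       * same path end up next to each other.
       */
      class TestDisplayName {

          /** Joins the enclosing block names and the raw test name with " / ". */
          static String prefixed(List<String> enclosingBlocks, String testName) {
              if (enclosingBlocks.isEmpty()) {
                  return testName; // tests outside any stage keep their plain name
              }
              return String.join(" / ", enclosingBlocks) + " / " + testName;
          }

          /** Natural string order keeps entries with the same path prefix adjacent. */
          static void sortGrouped(List<String> prefixedNames) {
              prefixedNames.sort(Comparator.naturalOrder());
          }
      }

      For the Jenkinsfile above, prefixed(List.of("Browser Tests", "Firefox"), "appstore.TestLogin") would yield "Browser Tests / Firefox / appstore.TestLogin", and sorting the full list keeps all Firefox failures together.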
      

            Activity

            jamesdumay James Dumay added a comment -

            Phil, if there are features missing would you mind sending me a quick brain dump at jdumay@cloudbees.com ?

            lostgoat Andres Rodriguez added a comment -

            Don't want to pile too many things into this ticket, but just wanted to add that it would be nice if the results were collapsible by stage.

            For your example data above, here is how it could look when it first opens:

             

            > Browser Tests / Firefox (1)
            > Browser Tests / Chrome (1)
            > Browser Tests / Internet Explorer (1)
            > Browser Tests / Safari (1)
            

            Then expand some of the entries:

             

            > Browser Tests / Firefox (1)
              > appstore.TestThisWillFailAbunch
            > Browser Tests / Chrome (1)
              > appstore.TestThisWillFailAbunch
            > Browser Tests / Internet Explorer (1)
            > Browser Tests / Safari (1)
            

            This grouping would make it slightly easier to parse the data when a large number of test cases fail.

            For example, imagine that there are 500 other test cases that fail, TestThisWillFailAbunch[1..500]. In this scenario everything will be sorted in this order:

            > Browser Tests / Firefox - appstore.TestThisWillFailAbunch1
            > Browser Tests / Firefox - appstore.TestThisWillFailAbunch2
            > Browser Tests / Firefox - appstore.TestThisWillFailAbunch3
            ...
            > Browser Tests / Firefox - appstore.TestThisWillFailAbunch500
            > Browser Tests / Chrome - appstore.TestThisWillFailAbunch1
            ...
            > Browser Tests / Chrome - appstore.TestThisWillFailAbunch500
            ...

            Because the list is fully expanded, and it takes a long time to scroll, it is hard to answer simple questions like "Did it fail on all browsers, or just on Firefox?"

            The implementation doesn't have to be exactly as I laid it out; the general idea is just that opening the results page and getting bombarded with 10000+ failing tests isn't great. With JUnit results associated to a stage, there would finally be a good way to collapse them a bit.

            Sorry for the long post; just wanted to drop my 2c.
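
            A rough sketch of that grouping, assuming each failure already carries its stage/parallel path; the FailureGrouping and Failure names are purely illustrative, not an existing API:

            import java.util.ArrayList;
            import java.util.LinkedHashMap;
            import java.util.List;
            import java.util.Map;

            /**
             * Illustrative only: folds a flat, path-sorted list of failures into
             * one entry per stage/parallel path, so a UI could render a collapsible
             * header per path with a count, e.g. "Browser Tests / Firefox (500)".
             */
            class FailureGrouping {

                record Failure(String path, String testName) {}

                /** Groups failures by their path, preserving the sorted order. */
                static Map<String, List<String>> byPath(List<Failure> failures) {
                    Map<String, List<String>> groups = new LinkedHashMap<>();
                    for (Failure f : failures) {
                        groups.computeIfAbsent(f.path(), p -> new ArrayList<>())
                              .add(f.testName());
                    }
                    return groups;
                }
            }

            The collapsed view then only needs the map keys and the size of each value list; expanding a group shows its test names.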

             

             

            jamesdumay James Dumay added a comment -

            Andreas Krummsdorf thanks for the feedback. In this iteration, we will be providing them as a flat list.

            kshultz Karl Shultz added a comment - edited

            Testing Notes:

            • As stated in the description, unit tests should be included
            • Automated tests should also be included
            • Update: Tests were provided in the PR https://github.com/jenkinsci/blueocean-plugin/pull/1280/files#diff
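
            A minimal JUnit 4 sketch of the kind of unit test those notes call for, written against the hypothetical TestDisplayName helper sketched in the description (not an existing class in the plugin):

            import static org.junit.Assert.assertEquals;

            import java.util.ArrayList;
            import java.util.Arrays;
            import java.util.List;
            import org.junit.Test;

            public class TestDisplayNameTest {

                @Test
                public void prefixesTestNameWithStageAndParallelPath() {
                    String name = TestDisplayName.prefixed(
                            Arrays.asList("Browser Tests", "Firefox"), "appstore.TestLogin");
                    assertEquals("Browser Tests / Firefox / appstore.TestLogin", name);
                }

                @Test
                public void sortKeepsFailuresOnTheSamePathTogether() {
                    List<String> names = new ArrayList<>(Arrays.asList(
                            "Browser Tests / Firefox / appstore.TestB",
                            "Browser Tests / Chrome / appstore.TestA",
                            "Browser Tests / Firefox / appstore.TestA"));
                    TestDisplayName.sortGrouped(names);
                    assertEquals(Arrays.asList(
                            "Browser Tests / Chrome / appstore.TestA",
                            "Browser Tests / Firefox / appstore.TestA",
                            "Browser Tests / Firefox / appstore.TestB"), names);
                }
            }
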
            feliwir Stephan Vedder added a comment -

            Tests still don't get separated for us. We are using the xUnit plugin to submit the test results:

            pipeline {
              agent none
              stages {
                parallel {
                  stage('Windows') {
                    agent {
                      label 'windows'
                    }
                    stages {
                      stage('Build') {
                        // build steps
                      }
                      stage('Test') {
                        steps {
                          ctest(installation: 'InSearchPath', arguments: '-j 32 --output-on-failure --no-compress-output -T Test -T Submit', workingDir: '../build/MES-build', ignoredExitCodes: '0-255')
                        }
                        post {
                          always {
                            step([$class: 'XUnitBuilder',
                              thresholds: [
                                [$class: 'SkippedThreshold', failureThreshold: '0'],
                                [$class: 'FailedThreshold', failureThreshold: '10']],
                              tools: [[$class: 'CTestType', pattern: 'TestReport/*.xml']]])
                          }
                        }
                      }
                    }
                  }
                  stage('Linux') {
                    agent {
                      label 'linux'
                    }
                    stages {
                      stage('Build') {
                        // build steps
                      }
                      stage('Test') {
                        steps {
                          ctest(installation: 'InSearchPath', arguments: '-j 32 --output-on-failure --no-compress-output -T Test -T Submit', workingDir: '../build/MES-build', ignoredExitCodes: '0-255')
                        }
                        post {
                          always {
                            step([$class: 'XUnitBuilder',
                              thresholds: [
                                [$class: 'SkippedThreshold', failureThreshold: '0'],
                                [$class: 'FailedThreshold', failureThreshold: '10']],
                              tools: [[$class: 'CTestType', pattern: 'TestReport/*.xml']]])
                          }
                        }
                      }
                    }
                  }
                }
              }
            }

            This is completely making Jenkins unusable for us at the moment. We don't know if tests are failing on Windows or Linux, which is quite a big deal...


              People

              • Assignee:
                abayer Andrew Bayer
                Reporter:
                jamesdumay James Dumay
              • Votes:
                4
                Watchers:
                21

                Dates

                • Created:
                  Updated:
                  Resolved: