Jenkins / JENKINS-29188

api URL for REST API is not available for flowGraphTable


      Description

      The REST API isn't available for access to "step" state through the workflow "running steps" page.

      e.g. at http://<server>:8080/job/<job name>/<job num>/flowGraphTable/api

      I would expect to see a REST API page; indeed, there is a link at the bottom of the flowGraphTable page to a REST API page that does not exist.

      At the JUC in London, Jesse told me that this was probably just a bug, but if you want to make this a feature/improvement request, that's fine too.

      In general I think it would be massively useful to have access to stateful information for the workflow through the REST API, i.e. through XML or JSON. That would allow for better flexibility in how the data is used and presented.
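For reference, a sketch of what consuming such an endpoint could look like once it exists. The server and job names below are placeholders; the `tree` filter syntax is the standard Jenkins remote-API convention (the same one used to pull FlowNode data from a run's `api/json` today):

```python
# Sketch: building the URL that would expose Pipeline step state as JSON.
# Server/job names are hypothetical; the tree filter is standard Jenkins syntax.
from urllib.parse import quote

def step_api_url(server, job, build):
    """URL for a run's JSON API, filtered down to its FlowNode data."""
    return (f"{server}/job/{quote(job)}/{build}/api/json"
            "?tree=actions[nodes[id,displayName,parents]]")

url = step_api_url("http://jenkins.example.com:8080", "my-pipeline", 42)
```

From there a client would fetch `url` and walk the returned node list.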

      Moreover it would also be nice for groovy workflow script developers to be able to "inject" state such that it is discoverable through the REST API, e.g. to provide contextual information about a particular step in the workflow. In this sense perhaps it could be linked with JENKINS-26107, or a similar feature that would allow groovy workflow scripts to feed stateful information back to the presentation layer.
      Though I suppose this could be fudged in the short term, e.g. by saving this data as XML/JSON artifacts, which can then be presented through the REST API on the job's page.
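The short-term workaround above can be sketched as follows. The file name and fields are purely illustrative, not a Jenkins convention; the idea is just that a pipeline step writes its state to a file which `archiveArtifacts` would then expose via the job's REST API:

```python
# Sketch of the artifact workaround: persist contextual step state as JSON
# so it can be archived and served by the job's existing REST API.
# File name and field names are hypothetical.
import json

def publish_state(path, step, status, detail):
    """Write one step's state to a file destined for archiveArtifacts."""
    with open(path, "w") as fh:
        json.dump({"step": step, "status": status, "detail": detail}, fh)

publish_state("workflow-state.json", "deploy", "RUNNING", "pushing image")
```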


            Activity

            tomjdalton Thomas Dalton added a comment -

            @James Sandlin: Access to pipeline result/logging data was a priority for us, so in the end we wrote wrapper scripts (bash/python) that did the bulk of the work and piped logging output to files, plus a groovy class that handled collecting results, archiving logs, and generating nicely formatted email reports tailored to our needs.
            In the end this gave us greater flexibility and control over what gets reported and where than the stock support. So if the official/unofficial REST APIs don't provide what you need, you could go down a similar route, i.e. explicitly call functions at particular points in the groovy pipeline code that are responsible for "publishing" status (to interested users, perhaps beyond a firewall).

            greenscar James Sandlin added a comment - - edited

            Amazing, John Long! Thanks so much... that does exactly what I need.

            Thomas Dalton: How did you handle the sandbox? Did you disable sandboxing globally? As our Jenkins is locked down and a limited team manages the build scripts, the sandbox lockdown in Pipeline is overkill for us.

            tomjdalton Thomas Dalton added a comment -

            @James Sandlin: I'm not sure what you mean wrt the sandboxing - in terms of script functions we do need to whitelist some operations, yes, but once that's done those operations can be used repeatedly.
            We pushed a .groovy file to workflowlibs.git that collects the data we need and builds reports that get emailed out. My suggestion was that you could do similar and push specific data elsewhere, e.g. to a wider team if necessary (making sure that the information is appropriate and not sensitive with regard to your security concerns, of course).

            jamesdumay James Dumay added a comment -

            If you are looking to get the node data for a Pipeline via REST, I would recommend looking at both the Pipeline Steps API and Pipeline Node API in Blue Ocean REST.

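To make that pointer concrete, a sketch of the Blue Ocean node-listing URL. As far as I recall, the organization segment is `jenkins` on a default install; the server and pipeline names here are placeholders:

```python
# Sketch: the Blue Ocean REST endpoint that lists a run's nodes
# (stages and parallel branches). Server/pipeline names are placeholders;
# the "jenkins" organization segment is the default-install value.
def blueocean_nodes_url(server, pipeline, run):
    return (f"{server}/blue/rest/organizations/jenkins"
            f"/pipelines/{pipeline}/runs/{run}/nodes/")

url = blueocean_nodes_url("http://jenkins.example.com:8080", "my-pipeline", 42)
```

Fetching that URL returns a JSON array of node objects with ids, display names, and timing data.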
            jiangty_addepar Damien Jiang added a comment - - edited

            We're trying to do analytics on the runtime of each node using a combination of the Blue Ocean REST API, the normal Jenkins REST API, and the /wfapi routes. We've run into problems collecting all the information we want: for each node, we want its:

            • name (from Blue Ocean API)
            • start time (from Blue Ocean API)
            • runtime (from Blue Ocean API, but runtime for nodes in parallel steps is inaccurate, as described here: https://issues.jenkins-ci.org/browse/JENKINS-38536)
            • computer (Jenkins slave) the node runs on.

            This last piece of data isn't accessible from the Blue Ocean API, since it excludes stages and parallels/blocks, and we need to find the log for the `Allocate Node` block to find the computer, as far as I know?

            So, we:

            • Use `api/json?tree=actions[nodes[displayName,id,parents]]` to get a list of all workflow steps
            • map `Allocate node : Start` steps to the nodes found in Blue Ocean (kind of fuzzy, since parallel steps aren't allocated a node but appear in Blue Ocean, while the `Declarative: Post Actions` step is allocated a node, but doesn't appear in Blue Ocean)
            • use `/execution/node/[node_id]/wfapi/log` to read the log for the `Allocate node : Start` step and actually get the computer name

            It seems like an API version of the flowGraphTable would simplify this process by an awful lot, since all the information we want (other than the logs for Allocate Node steps) can be found by actually viewing the flowGraphTable.
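The mapping step in the workaround above can be sketched as follows. The sample data is fabricated to mirror the shape of the `tree=actions[nodes[...]]` response; in practice the returned ids feed the `/execution/node/[node_id]/wfapi/log` lookup:

```python
# Sketch of the mapping step: from the tree-filtered JSON, pick out the
# "Allocate node : Start" FlowNodes whose logs name the computer.
# Sample data below is fabricated to mirror the API's shape.
def allocate_node_steps(api_json):
    """Return ids of all 'Allocate node : Start' FlowNodes in a run."""
    ids = []
    for action in api_json.get("actions", []):
        for node in action.get("nodes", []):
            if node.get("displayName") == "Allocate node : Start":
                ids.append(node["id"])
    return ids

sample = {"actions": [{"nodes": [
    {"id": "4", "displayName": "Allocate node : Start", "parents": ["3"]},
    {"id": "7", "displayName": "Shell Script", "parents": ["4"]},
]}]}
ids = allocate_node_steps(sample)  # ids == ["4"]
```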


              People

              • Assignee:
                jglick Jesse Glick
              • Reporter:
                tomjdalton Thomas Dalton
              • Votes: 7
              • Watchers: 17
