
[JENKINS-52189] Ensure GraphListeners are notified of the FlowStartNode too #232

Merged: 3 commits, Feb 1, 2019

Conversation

@svanoort (Member) commented Jul 2, 2018

JENKINS-52189

We might consider initializing the CpsThreadGroup before the FlowStartNode (in CpsFlowExecution) to make this more "correct", but I'm a bit concerned that might have unintended consequences.

@@ -117,6 +117,19 @@ void newStartNode(FlowStartNode n) throws IOException {
            this.head = execution.startNodes.push(n);
        }
        execution.storage.storeNode(head, false);

        CpsThreadGroup c = CpsThreadGroup.current();
        if (c != null) {
@svanoort (Member, Author) commented on the "if (c != null)" line:
I'm leaving this in, in case we do get a proper CpsThreadGroup down the road, so it does The Right Thing (tm).
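For context, the hunk above cuts off inside the null check. A sketch of how the elided branches could notify listeners follows; it assumes (an assumption, not shown in the hunk) that CpsFlowExecution exposes a notifyListeners(List&lt;FlowNode&gt;, boolean synchronous) helper that fans out to registered GraphListeners, and that java.util.Collections is imported in the file:

        // Sketch only; not necessarily the exact merged code.
        // Assumes CpsFlowExecution.notifyListeners(List<FlowNode>, boolean synchronous)
        // dispatches to synchronous (true) and asynchronous (false) GraphListeners.
        CpsThreadGroup c = CpsThreadGroup.current();
        if (c != null) {
            // Kept in case a proper CpsThreadGroup exists here someday (see the comment above).
            c.getExecution().notifyListeners(Collections.singletonList(head), true);
            c.getExecution().notifyListeners(Collections.singletonList(head), false);
        } else {
            // At FlowStartNode creation the CpsThreadGroup is not yet initialized,
            // so notify listeners directly through the owning execution.
            execution.notifyListeners(Collections.singletonList(head), true);
            execution.notifyListeners(Collections.singletonList(head), false);
        }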

@t-8ch (Contributor) commented Jul 3, 2018

This currently does not work for GraphListeners attached via FlowExecution.addListener() in FlowExecutionListener.onRunning(): the FlowStartNode is emitted before onRunning() is invoked.

It is obviously possible to use a global extension, but I was under the impression that onRunning() was an intended place to add new listeners.
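To make the pattern in question concrete, here is a minimal sketch of attaching a per-execution GraphListener from FlowExecutionListener.onRunning(); the class name and the println body are illustrative only:

    import hudson.Extension;
    import org.jenkinsci.plugins.workflow.flow.FlowExecution;
    import org.jenkinsci.plugins.workflow.flow.FlowExecutionListener;
    import org.jenkinsci.plugins.workflow.flow.GraphListener;
    import org.jenkinsci.plugins.workflow.graph.FlowNode;

    /** Illustrative sketch: registers a per-execution GraphListener once the run is running. */
    @Extension
    public class OnRunningGraphListenerExample extends FlowExecutionListener {
        @Override
        public void onRunning(FlowExecution execution) {
            // By the time onRunning() fires, the FlowStartNode has already been
            // emitted, so this listener never sees it -- the gap described above.
            execution.addListener(new GraphListener() {
                @Override
                public void onNewHead(FlowNode node) {
                    System.out.println("new head: " + node.getId());
                }
            });
        }
    }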

@jglick (Member) commented Oct 3, 2018

Note: this might solve a minor display issue in logs that I noticed in JEP-210 (see #252): we display the FlowEndNode but not the FlowStartNode. It depends on the timing of this event, I suppose.
