combine checks into a single job #4
Conversation
Any idea why these checks don't run on this repo?
@ru-fu you need to tweak the trigger settings, as in https://github.com/canonical/lxd/blob/main/.github/workflows/tests.yml#L2-L4
Force-pushed from cd59141 to 5f0b31f.
Couldn't get that to work. :( But I tested with another repo now:
Thanks!
thanks!
I think this is a good idea. The only thing I'd ask to change, if possible, is to ensure that earlier failures don't prevent later checks from running. Since the checks are now sequential, that's what happens: in the failure case, a failed spelling check prevents the link and woke checks from running, which is a pity because it's nicer to see all applicable errors at once.
Two ideas for solutions - not sure which are actually possible:
- If there were somehow a way of telling the GitHub job to try subsequent steps even if an earlier one failed - but still fail the entire job if any step fails. Maybe by recording step outcomes and failing at the end if any of them are bad.
- If there were a way to parallelize running steps within a single job
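A minimal sketch of the first idea (the step names and make commands are placeholders, not the actual workflow): with if: success() || failure(), each step runs unless the run was cancelled, even after an earlier step failed, and the job as a whole still fails if any step failed.

```yaml
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Spelling check
        # Run even if an earlier step failed (but not if the run was cancelled).
        if: success() || failure()
        run: make spellcheck  # placeholder command

      - name: Link check
        if: success() || failure()
        run: make linkcheck  # placeholder command

      - name: Inclusive naming check
        if: success() || failure()
        run: make woke  # placeholder command
```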
It is nice in theory to have parallel jobs, but due to the wait times to get workers on GitHub runners (at least in the Canonical org), it can mean tens of minutes or even hours (3.5 hours yesterday) of extra wait time. So this is an efficiency change, to reduce both resource consumption (from repeatedly building the same source to run different checks) and the wait time to get access to workers in the first place.
Force-pushed from 5f0b31f to 815ccc2.
I think you can force steps to always run, independent of previous results. Let's test.
Force-pushed from 815ccc2 to 4f48723.
This works now: https://github.com/canonical/microcloud/actions/runs/6338395104/job/17215408285?pr=171 It only surfaces the first error, but the other checks still run, so you can open them to see whether they passed.
It's an improvement, but it's not ideal that we can't see error icons on subsequent steps. I have a suggestion: can you try this at the job level? See the GitHub docs.
Instead of if: success() || failure(), please try continue-on-error: true at the job level.
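For reference, roughly what this suggestion would look like (job name and command are placeholders): continue-on-error set at the job level tells GitHub not to fail the workflow run when this job fails, which is the behaviour discussed below.

```yaml
jobs:
  checks:
    # Job-level setting: a failure of this job does not
    # mark the overall workflow run as failed.
    continue-on-error: true
    runs-on: ubuntu-latest
    steps:
      - run: make checks  # placeholder command
```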
This won't work, because the full job will then return success instead of failure.
It may work or it may not, I'm not sure. I'd suggest you try it, as it would be cleaner if it did work. The behaviour may have changed - see https://stackoverflow.com/a/76877377.
Force-pushed from 4f48723 to 083d2d8.
I have tried this before. It has not changed. Here's the proof: https://github.com/canonical/microcloud/actions/runs/6339134573/job/17217641161?pr=171
Every job takes up a slot on a runner - which means more load on runners, and also more queuing time (for everyone). Combining the three jobs into one job with steps for each decreases the load, and it should also speed things up overall since we're checking out the repo only once. Signed-off-by: Ruth Fuchss <ruth.fuchss@canonical.com>
Force-pushed from 083d2d8 to f793621.
Looking at this diff, I wonder: did you add those settings at the step level rather than the job level? Did you try adding them at the job level?
I have added them at the step level.
That may be true. I'd like to try it though, because it's unclear to me whether that's the behaviour, e.g. according to https://stackoverflow.com/a/73357322:
Another idea that may work and would make it clearer which steps failed would be to continue on error per step, and then add a step at the end of the job aggregating which prior steps failed. This way, we don't get the confusing display of the first step failing but not others, and with aggregation we should be able to print out which steps failed, so that the user can check those. Something like this:
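The aggregation idea might look roughly like this (step ids and make commands are placeholders, not the actual workflow): each check step continues on error, and a final step inspects steps.<id>.outcome and fails the job while printing which checks failed.

```yaml
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Spelling check
        id: spellcheck
        continue-on-error: true
        run: make spellcheck  # placeholder command

      - name: Link check
        id: linkcheck
        continue-on-error: true
        run: make linkcheck  # placeholder command

      - name: Report failed checks
        if: always()
        run: |
          failed=""
          [ "${{ steps.spellcheck.outcome }}" = "failure" ] && failed="$failed spellcheck"
          [ "${{ steps.linkcheck.outcome }}" = "failure" ] && failed="$failed linkcheck"
          if [ -n "$failed" ]; then
            echo "Failed checks:$failed"
            exit 1
          fi
```

With continue-on-error, a failed step's conclusion is success but its outcome is failure, which is what the final step checks.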
Can we just get this in and improve on it later? I'm sure there are ways to tune it to make it even better, but I don't have time to spend hours on this at the moment to find the perfect solution.
We've traded off clarity of error reporting for resource / speed efficiency. I trust that this practical trade-off makes sense, so I'll approve, but it's not a nice report to read now. I've added an issue to slightly improve the error reporting (although it still won't be as good as the parallel case): #5.