
fix(celery): close celery.apply spans even without after_task_publish, when using apply_async [backport 2.14] #10893

Merged 2 commits into 2.14 from backport-10676-to-2.14 on Oct 1, 2024

Conversation

github-actions[bot] (Contributor) commented Oct 1, 2024

Backport 0d28e08 from #10676 to 2.14.

The instrumentation for the Celery integration relies on various Celery signals to start and end the span when `apply_async` is called.

The integration can fail if the expected signals don't fire, which can lead to broken context propagation (and unexpected traces).

Example:

  • dd-trace-py expects the `before_task_publish` signal to start the span and the `after_task_publish` signal to close it. If `after_task_publish` is never called (which can happen if a Celery exception occurs while processing the app), the span won't finish.
  • The same can happen with `task_prerun` and `task_postrun`.

Solution

This PR patches `apply_async` so that, when it is called, the integration checks for any lingering span and closes it.

If an internal exception happens, the error will be marked on the `celery.apply` span.

To track this, I added new logs in debug mode:

The after_task_publish signal was not called, so manually closing span

and

The task_postrun signal was not called, so manually closing span

Related PR #10848 improves how we extract information based on the protocols, which also affects when spans get closed.

Special Thanks:

  • Thanks to @tabgok for going through this with me in great detail!
  • @timmc-edx for helping us track it down!

APMS-13158

Checklist

  • PR author has checked that all the criteria below are met
  • The PR description includes an overview of the change
  • The PR description articulates the motivation for the change
  • The change includes tests OR the PR description describes a testing strategy
  • The PR description notes risks associated with the change, if any
  • Newly-added code is easy to change
  • The change follows the library release note guidelines
  • The change includes or references documentation updates if necessary
  • Backport labels are set (if applicable)

Reviewer Checklist

  • Reviewer has checked that all the criteria below are met
  • Title is accurate
  • All changes are related to the pull request's stated goal
  • Avoids breaking API changes
  • Testing strategy adequately addresses listed risks
  • Newly-added code is easy to change
  • Release note makes sense to a user of the library
  • If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  • Backport labels are set in a manner that is consistent with the release branch maintenance policy

Co-authored-by: Emmett Butler <723615+emmettbutler@users.noreply.github.com>
(cherry picked from commit 0d28e08)
@wantsui (Collaborator) left a comment:

LGTM

datadog-dd-trace-py-rkomorn bot commented Oct 1, 2024

Datadog Report

Branch report: backport-10676-to-2.14
Commit report: 49e2a44
Test service: dd-trace-py

✅ 0 Failed, 1196 Passed, 0 Skipped, 36m 24.54s Total duration (1m 29.7s time saved)

@wantsui wantsui closed this Oct 1, 2024
@wantsui wantsui reopened this Oct 1, 2024
github-actions bot (Contributor, Author) commented Oct 1, 2024

CODEOWNERS have been resolved as:

releasenotes/notes/fix-celery-apply-async-span-close-b7a8db188459f5b5.yaml  @DataDog/apm-python
ddtrace/contrib/internal/celery/app.py                                  @DataDog/apm-core-python @DataDog/apm-idm-python
ddtrace/contrib/internal/celery/signals.py                              @DataDog/apm-core-python @DataDog/apm-idm-python
tests/contrib/celery/test_integration.py                                @DataDog/apm-core-python @DataDog/apm-idm-python

pr-commenter bot commented Oct 1, 2024

Benchmarks

Benchmark execution time: 2024-10-01 21:31:00

Comparing candidate commit 49e2a44 in PR branch backport-10676-to-2.14 with baseline commit c7df637 in branch 2.14.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 350 metrics, 48 unstable metrics.

@wantsui wantsui enabled auto-merge (squash) October 1, 2024 21:17
@wantsui wantsui merged commit a213e8d into 2.14 Oct 1, 2024
635 checks passed
@wantsui wantsui deleted the backport-10676-to-2.14 branch October 1, 2024 21:33