
[v23.3.x] k/group_manager: return not_coordinator quickly in tx operations #23187

Merged

Conversation

vbotbuildovich (Collaborator)
Backport of PR #23176

@vbotbuildovich vbotbuildovich added this to the v23.3.x-next milestone Sep 4, 2024
@vbotbuildovich vbotbuildovich added the kind/backport label (PRs targeting a stable branch) Sep 4, 2024
group_manager::attached_partition::catchup_lock can get blocked for
extended periods of time. For example, in the following scenario:
1. consumer_offsets partition leader gets isolated
2. some group operation acquires a read lock and tries to replicate a
  batch to the consumer_offsets partition. This operation hangs for an
  indefinite period of time.
3. the consumer_offsets leader steps down
4. group state cleanup gets triggered, tries to acquire a write lock,
  hangs until (2) finishes

Meanwhile, clients trying to perform any tx group operations will get
coordinator_load_in_progress errors and blindly retry, without even
trying to find the real coordinator.

Check for leadership without the read lock first to prevent that (this
is basically a "double-check" pattern, as we have to check a second
time under the lock); see the sketch below.

(cherry picked from commit 440ed2c)
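
For illustration, here is a minimal C++ sketch of the double-check pattern the description refers to. All names (`attached_partition`, `catchup_lock`, `is_leader`, `do_tx_operation`) are hypothetical stand-ins modeled on the PR text, and `std::shared_mutex` is used in place of the actual Redpanda/Seastar lock primitives:

```cpp
// Hypothetical sketch, not the actual Redpanda code: illustrates checking
// leadership both before and after taking the read lock ("double-check").
#include <atomic>
#include <shared_mutex>

enum class error_code { none, not_coordinator };

struct attached_partition {
    std::shared_mutex catchup_lock;   // group ops take read, cleanup takes write
    std::atomic<bool> leader{false};  // assumed to be updated by the raft layer

    bool is_leader() const { return leader.load(); }
};

error_code do_tx_operation(attached_partition& p) {
    // First check, without the lock: if we are not the leader, fail fast with
    // not_coordinator so clients re-discover the coordinator instead of
    // retrying coordinator_load_in_progress against a stuck partition.
    if (!p.is_leader()) {
        return error_code::not_coordinator;
    }

    std::shared_lock read_lock(p.catchup_lock);

    // Second check, under the lock: leadership may have changed while we
    // were waiting to acquire catchup_lock, so it must be re-validated here.
    if (!p.is_leader()) {
        return error_code::not_coordinator;
    }

    // ... replicate the tx batch to the consumer_offsets partition ...
    return error_code::none;
}
```

The point of the fast path is that the first check never touches catchup_lock: even if a stuck replication call holds the read lock and a cleanup task is queued for the write lock, a non-leader node can still answer not_coordinator immediately.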
@ztlpn ztlpn force-pushed the backport-pr-23176-v23.3.x-856 branch from b11dbe3 to cbd4cdc on September 6, 2024 09:26
ztlpn (Contributor) commented Sep 6, 2024

@ztlpn ztlpn merged commit c03072f into redpanda-data:v23.3.x Sep 6, 2024
16 checks passed
Labels
area/redpanda, kind/backport (PRs targeting a stable branch)