Poor thread utilization when using ExclusiveResource.GLOBAL_KEY #3928

Closed
Tracked by #3990
mpkorstanje opened this issue Aug 15, 2024 · 2 comments · Fixed by #4004

Comments

@mpkorstanje
Contributor

mpkorstanje commented Aug 15, 2024

When executing tests in parallel with a fixed number of threads, and several tests attempt to acquire a read/write lock on the GLOBAL_KEY, threads are likely to be assigned in a sub-optimal manner.

This issue was originally reported as cucumber/cucumber-jvm#2910. The key to this issue is the low max-pool size. This is a common, if crude, way to limit the number of active web drivers.

Steps to reproduce

See https://github.com/mpkorstanje/junit5-scheduling for a minimal reproducer.

junit.jupiter.execution.parallel.enabled=true
junit.jupiter.execution.parallel.mode.default=concurrent
junit.jupiter.execution.parallel.config.strategy=fixed
junit.jupiter.execution.parallel.config.fixed.parallelism=3
junit.jupiter.execution.parallel.config.fixed.max-pool-size=3
@Isolated
class SerialATest {

    @BeforeEach
    void simulateWebDriver() throws InterruptedException {
        System.out.println("Serial A: " + Thread.currentThread().getName());
        Thread.sleep(1000);
    }

    // ... several tests

}

@Isolated
class SerialBTest {

    // ... copy of SerialATest

}

class ParallelTest {

    @BeforeEach
    void simulateWebDriver() throws InterruptedException {
        System.out.println("Parallel: " + Thread.currentThread().getName());
        Thread.sleep(1000);
    }

    // ... several tests

}

Executing these tests will likely result in an output similar to this:

Parallel: ForkJoinPool-1-worker-2
Parallel: ForkJoinPool-1-worker-2
Parallel: ForkJoinPool-1-worker-2
Parallel: ForkJoinPool-1-worker-2
Parallel: ForkJoinPool-1-worker-2
Serial A: ForkJoinPool-1-worker-3
Serial A: ForkJoinPool-1-worker-3
Serial B: ForkJoinPool-1-worker-1
Serial B: ForkJoinPool-1-worker-1

The output implies that worker-1 and worker-3 are waiting to acquire the ExclusiveResource.GLOBAL_KEY, leaving only worker-2 to process the parallel section on its own. Once done, the other workers can then acquire the lock in turn. In the ideal scenario, the parallel section would be executed with the maximum number of workers.
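To make the role of the small max-pool size concrete: a fixed-parallelism configuration yields a ForkJoinPool whose maximum pool size caps the total number of worker threads, so the pool can never grow past that cap while workers sit blocked on a lock. The sketch below is illustrative only (the parameter choices are my own, not JUnit's actual code):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;

public class FixedStrategyPoolDemo {

    // Roughly the kind of pool a "fixed" strategy produces (a sketch;
    // parameter values here are illustrative assumptions).
    static ForkJoinPool newFixedPool(int parallelism, int maxPoolSize) {
        return new ForkJoinPool(
                parallelism,
                ForkJoinPool.defaultForkJoinWorkerThreadFactory,
                null,           // no uncaught-exception handler
                false,          // asyncMode: LIFO queues, as for fork/join
                0,              // corePoolSize
                maxPoolSize,    // hard cap: the pool can never grow past this
                1,              // minimumRunnable
                null,           // saturate predicate (null: throw when saturated)
                30, TimeUnit.SECONDS);
    }

    public static int run() {
        ForkJoinPool pool = newFixedPool(3, 3);
        int parallelism = pool.getParallelism();
        pool.shutdown();
        return parallelism;
    }

    public static void main(String[] args) {
        // With maxPoolSize == parallelism, two workers blocked on the global
        // lock leave only one worker for all of the concurrent tests.
        System.out.println("parallelism=" + run());
    }
}
```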

Context

  • Jupiter: 5.10.3
  • Java 17

Deliverables

To be determined.

@mpkorstanje mpkorstanje changed the title Liveness starvation when using ExclusiveResource.GLOBAL_KEY Poor thread utilization when using ExclusiveResource.GLOBAL_KEY Aug 15, 2024
@marcphilipp
Member

Thanks for the reproducer! Any ideas for optimizing this? Scheduling @Isolated tests to run last? /cc @leonard84

@mpkorstanje
Contributor Author

mpkorstanje commented Sep 17, 2024

I am thinking that if a thread cannot acquire the read/write locks it needs, it could start work-stealing and speed up the parallel sections that are able to run.

https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ForkJoinTask.html#helpQuiesce--

I've never used this before, so I'm not sure how performant it would be.
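For reference, ForkJoinTask.helpQuiesce() lets a worker execute queued tasks instead of idling. The minimal sketch below (my own illustration, not a proposed patch) forks a batch of sub-tasks from inside the pool and uses helpQuiesce() to drain them before returning:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinTask;
import java.util.concurrent.RecursiveAction;
import java.util.concurrent.atomic.AtomicInteger;

public class HelpQuiesceDemo {

    public static int run() {
        AtomicInteger completed = new AtomicInteger();
        ForkJoinPool pool = new ForkJoinPool(3);
        pool.invoke(new RecursiveAction() {
            @Override
            protected void compute() {
                // Fork a batch of sub-tasks onto this worker's queue.
                for (int i = 0; i < 10; i++) {
                    Runnable increment = completed::incrementAndGet;
                    ForkJoinTask.adapt(increment).fork();
                }
                // Instead of blocking, help execute tasks until the pool is
                // quiescent; all forked tasks have completed on return.
                ForkJoinTask.helpQuiesce();
            }
        });
        pool.shutdown();
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println("completed=" + run()); // completed=10
    }
}
```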

@marcphilipp marcphilipp self-assigned this Sep 18, 2024
@marcphilipp marcphilipp modified the milestones: 5.11.1, 5.10.4 Sep 18, 2024
marcphilipp added a commit that referenced this issue Sep 19, 2024
Rather than executing tasks requiring the global read-write lock
while forked tasks are still being executed and thus causing contention,
such isolated tasks are now executed after all other work is done.

Resolves #3928.

(cherry picked from commit c8496a2)
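The fix described in the commit message above can be sketched in miniature (names and structure are hypothetical, not the actual JUnit code): tasks that need the global lock are set aside, the concurrently-executable tasks drain first using the full pool, and only then do the isolated tasks run serially:

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;

public class DeferredIsolationDemo {

    record Task(String name, boolean requiresGlobalLock, Runnable body) {}

    static void execute(List<Task> tasks, ForkJoinPool pool) {
        Queue<Task> isolated = new ArrayDeque<>();
        for (Task task : tasks) {
            if (task.requiresGlobalLock()) {
                isolated.add(task); // defer: don't park a worker on the lock
            } else {
                pool.execute(task.body());
            }
        }
        // Let the concurrent tasks finish first, using the full pool.
        pool.awaitQuiescence(1, TimeUnit.MINUTES);
        // Only now run the isolated tasks, one after another.
        for (Task task : isolated) {
            task.body().run();
        }
    }

    public static String run() {
        ConcurrentLinkedQueue<String> log = new ConcurrentLinkedQueue<>();
        ForkJoinPool pool = new ForkJoinPool(3);
        execute(List.of(
                new Task("parallel-1", false, () -> log.add("P")),
                new Task("isolated-1", true, () -> log.add("I")),
                new Task("parallel-2", false, () -> log.add("P")),
                new Task("isolated-2", true, () -> log.add("I"))), pool);
        pool.shutdown();
        return String.join("", log);
    }

    public static void main(String[] args) {
        System.out.println(run()); // parallel before isolated: PPII
    }
}
```

Deferring instead of blocking means no worker is ever parked waiting for the write lock while parallelizable work remains, which is exactly the situation the reproducer output shows.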
@marcphilipp marcphilipp modified the milestones: 5.10.4, 5.11.1 Sep 19, 2024