
[BEAM-13015] Update the SDK harness grouping table to be memory bounded based upon the amount of assigned cache memory and to use an LRU eviction policy. #17327

Merged
3 commits merged into apache:master on May 16, 2022

Conversation

lukecwik (Member) commented Apr 9, 2022:


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Choose reviewer(s) and mention them in a comment (R: @username).
  • Format the pull request title like [BEAM-XXX] Fixes bug in ApproximateQuantiles, where you replace BEAM-XXX with the appropriate JIRA issue, if applicable. This will automatically link the pull request to the issue.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch):

  • Build python source distribution and wheels
  • Python tests
  • Java tests
See CI.md for more information about GitHub Actions CI.

…n amount of assigned cache memory and also use an LRU policy for evicting entries from the table.
lukecwik (Member Author) commented Apr 9, 2022:

R: @youngoli

lukecwik (Member Author):
Run Java PreCommit

lukecwik (Member Author):
Run Python_PVR_Flink PreCommit

lukecwik (Member Author):
Run Java PreCommit

lukecwik (Member Author) commented Apr 28, 2022:

R: @Abacn

aaltay (Member) commented May 12, 2022:

@Abacn - could you please review this change?

```java
    }
    return tableEntry;
  });
weight += entry.getWeight();
```
Contributor:
is this accurate if entry is not new?

lukecwik (Member Author):
Fixed and updated tests since it turned out we weren't accounting for the grouping table key.
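The fix described here — counting the grouping key's weight only when a new entry is created, and counting it symmetrically on eviction — can be sketched with a stand-in table. The class and member names below are illustrative, not Beam's actual implementation, and `String.length()` stands in for the real byte-size estimates:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: total weight covers key + entry, and the key's weight is
// added only when the entry is first created, never on updates to an existing one.
class WeightedLruTable {
  // accessOrder=true iterates least-recently-used entries first, giving LRU eviction.
  private final LinkedHashMap<String, Long> lruMap = new LinkedHashMap<>(16, 0.75f, true);
  private long weight;
  private final long maxWeight;

  WeightedLruTable(long maxWeight) {
    this.maxWeight = maxWeight;
  }

  void put(String groupingKey, long entryWeight) {
    Long previous = lruMap.put(groupingKey, entryWeight);
    if (previous == null) {
      weight += groupingKey.length() + entryWeight; // new entry: charge the key's weight too
    } else {
      weight += entryWeight - previous; // existing entry: only the accumulator delta
    }
    Iterator<Map.Entry<String, Long>> it = lruMap.entrySet().iterator();
    while (weight > maxWeight && it.hasNext()) {
      Map.Entry<String, Long> oldest = it.next();
      weight -= oldest.getKey().length() + oldest.getValue(); // symmetric with the insert path
      it.remove();
    }
  }

  long weight() { return weight; }
  int size() { return lruMap.size(); }
}
```

Updating an existing key only adjusts the accumulator delta, so the key's weight is never counted twice across `put` calls for the same key.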

```java
@SuppressWarnings({
  "nullness" // TODO(https://issues.apache.org/jira/browse/BEAM-10402)
})
@NotThreadSafe
```
Contributor:
Document why? Also seems to contradict the requirement of Shrinkable?

lukecwik (Member Author):
Documented that put and flush must be called from the bundle processing thread. shrink can be called from any thread.


```java
// Get the updated weight now that the cache may have been shrunk and respect it
long currentMax = maxWeight.get();
if (weight > currentMax) {
```
Contributor:
If this is triggered by shrink() why not do it in shrink but instead rely on new input?

lukecwik (Member Author):
Because we want to make sure that we only produce output from the bundle processing thread and not from an arbitrary thread that caused the shrinking to happen. Added a comment to reflect.
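The contract in this exchange — `shrink()` may run on any thread and only adjusts the budget, while eviction and output stay on the bundle processing thread, which re-reads the budget on the next input — can be sketched roughly as follows. The names are illustrative, not Beam's actual classes, and a deque of plain weights stands in for the real entries:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: shrink() only lowers the AtomicLong budget; put() notices
// the new budget on the next input and performs eviction/output itself, so output
// is only ever produced from the bundle processing thread.
class ShrinkableGroupingTable {
  private final AtomicLong maxWeight;
  private final Deque<Long> entries = new ArrayDeque<>(); // oldest first
  private long weight;  // touched only by the bundle processing thread
  private long flushed; // total weight emitted downstream

  ShrinkableGroupingTable(long initialMaxWeight) {
    this.maxWeight = new AtomicLong(initialMaxWeight);
  }

  /** Safe from any thread: adjusts the budget but never emits output. */
  long shrink() {
    return maxWeight.updateAndGet(w -> w / 2);
  }

  /** Bundle processing thread only: may emit output while honoring the latest budget. */
  void put(long entryWeight) {
    entries.addLast(entryWeight);
    weight += entryWeight;
    // Get the updated budget now that the cache may have been shrunk and respect it.
    long currentMax = maxWeight.get();
    while (weight > currentMax && !entries.isEmpty()) {
      long evicted = entries.removeFirst();
      weight -= evicted;
      flushed += evicted; // stand-in for outputting the evicted entry
    }
  }

  long weight() { return weight; }
  long flushed() { return flushed; }
}
```

A concurrent `shrink()` between two `put` calls lowers the budget immediately, but nothing is flushed until the bundle processing thread next calls `put`.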


```java
table.put("DDDD", 6, receiver);
assertThat(receiver.outputElems, hasItem((Object) KV.of("DDDD", 6L)));
// Insert three values which even with compaction isn't enough so we evict D & E to get
```
Contributor:
s/D & E/A & B/

lukecwik (Member Author):
Done

lukecwik (Member Author):
@y1chi PTAL

```java
  groupingKey,
  (key, tableEntry) -> {
    if (tableEntry == null) {
      weight += groupingKey.getWeight();
```
Contributor:
remove this?

lukecwik (Member Author):
this adds the weight of the key, and not the value

Contributor:
isn't entry.getWeight() = key.getWeight() + accumulator.getWeight()?

lukecwik (Member Author):
There are two cases.

  • key == structural key, then:
    • GroupingTableKey weight = key weight + windows weight + pane info weight
    • GroupingTableEntry weight = reference weight + accumulator weight
  • key != structural key, then:
    • GroupingTableKey weight = structural key weight + windows weight + pane info weight
    • GroupingTableEntry weight = key weight + accumulator weight

y1chi (Contributor) left a comment:
LGTM

youngoli (Contributor) left a comment:
As far as I can tell it looks good, although I had some trouble following all the various weights involved so I'm glad Yichi's here to provide a second set of eyes.

```java
Iterator<GroupingTableEntry> iterator = lruMap.values().iterator();
while (iterator.hasNext()) {
  GroupingTableEntry valueToFlush = iterator.next();
  weight -= valueToFlush.getWeight() + valueToFlush.getGroupingKey().getWeight();
```
Contributor:
I'm having some trouble following all the different weights, and my first instinct is that since valueToFlush contains the GroupingKey, that this would count the weight of the grouping key twice (and presumably this would be bad because it wasn't counted twice when being originally added to the max weight).
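One way to see why this is not double counting: the put path charges key weight plus entry weight exactly once per entry, so the flush loop must refund both to restore the invariant that an empty table has zero weight. A toy model of that symmetry (hypothetical names, with `String.length()` as a stand-in weight):

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of the accounting under discussion: put() charges key weight + entry
// weight once per entry, and the flush loop refunds both, so draining the table
// returns the total weight to zero rather than going negative.
class FlushAccounting {
  private final LinkedHashMap<String, Long> lruMap = new LinkedHashMap<>();
  private long weight;

  void put(String groupingKey, long entryWeight) {
    if (lruMap.putIfAbsent(groupingKey, entryWeight) == null) {
      weight += groupingKey.length() + entryWeight; // key weight counted once, here
    }
  }

  void flushAll() {
    Iterator<Map.Entry<String, Long>> iterator = lruMap.entrySet().iterator();
    while (iterator.hasNext()) {
      Map.Entry<String, Long> valueToFlush = iterator.next();
      // Mirrors the line under discussion: entry weight plus grouping key weight.
      weight -= valueToFlush.getValue() + valueToFlush.getKey().length();
      iterator.remove();
    }
  }

  long weight() { return weight; }
}
```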

lukecwik (Member Author):
Run Java PreCommit

@lukecwik lukecwik merged commit 5b81d14 into apache:master May 16, 2022
robertwb (Contributor):
I happened to do some benchmarking for a separate change (#17641) and noticed that this PR seems to reduce the performance significantly. Before (https://github.com/robertwb/incubator-beam/tree/java-combine-key-old) I was getting stats

```
33,102 ±(99.9%) 1,173 ops/s [Average]
(min, avg, max) = (32,761, 33,102, 33,492), stdev = 0,305
CI (99.9%): [31,929, 34,275] (assumes normal distribution)

24,809 ±(99.9%) 0,861 ops/s [Average]
(min, avg, max) = (24,521, 24,809, 25,083), stdev = 0,224
CI (99.9%): [23,948, 25,670] (assumes normal distribution)
```

(two benchmarks here: globally windowed and not) but after merging this change I'm seeing

```
Result "org.apache.beam.fn.harness.jmh.CombinerTableBenchmark.uniformDistribution":
  4,949 ±(99.9%) 0,349 ops/s [Average]
  (min, avg, max) = (4,832, 4,949, 5,059), stdev = 0,091
  CI (99.9%): [4,601, 5,298] (assumes normal distribution)

Result "org.apache.beam.fn.harness.jmh.CombinerTableBenchmark.uniformDistribution":
  3,855 ±(99.9%) 0,304 ops/s [Average]
  (min, avg, max) = (3,735, 3,855, 3,930), stdev = 0,079
  CI (99.9%): [3,551, 4,159] (assumes normal distribution)
```

robertwb (Contributor):
I should note that before either change I was getting on the order of 15k ops/sec.
