test: relax chunk count expectations
In parallel/test-fs-read-stream-concurrent-reads.js the number
of data chunks produced is tested while a few concurrent reads
are performed. The chunk count can fluctuate with the number of
concurrent reads as well as with how much data each read returns
in one shot. Accommodate these variations in the test.

Fixes: #22339

PR-URL: #25415
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Anna Henningsen <anna@addaleax.net>
Reviewed-By: Luigi Pinca <luigipinca@gmail.com>
gireeshpunathil committed Jan 20, 2019
1 parent c1ac578 commit cc26957
Showing 1 changed file with 4 additions and 4 deletions.
test/parallel/test-fs-read-stream-concurrent-reads.js
@@ -13,7 +13,7 @@ const fs = require('fs');
 const filename = fixtures.path('loop.js'); // Some small non-homogeneous file.
 const content = fs.readFileSync(filename);
 
-const N = 1000;
+const N = 2000;
 let started = 0;
 let done = 0;

@@ -26,10 +26,10 @@ function startRead() {
     .on('data', (chunk) => {
       chunks.push(chunk);
       arrayBuffers.add(chunk.buffer);
-      if (started < N)
-        startRead();
     })
     .on('end', common.mustCall(() => {
+      if (started < N)
+        startRead();
       assert.deepStrictEqual(Buffer.concat(chunks), content);
       if (++done === N) {
         const retainedMemory =
@@ -43,5 +43,5 @@ function startRead() {
 
 // Don’t start the reads all at once – that way we would have to allocate
 // a large amount of memory upfront.
-for (let i = 0; i < 4; ++i)
+for (let i = 0; i < 6; ++i)
   startRead();
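
With this change each stream chains its successor from the 'end' handler instead of from 'data', so the test keeps only the six streams started by the loop above in flight until all N reads have run. A minimal standalone sketch of that chaining pattern, assuming an arbitrary small input file (TOTAL, CONCURRENCY and the use of __filename are illustrative placeholders, not part of the test):

'use strict';
const fs = require('fs');

const TOTAL = 2000;    // total number of reads to perform
const CONCURRENCY = 6; // streams kept in flight at any one time
let started = 0;

function startRead() {
  ++started;
  const chunks = [];
  fs.createReadStream(__filename) // illustrative input; any small file works
    .on('data', (chunk) => chunks.push(chunk))
    .on('end', () => {
      // Chain the next read only once this stream has fully drained, so at
      // most CONCURRENCY streams are ever active at the same time.
      if (started < TOTAL)
        startRead();
    });
}

// Seed the pipeline; each finished stream starts its successor.
for (let i = 0; i < CONCURRENCY; ++i)
  startRead();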
