core(lantern): improve RTT estimates #4552
Conversation
ignores observed connectionId from Chrome when simulating and creates its own set of connections instead
In the non-LR case, we have the original data. So can we see how well our simulation did against observed connectionIds?
as an aside, think now's a good time to get type-checking enabled for lib/dependency-graph?
});

const rttSummaries = Array.from(NetworkAnalyzer.estimateRTTByOrigin(records).entries());
const rttByOrigin = new Map(rttSummaries.map(item => [item[0], item[1].min]));
can we destructure `item` here and on 93/97? would help keep track of what each obj is.
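The destructured form the reviewer is asking for might look like this (a sketch; the `[origin, summary]` entry shape and the sample values are assumed from the surrounding snippet, not taken from the actual PR):

```javascript
// Sketch of the suggested destructuring. The entry shape —
// [origin, summary] pairs where summary has a numeric `min` —
// is assumed from the snippet above; the data here is made up.
const rttSummaries = [
  ['https://example.com', {min: 40, avg: 55}],
  ['https://cdn.example.com', {min: 25, avg: 30}],
];

// Destructuring in the map callback makes it obvious which element
// is the origin and which is the summary object.
const rttByOrigin = new Map(
  rttSummaries.map(([origin, summary]) => [origin, summary.min])
);

console.log(rttByOrigin.get('https://example.com')); // → 40
```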
yeah I cleaned this whole function up
const rttSummaries = Array.from(NetworkAnalyzer.estimateRTTByOrigin(records).entries());
const rttByOrigin = new Map(rttSummaries.map(item => [item[0], item[1].min]));
const responseTimeSummaries = NetworkAnalyzer.estimateServerResponseTimeByOrigin(records, {
you can move this down to before line 96 and then structurally we have the RTT and response time guys as the two chunks here. maybe a comment or two, too?
yeah I cleaned this whole function up
 * @param {!Node} dependencyGraph
 * @return {!Object}
 */
static computeOptions(dependencyGraph) {
i appreciate the thought you put into the names in this file, but i think we can upgrade this one a bit. `computeRTTandSR`?
sg done
-const estimate = new LoadSimulator(graphs[key]).simulate();
-const lastLongTaskEnd = PredictivePerf.getLastLongTaskEndTime(estimate.nodeTiming);
+const estimate = new LoadSimulator(graphs[key], options).simulate();
+const longTaskThreshold = /optimistic/.test(key) ? 100 : 50;
startsWith?
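The suggested `startsWith` form would read roughly as below (a sketch; the key names are assumed to look like `optimisticFMP`/`pessimisticFMP`, which the PR doesn't show here). `startsWith` also anchors the match to the beginning of the key, whereas the regex matches anywhere in the string:

```javascript
// Comparing the regex test against the suggested startsWith form.
// The key names are illustrative assumptions.
const keys = ['optimisticFMP', 'pessimisticFMP'];

for (const key of keys) {
  const viaRegex = /optimistic/.test(key) ? 100 : 50;        // matches anywhere
  const viaStartsWith = key.startsWith('optimistic') ? 100 : 50; // anchored at start
  console.log(key, viaRegex, viaStartsWith); // both agree for these keys
}
```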
done
if (node.type === Node.TYPES.NETWORK) records.push(node.record);
});

const rttSummaries = Array.from(NetworkAnalyzer.estimateRTTByOrigin(records).entries());
later on you use this pattern, which i think is a lot more readable:

for (const [origin, summary] of rttByOrigin.entries()) {
  rttByOrigin.set(origin, summary.min);
}

does take more lines, but worth it IMO
yeah cleaned this whole thing up
yes there's even a test for that ;) (lighthouse-core/test/lib/dependency-graph/simulator/network-analyzer-test.js, lines 147 to 156 in 23e6778)
during simulation though this is a slightly different concern: essentially, the connection IDs that were used during the fast load aren't necessarily the ones you want to force when simulating a very small subset. i.e. we happened to pick 3 assets that all shared the same connection ID when loading alongside 100 other resources, but they would've obviously been put onto different connections when loaded alone. In this sense the simulator becomes a little more rational/browser-like.
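The behavior described above — discarding observed connection IDs and handing out fresh per-origin simulated connections — could be sketched roughly like this (the function name, record shape, and per-origin cap are all illustrative assumptions, not the simulator's actual code):

```javascript
// Rough sketch: assign each network record a fresh simulated connection,
// round-robin across up to `maxConnectionsPerOrigin` connections per origin,
// ignoring whatever connectionId Chrome observed during the real load.
// All names and shapes here are hypothetical.
function assignSimulatedConnections(records, maxConnectionsPerOrigin = 6) {
  const countsByOrigin = new Map();
  return records.map(record => {
    const origin = new URL(record.url).origin;
    const n = countsByOrigin.get(origin) || 0;
    countsByOrigin.set(origin, n + 1);
    return {...record, simulatedConnectionId: `${origin}#${n % maxConnectionsPerOrigin}`};
  });
}

// Three assets that may have shared one real connection in a busy load
// get their own simulated connections when simulated alone.
const records = [
  {url: 'https://example.com/a.js'},
  {url: 'https://example.com/b.js'},
  {url: 'https://cdn.example.com/c.css'},
];
const assigned = assignSimulatedConnections(records);
console.log(assigned.map(r => r.simulatedConnectionId));
// → ['https://example.com#0', 'https://example.com#1', 'https://cdn.example.com#0']
```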
yes, but the error list was ~200 lines long so I'll save that for a followup :)
nice cleanup! thxx
nice nice. looked at the
the gap between the result and approx is pretty big for all of these. I would have expected <= 20% difference. i'm not sure how we should quantify whether these are reasonable approximations; what do you think?
~10ms gap is about as good as I could possibly hope for given we're essentially trying to blindly guess what % of a ~100ms request was due to RTT and what was server response time. it was more impressive when run on traces that actually had some latency (40 vs. 50 isn't such a big deal), but all our existing ones were pretty slim. want me to add some beefy ones to fixtures from the throttled dataset? also note that it will skew toward overestimating crazy-low RTT (~5ms is like the top 0.3 percentile 😛) and underestimating crazy-high RTT, since the penalty for going big gets really huge.
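A toy version of that blind decomposition illustrates why the skew goes the way described (the function name and record shape are made up for illustration; this is not the NetworkAnalyzer API). Treat the smallest observed time-to-first-byte for an origin as the RTT floor, and attribute each request's remaining TTFB to server response time:

```javascript
// Toy decomposition: per origin, the minimum TTFB is taken as the RTT
// estimate; the rest of each request's TTFB is attributed to the server.
// Record shape is hypothetical. Because the min TTFB can never drop below
// the server's own response time, a truly tiny RTT gets overestimated,
// while a huge RTT gets underestimated.
function decomposeByOrigin(records) {
  const minTtfbByOrigin = new Map();
  for (const {origin, ttfb} of records) {
    const prev = minTtfbByOrigin.get(origin);
    minTtfbByOrigin.set(origin, prev === undefined ? ttfb : Math.min(prev, ttfb));
  }

  return records.map(({origin, ttfb}) => {
    const rttEstimate = minTtfbByOrigin.get(origin);
    return {origin, rttEstimate, serverResponseTime: ttfb - rttEstimate};
  });
}

// With ~100ms TTFBs, many RTT/server-time splits fit the data equally
// well, so a ~10ms error on either component is about the best possible.
const records = [
  {origin: 'https://example.com', ttfb: 95},
  {origin: 'https://example.com', ttfb: 110},
];
console.log(decomposeByOrigin(records));
```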