
Performance of 'http' + anonymous functions eventually slows down #3659

Closed
SynerG opened this issue Nov 4, 2015 · 52 comments
Labels
  • help wanted: Issues that need assistance from volunteers or PRs that need help to proceed.
  • http: Issues or PRs related to the http subsystem.
  • performance: Issues and PRs related to the performance of Node.js.
  • v8 engine: Issues and PRs related to the V8 dependency.


SynerG commented Nov 4, 2015

When using anonymous functions (in IIFEs or Promises) to prepare the response of an HTTP request, the performance of the server suddenly slows down. Performance remains low until the server is restarted.

I initially found the problem using Promises. A script to replicate the problem:

var http = require('http');

var srv = http.createServer(function (req, res) {
    promiseValue().then(function(value){
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end(value);
    });
});

function promiseValue() {
    return Promise.resolve().then(function() {
        var j = 0;
        for (var i = 0, l = 300000; i < l; i++) { j = j + 2; }
        return "Hi " + j;
    });
}

srv.listen(3000, '127.0.0.1', function() {
    "use strict";
    console.log("Server listening on 127.0.0.1:3000");
});

But Promises are not required:

var http = require('http');
var srv = http.createServer(function (req, res) {
    callbackValue(function(value){
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end(value);
    });
});

function callbackValue(cb) {
    (function() {
        var j = 0;
        for (var i = 0, l = 300000; i < l; i++) {
            j = j + 2;
        }
        cb("Hi " + j);
    })();
}

srv.listen(3000, '127.0.0.1', function() {
    "use strict";
    console.log("Server listening on 127.0.0.1:3000");
});

Using an HTTP benchmarking tool to test the server, e.g. wrk:

#! /bin/bash
while :
do
  wrk -t2 -c32 -d5s "http://127.0.0.1:3000" | grep "Requests"
done

The server starts out providing good performance, but at an unpredictable point in time the performance drops:

Requests/sec:   2389.40
Requests/sec:   2394.00
Requests/sec:   2396.53
Requests/sec:   1071.95
Requests/sec:    890.80
Requests/sec:    889.23
Requests/sec:    893.15

Once it drops, it doesn't matter if you stop the load-testing tool and let the server recover for a while; performance remains low until the server is restarted.

Is it a bug or am I doing something wrong?
If it is a bug, I don't know whether it belongs to Node.js or V8, although I have been unable to reproduce the problem with Promises/IIFEs alone, without the 'http' module.

Some additional details I have collected while hunting down the issue, which may or may not be of interest for solving the problem:

  • It can be reproduced in all versions of Node.js, at least on OS X. In particular, using Promises, I have been able to reproduce it in: v5.0.0, v4.2.1, io.js v1.0.0, even v0.11.13 (the first version that introduced Promise). Earlier versions (v0.10.x, <v0.11.13) didn't have native Promise support; I have tested them using a 3rd-party library (Bluebird) and performance matches the low level right from start-up.
  • Performance drops after an unpredictable amount of time or number of requests. Following the same steps, I have seen the drop occur over a wide range of times (e.g. 15 seconds - 30 minutes).
  • The anonymous function used in the .then(function(){...}) of promiseValue() is the main factor in the performance drop.
  • According to its performance, during the 1st phase the function seems to be optimized and during the 2nd phase it seems to become deoptimized (see comment).
    • If the anonymous function is refactored as a named function, performance matches the high level and does NOT drop over time (see the sketch after this list).
    • If an instruction that prevents optimization (e.g. try {} catch (e) {}) is added to the anonymous function, performance matches the low level and doesn't vary over time.
  • According to v8, the function is never optimized (I can share the snippet I used to test this if required; basically, it follows @petkaantonov's tip). (EDIT: according to v8 as well, the function starts out optimized and, at some point, becomes deoptimized; see comment.)

(I can also provide snippets for these points)
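For reference, a minimal sketch of the named-function refactor mentioned in the list above, applied to the callback variant (the helper name computeValue is made up for illustration):

// Hoisting the loop body into a named, top-level function; with this shape
// the high performance level is sustained and does not drop over time.
function computeValue() {
    var j = 0;
    for (var i = 0, l = 300000; i < l; i++) { j = j + 2; }
    return "Hi " + j;
}

function callbackValue(cb) {
    cb(computeValue());
}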

EDIT:

@SynerG changed the title from "Performance of 'http' + anonymous functions eventually slowdown" to "Performance of 'http' + anonymous functions eventually slows down" Nov 4, 2015
@alexlamsl

Might not be related, but can you run netstat on the command line and see if you get a lot of stale TCP connections over the course of the test?

@YurySolovyov

@alexlamsl but that would not explain why it performs ok without IIFE or promises.


ChALkeR commented Nov 4, 2015

@SynerG Could you also log server memory usage?

@SynerG Also, --trace-gc.

https://code.google.com/p/v8/issues/detail?id=4121 could be related.
Edit: if 0.11.13 is affected, then it shouldn't be related.

Ok, I ran it with --trace-gc, and it's not the culprit. Memory usage stays at ~50 MiB, gc times are low.
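For anyone reproducing this, the flag goes straight on the node command line (a usage sketch, assuming the test server is saved as server.js):

node --trace-gc server.js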


SynerG commented Nov 4, 2015

@YurySolovyov, I think you could also reproduce the issue on your machine, but you will probably need to run the script for a longer period of time.
I have just tried it in a Linux VM and in a DigitalOcean droplet, and both have eventually degraded, but it required several minutes of the server + script running.
From the DO VM:

> node -v
v4.2.1
> uname -a
Linux testingDroplet 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1 (2015-05-24) x86_64 GNU/Linux

@alexlamsl, I have run netstat -an in the Linux VMs after performance has degraded (stopping the load-testing tool, but leaving the server open) and it doesn't seem to leave open connections. I check the state after stopping wrk to see whether any connections are left, because when I run wrk a second time, performance is already degraded from the beginning.

@ChALkeR, regarding memory usage:
GC shows lines like these:

Initially:
[2321]    12224 ms: Scavenge 6.3 (44.0) -> 5.7 (44.0) MB, 3.0 ms [allocation failure].
[2321]    12356 ms: Scavenge 7.1 (44.0) -> 6.2 (45.0) MB, 2.0 ms [allocation failure].
[2321]    12787 ms: Scavenge 10.5 (45.0) -> 6.8 (45.0) MB, 0.8 ms [allocation failure].
...
After a while (still good performance):
[2321]   444784 ms: Scavenge 47.7 (76.0) -> 32.3 (76.0) MB, 2.8 ms [allocation failure].
....
Performance degraded:
[2321]   719454 ms: Scavenge 47.1 (90.0) -> 31.8 (90.0) MB, 2.0 ms [allocation failure].
[2321]   723029 ms: Scavenge 47.3 (90.0) -> 32.0 (90.0) MB, 2.0 ms [allocation failure].
[2321]   726270 ms: Scavenge 47.5 (90.0) -> 32.2 (90.0) MB, 1.5 ms [allocation failure].

Also, the state of node process according to top:

  • After a while:
VIRT=669824 RES=87292   SHR=12908   %MEM=17,2%
  • Performance degraded:
VIRT=684160 RES=101252  SHR=12908   %MEM=20% 102000 free

So it looks like the implementation creates a memory leak, although the VM still has free memory even after performance has degraded.

@YurySolovyov

I wonder if you can build node with this and try again, just in case.


SynerG commented Nov 4, 2015

@YurySolovyov, I am not familiar with the process for rebuilding Node.js with a different version of v8.
I guess I have to:

  • Clone Nodejs repo
  • Clone v8 repo
  • Replace node/deps/v8 directory with the /v8 folder from v8 repo
  • Build Nodejs

Is this the correct process?


targos commented Nov 4, 2015

@SynerG It is not the easiest process. First you can try to directly apply the patch.

Download it: https://gist.github.com/targos/b057bbaf1fd1589d4dba
And from the nodejs repo, run git apply 4dd45e115be0122ee8b6ba1d9b3be5e86ea3b97c.diff --directory=deps/v8

If you are lucky, it will apply cleanly
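In full, the sequence would look roughly like this (a sketch; it assumes the .diff from the gist has already been downloaded into the root of a nodejs/node checkout):

git apply 4dd45e115be0122ee8b6ba1d9b3be5e86ea3b97c.diff --directory=deps/v8
./configure
make -j4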


kzc commented Nov 4, 2015

Since the problem is 100% reproducible given the excellent test case @SynerG provided, any contributor could test out any patch they wish.

@evanlucas
Contributor

That patch does actually seem to prevent the large drop, but the average seems to be a bit lower right off the bat.

on the master branch:

Before

Requests/sec:   2917.59
Requests/sec:   2951.20
Requests/sec:   2962.39
Requests/sec:   2993.20
Requests/sec:   3016.23
Requests/sec:   3031.21
Requests/sec:   2993.41
Requests/sec:   2993.89
Requests/sec:   2998.26
Requests/sec:   3003.57
Requests/sec:   3026.83
Requests/sec:   2972.55
Requests/sec:   2806.44
Requests/sec:   2882.28
Requests/sec:   2958.43
Requests/sec:   3001.17
Requests/sec:   3005.86
Requests/sec:   2973.36
Requests/sec:   2988.07
Requests/sec:   1257.81
Requests/sec:   1061.08
Requests/sec:   1059.37
Requests/sec:   1088.27

After

Requests/sec:   2690.31
Requests/sec:   2982.13
Requests/sec:   2640.89
Requests/sec:   2544.31
Requests/sec:   2692.97
Requests/sec:   2657.87
Requests/sec:   2724.81
Requests/sec:   2544.41
Requests/sec:   2601.36
Requests/sec:   2487.85
Requests/sec:   2893.20
Requests/sec:   2488.91
Requests/sec:   2363.16
Requests/sec:   2324.68
Requests/sec:   2358.04
Requests/sec:   2584.03
Requests/sec:   2469.70
Requests/sec:   2436.66
Requests/sec:   2389.08
Requests/sec:   2378.98
Requests/sec:   2388.63
Requests/sec:   2494.49
Requests/sec:   2459.86
Requests/sec:   2651.36
Requests/sec:   2321.81
Requests/sec:   2400.41
Requests/sec:   2207.65
Requests/sec:   2502.09
Requests/sec:   2691.86
Requests/sec:   2454.27
Requests/sec:   2350.09
Requests/sec:   2759.47
Requests/sec:   2730.08
Requests/sec:   2380.81
Requests/sec:   2472.89
Requests/sec:   2900.46
Requests/sec:   2660.17
Requests/sec:   2462.02
Requests/sec:   2672.50
Requests/sec:   2595.16
Requests/sec:   2520.38
Requests/sec:   2562.00
Requests/sec:   2355.87

@YurySolovyov

@evanlucas I was blinded by this too; try running it for at least 5-10 mins.


SynerG commented Nov 4, 2015

@targos thanks for the tip. I was already rebuilding node the hard way when I read your comment.

I have rebuilt node with the latest version of v8 in the repo. It should already contain the desired commit, as there is a path between the two according to git:

git log 4dd45e115be0122ee8b6ba1d9b3be5e86ea3b97c..master --ancestry-path

On my OS X development machine, the patch doesn't solve the problem:

Requests/sec:   2422.61
Requests/sec:   2428.32
Requests/sec:   1996.10
Requests/sec:    892.94
Requests/sec:    893.19
Requests/sec:    893.40

@evanlucas could you keep running the test for longer? It might still eventually reach the point where performance drops.


kzc commented Nov 4, 2015

Another data point: I tried out the (no-Promise) test case on Mac with node v5.0.0 and got the same throughput drop-off numbers as @SynerG at the top of the ticket: 2400 requests/sec => 900 requests/sec after 5 minutes.

It can be reproduced in all versions of Nodejs, at least in OSX

Then, for the heck of it, I tried running the test case on jxcore, a node port using the Mozilla SpiderMonkey JS engine rather than v8:

$ jx -v
v0.10.40
$ jx -jxv
v Beta-0.3.0.4
$ jx -jsv
Mozilla SpiderMonkey v34

After an hour I did not see any drop off:

09:55:21 Requests/sec:   4508.56
09:55:26 Requests/sec:   4461.28
09:55:31 Requests/sec:   4478.79
...
10:22:28 Requests/sec:   4882.16
10:22:33 Requests/sec:   4864.88
10:22:38 Requests/sec:   4873.53
...
10:49:54 Requests/sec:   4452.07
10:49:59 Requests/sec:   4470.95
10:50:04 Requests/sec:   4868.22
...
10:59:09 Requests/sec:   4787.37
10:59:14 Requests/sec:   4786.83
10:59:19 Requests/sec:   4816.61

which suggests it may be a v8 issue.

@mscdex added the http label Nov 4, 2015

SynerG commented Nov 4, 2015

@kzc good point using jxcore. Do they only replace the JS engine or could they have also reimplemented node features such as the http module?

It may be a v8 issue, but I have been unable to reproduce the problem using only v8 features.
I have tried to remove the http module and replace it with continuous calls to callbackValue(), but performance doesn't degrade over time.

For example, using the following script:

// An attempt to replace http.createServer and trigger continuous calls
(function triggerCall(){
    callbackValue(function(value){
        statistics();
        process.nextTick(triggerCall);
    });
})();

// Same function as before
function callbackValue(cb) {
    (function() {
        var j = 0;
        for (var i = 0, l = 300000; i < l; i++) {
            j = j + 2;
        }
        cb("Hi " + j);
    })();
}

// Print basic statistics periodically
var startTime = (new Date()).getTime();
var lastPrintTime = startTime;
var MIN_ELAPSED_TIME = 5000;
var currentTime, elapsedTime;
var callsCounter = 0;

function statistics(){
    callsCounter++;
    currentTime = (new Date()).getTime();
    elapsedTime = currentTime-lastPrintTime;
    if ( elapsedTime >= MIN_ELAPSED_TIME){
        console.log( callsCounter/(MIN_ELAPSED_TIME/1000)+" calls/sec\t\t "+ (currentTime-startTime)/1000 +" seconds up");
        callsCounter=0;
        lastPrintTime=currentTime;
    }
}

The snippet doesn't use http, but performance remains high even after almost 2 hours running:

2870.8 calls/sec         6750.002 seconds up
2889.6 calls/sec         6755.002 seconds up
2880.4 calls/sec         6760.002 seconds up

This alternative to replace http also fails to reproduce the issue (i.e. performance remains high after an hour):

function mockServerF(fn){
    fn();
}

var mockServer = mockServerF.bind(null, function(){
    callbackValue(function(value){
        statistics();
        process.nextTick(mockServer);
    });
});

mockServer();

2868.6 calls/sec         4535 seconds up
2891.6 calls/sec         4540 seconds up
2871.8 calls/sec         4545 seconds up

If we were able to come up with a snippet that triggered the problem without http, using only v8 features, we could run the script in Chrome/Chromium, without Node.js, and reproduce the issue. Then we could determine that the problem belongs to v8.

Is somebody able to come up with a good script for this?


kzc commented Nov 4, 2015

good point using jxcore. Do they only replace the JS engine or could they have also reimplemented node features such as the http module?

@obastemur would be the one to answer that question.

@obastemur
Contributor

good point using jxcore. Do they only replace the JS engine or could they have also reimplemented node features such as the http module?
@obastemur would be the one to answer that question.

http is one of the most updated modules in jxcore. The changes are mostly about reducing the number of C++/JS calls per request.

However, I have tested the same scenario using jxcore on v8 3.14 / 3.28 and couldn't reproduce the problem after approx. 40-50 mins of testing.


SynerG commented Nov 5, 2015

Taking @kzc's and @obastemur's tests together, it looks like the problem is not caused by v8 alone; the http module in Node.js is involved, at least in triggering the degradation of performance.

My guess is that v8 initially decides to use the optimizing compiler for the anonymous function inside callbackValue()/promiseValue(), but, over time, some contextual parameters are affected by the current implementation of Node.js (the http module?) to the point where v8 decides to start using the generic compiler for that function, and therefore performance degrades.
I am not familiar with the parameters v8 considers when deciding between the generic and optimizing compilers.

This is only a theory, but it is based on the following observed facts:

  • Performance does not degrade gradually, but we find only 2 performance levels (let's call them 'high' and 'low').
  • High performance is similar to the performance when the target function is optimized
  • Low performance is similar to the performance when the target function is not optimized
  • The same script using v8 + jxcore's http doesn't trigger the low performance level
  • Similar scripts using only v8 don't trigger the low performance level

@YurySolovyov

@SynerG I wonder how the number of iterations in the for loop affects this; maybe at some point V8 decides that it is growing too large, but that would be weird, since it should be destroyed anyway after the IIFE call.


SynerG commented Nov 5, 2015

Let's show the performance values using optimizable and non-optimizable versions of the function, as well as the performance values with the anonymous function during the 1st phase (high performance) and the 2nd phase (low performance), for different values of l (the number of iterations).
These tests support the previous list of observed facts. (@YurySolovyov, maybe this is useful for your comment too.)

This is the optimizable variant:

function fNamedHighPerformance() {
    var j = 0;
    for (var i = 0, l = 30000; i < l; i++) { j = j + 2; }
    return "Hi " + j;
}

function promiseValue() {
    return Promise.resolve().then(fNamedHighPerformance);
}

This is the non-optimizable variant:

function fNamedLowPerformance() {
    try { } catch (e) { }
    var j = 0;
    for (var i = 0, l = 30000; i < l; i++) { j = j + 2; }
    return "Hi " + j;
}

function promiseValue() {
    return Promise.resolve().then(fNamedLowPerformance);
}

And the results:

Num. Iter    Opt      Non-Opt   1st phase   2nd phase
30000        15692    7250      10455       7157
300000       2807     908       2546        921
3000000      305      89        304         88
30000000     25       3         25          3

Legend:

  • Num. Iter: Number of iterations (value of l)
  • Opt: Requests/sec using fNamedHighPerformance(). Performance does not degrade
  • Non-Opt: Requests/sec using fNamedLowPerformance(). Degraded performance all the time
  • 1st phase: Requests/sec using anonymous function during 1st phase (before performance degrades)
  • 2nd phase: Requests/sec using anonymous function during 2nd phase (after performance degrades)

Almost the same performance numbers are obtained using the optimizable function as during the 1st phase, and using the non-optimizable function as during the 2nd phase, for any chosen number of iterations.

(Except for l = 30000, where the optimizable function provides substantially higher performance, but at that point it could be a matter of named vs inline-anonymous function performance rather than optimized vs non-optimized.)

At some point, v8 seems to stop optimizing the function (for any value of l), although it only does so under Node.js, not jxcore.

@Fishrock123
Contributor

cc @trevnorris maybe?


SynerG commented Nov 5, 2015

OK, no more inferring an optimized/non-optimized state; v8 has confessed :)
(I don't know why my previous implementation didn't answer the question, but this one works.)

First, the implementation to check the optimization state:

var http = require('http');

// Nothing has changed here
var srv = http.createServer(function (req, res) {
    promiseValue().then(function(value){
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end(value);
    });
});

srv.listen(3000, '127.0.0.1', function() {
    "use strict";
    console.log("Server listening on 127.0.0.1:3000");
});

// Improved: query the optimization status of the internal function (requires --allow-natives-syntax)
function promiseValue() {
    return Promise.resolve().then(function fNamed() {
            var j = 0;
            for (var i = 0, l = 300000; i < l; i++) { j = j + 2; }
            printStatus(%GetOptimizationStatus(fNamed));
            return "Hi " + j;
        }
    );
}

// Print the optimization status
var lastPrintTime = Date.now();
var MIN_ELAPSEDTIME = 2000;

function printStatus(s){
    if ((Date.now()-lastPrintTime)>= MIN_ELAPSEDTIME){
        console.log(verbalizeStatus(s));
        lastPrintTime = Date.now();
    }
}

function verbalizeStatus(s) {
    var state = "UNKNOWN";
    switch (s) {
    case 1:
        state = "OPTIMIZED";
        break;
    case 2:
        state = "NOT optimized";
        break;
    case 3:
        state = "ALWAYS optimized";
        break;
    case 4:
        state = "NEVER optimized";
        break;
    case 6:
        state = "MAYBE deoptimized";
        break;
    }
        return "function is "+state;
}

And the results. When this happens in the wrk window:

Requests/sec:   2378.30
Requests/sec:   2354.38
Requests/sec:   2361.94
Requests/sec:   1185.86
Requests/sec:    880.09
Requests/sec:    889.08
Requests/sec:    892.55

This happens in the server window:

function is OPTIMIZED
function is OPTIMIZED
function is OPTIMIZED
function is NOT optimized
function is NOT optimized
function is NOT optimized

So v8 stops optimizing the function at some point when running under Node.js.

At this point, some relevant questions:

  • What parameters vary during execution time that cause v8 to change from the optimizing compiler to the generic one?
  • Should nodejs fix something (probably the http module) to prevent this behaviour?
  • Should v8 reconsider its optimization strategies to prevent this behaviour?
  • Should developers protect their code from triggering this behaviour somehow?

I would also like to cc @petkaantonov, who has done awesome work with Bluebird keeping it within v8 optimization boundaries, to get his input on which factors may be provoking v8 to deoptimize the function and how this could be prevented.

@ChALkeR added the v8 engine label Nov 5, 2015

kzc commented Nov 5, 2015

The key factor may be the relative size and complexity of the closure.

If the innermost anonymous function calls a regular function for the computation, then the throughput does not degrade on node 5.0.0 after an hour. Throughput remains relatively constant at around 2,300 requests/second. Keep in mind that V8 may have optimized away the for loop altogether.

var http = require('http');

var srv = http.createServer(function (req, res) {
    promiseValue().then(function(value){
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end(value);
    });
});

function promiseValue() {
    return Promise.resolve().then(function() {
        var j = compute(0);
        return "Hi " + j;
    });
}

function compute(j) {
    for (var i = 0, l = 300000; i < l; i++) { j = j + 2; }
    return j;
}

srv.listen(3000, '127.0.0.1', function() {
    "use strict";
    console.log("Server listening on 127.0.0.1:3000");
});


SynerG commented Nov 5, 2015

@kzc the reason could indeed be related to size and complexity. The function does not use a high number of variables (which could complicate the closure), nor does it have high cyclomatic complexity, but we generate a huge number of calls to the function (although I don't know if Node.js/v8 needs to keep anything from the closures of independent, completed HTTP calls), and each call requires a high number of CPU instructions. There are probably other complexity factors that v8 could be considering and that could build up over time.

Your proposal shows a pattern that could be used to minimize the impact of unexpected deoptimization. In practice, it refactors the instructions that require the most computation into an external named function that, as tested previously, does not get unpredictably deoptimized.

However, the internal anonymous function becomes so light that a sudden change in the optimization strategy from optimized to deoptimized would hardly be visible.

Here is a modified version that queries v8 directly about its optimization status:

function promiseValue() {
    return Promise.resolve().then(function fNamed() {
        var j = compute(0);
        printStatus(%GetOptimizationStatus(fNamed));
        return "Hi " + j;
    });
}

When invoked with node:

> node --allow-natives-syntax server.js

We get this from the very beginning:

function is NOT optimized
function is NOT optimized
function is NOT optimized

So, the function becomes so simple that v8 directly decides not to optimize it!

Testing different values of l in the one-line loop of the original implementation shows that optimization does not kick in for a low number of iterations (e.g. l < 5000), flip-flops between optimized/non-optimized for a medium number (e.g. 5000 < l < 15000), and kicks in for high values (e.g. l > 15000). (Ranges are not exact.)

In the cases where the function is never optimized, or flip-flops between optimized/non-optimized from the beginning, we would not see a sudden degradation of performance at an unexpected point in the future.

But why does an originally optimized (complex-enough) function become non-optimized some time later? (It is probably related to complexity/closure factors.)
Why doesn't it recover its optimized state after the server has been idle for a while (and recovery mechanisms such as garbage collection have had time to kick in)?
Why does it only happen in Node.js?


ChALkeR commented Nov 6, 2015

@SynerG Once again: could this be reproduced with %GetOptimizationStatus and without Promises?


SynerG commented Nov 6, 2015

@ChALkeR, yes, it can be reproduced. Including %GetOptimizationStatus in callbackValue:

function callbackValue(cb) {
    (function internalFunction() {
        var j = 0;
        for (var i = 0, l = 300000; i < l; i++) { j = j + 2;}
        printStatus(%GetOptimizationStatus(internalFunction));
        cb("Hi " + j);
    })();
}

After 115 seconds of running (in this case), I saw this:

Wrk                         Server
Requests/sec: 2437.33       function is OPTIMIZED
Requests/sec: 2423.31       function is OPTIMIZED
Requests/sec: 2420.90       function is OPTIMIZED
Requests/sec: 1644.15       function is NOT optimized
Requests/sec:  886.03       function is NOT optimized
Requests/sec:  895.30       function is NOT optimized
Requests/sec:  898.59       function is NOT optimized

@YurySolovyov

It is sad that V8 has %GetOptimizationStatus, but no %GetDeoptimizationReason or something...
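The closest substitute is probably the --trace-deopt V8 flag, which at least logs a textual reason for each deoptimization (a usage sketch, assuming the test server is saved as server.js):

node --trace-deopt server.js 2>&1 | grep -i deopt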

@YurySolovyov

Looks like it does not regress if you flatten the callbacks:

var http = require('http');

var callbackValue = function(cb, res) {
    (function() {
        var j = 0;
        for (var i = 0, l = 300000; i < l; i++) {
            j = j + 2;
        }
        cb("Hi " + j, res);
    })();
};

var responseWriter = function(value, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end(value);
};

var reqHandler = function (req, res) {
    callbackValue(responseWriter, res);
};

var srv = http.createServer(reqHandler);

srv.listen(3000, '127.0.0.1', function() {
    "use strict";
    console.log("Server listening on 127.0.0.1:3000");
});


SynerG commented Nov 6, 2015

Also, this is probably obvious, but the issue can be reproduced using 3rd-party libraries built on top of http. I provided the test case requiring only http just to isolate the problem.

Express example:

var express = require('express');
var app = express();

app.get('/', function(req, res){
    callbackValue(function(value){
        res.send(value);
    });
});

After 35 seconds (in a first execution), and 140 seconds (in a second execution):

Requests/sec:   2175.25
Requests/sec:   2142.66
Requests/sec:    846.38
Requests/sec:    853.78

Loopback example:

// In MyModel.js
  MyModel.remoteMethod('getHello', {
    http: {path: '/getHello', verb: 'get'},
    returns: {root: true, type: 'object'}
  });
  MyModel.getHello= function (){
    return promiseValue();
  }

After 25 seconds (1st execution) or 50 seconds (2nd execution):

Requests/sec:   1482.91
Requests/sec:   1406.09
Requests/sec:    731.45
Requests/sec:    725.32

Maybe with more complex libraries the issue shows up sooner? (This isn't clear)

I think we should start logging as much information as possible on the server during execution (for example, using memwatch-next) to find which parameters trigger the 'sweet spot'.

Input from @trevnorris, @petkaantonov or another connoisseur of v8 optimization triggers would be welcome though :)


@evanlucas
Contributor

After putting it through IRHydra, it looks like deopts occur in parserOnHeadersComplete and parserOnMessageComplete when the drop-off occurs. Still looking into why, though.

https://gist.github.com/evanlucas/ed3ac1d748362c6eba29


kzc commented Nov 6, 2015

It always becomes deoptimized when it reaches the value 11.

DEFINE_INT(max_opt_count, 10,
       "maximum number of optimization attempts before giving up.")

DEFINE_INT(max_opt_count, 10,
"maximum number of optimization attempts before giving up.")

You can adjust this value and other DEFINE_ options listed in flag-definitions.h on the node command line. In the case of --max_opt_count=X, though, you'd just be altering when the deoptimization would inevitably happen. Still, you could tweak these settings and see what comes of it. There's probably some speed-versus-memory trade-off at play.
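For reference, these DEFINE_ options are exposed directly as node command-line flags (a sketch; the exact set of flags available depends on the bundled V8 version):

# list the optimization-attempt flags the bundled V8 understands
node --v8-options | grep opt_count

# raise the retry limit while benchmarking and log optimizations as they happen
node --max_opt_count=100 --trace-opt server.js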


omsmith commented Nov 7, 2015

@evanlucas looking at parserOnHeadersComplete, it doesn't seem entirely surprising. It appears to expect either undefined or an Array for headers, with similar type mismatches for a couple of other params as well.

Wouldn't v8 deopt if it optimized for one type and then received another? (I haven't explored whether this should be happening.)

@trevnorris
Contributor

v8 doesn't deoptimize based on the types of the arguments passed, but on how the arguments are used in the function.


omsmith commented Nov 7, 2015

Thanks for the clarification - sorry to add noise

@trevnorris
Contributor

@omsmith No worries. I thought the same for a while until confirming with @mraleph. He told me that the function only checks the number of arguments, not the types. Though since the function operates on the arguments (why wouldn't it?), that's where the deoptimization happens.

As an aside, a polymorphic function that has a lot going on can benefit from having the specialized cases split into multiple functions, with the entry point simply validating the arguments and calling the corresponding function. This way, if the entry point is deoptimized, it causes much less of an impact.
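A hedged sketch of that split-by-type pattern (the function names here are invented for illustration, not taken from Node core):

// The entry point only validates/dispatches; each specialized case lives in
// its own small, monomorphic function. If the dispatcher deopts, the heavy
// per-type work keeps its optimized code.
function write(chunk) {
    if (typeof chunk === 'string') return writeString(chunk);
    if (Buffer.isBuffer(chunk)) return writeBuffer(chunk);
    throw new TypeError('chunk must be a string or a Buffer');
}

function writeString(str) {
    return Buffer.byteLength(str);   // string-only fast path
}

function writeBuffer(buf) {
    return buf.length;               // Buffer-only fast path
}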

@evanlucas
Contributor

I've been digging into this for a couple days now. From what I can tell so far, the deopt in parserOnHeadersComplete occurs because of a change in the hidden class of parser.socket at https://github.com/nodejs/node/blob/master/lib/_http_common.js#L57 and a change in the hidden class of parser.incoming at https://github.com/nodejs/node/blob/master/lib/_http_common.js#L117 in parserOnMessageComplete. I have a feeling that this could be related to the way timers are handled via a linklist. _idleNext and _idlePrev are attached to the socket object and can be different types. Once the slow down occurs, they seem to change from Socket objects to other objects. I'll continue looking into this.
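To see the hidden-class transition itself, here is a small self-contained sketch (FakeSocket is a made-up stand-in, not the real net.Socket; it needs --allow-natives-syntax):

// save as map-check.js and run: node --allow-natives-syntax map-check.js
function FakeSocket(fd) { this.fd = fd; }

var a = new FakeSocket(1);
var b = new FakeSocket(2);
console.log(%HaveSameMap(a, b));  // true: both share one hidden class

b._idleNext = {};                 // adding a property after construction...
console.log(%HaveSameMap(a, b));  // false: b has transitioned to a new map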

@Fishrock123
Contributor

I have a feeling that this could be related to the way timers are handled via a linklist. _idleNext and _idlePrev are attached to the socket object and can be different types. Once the slow down occurs, they seem to change from Socket objects to other objects. I'll continue looking into this.

@evanlucas noted, let me know what you find.

For reference, it attaches to the object "for efficiency", so that extra objects do not need to be created.

Maybe just hard-core enrolling the object to have timers fields manually upon instantiation?

@trevnorris
Contributor

Maybe just hard-core enrolling the object to have timers fields manually upon instantiation?

This is pretty much what we should be doing. On every object instantiation we should be setting all possible future fields to at least null. This is enough to prevent object map mutation.
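A minimal sketch of that approach (a hypothetical constructor, not the actual Socket implementation): pre-declare the fields the timers machinery will later assign, so the object's map never changes after construction.

function FakeSocket(fd) {
    this.fd = fd;
    // declare every field the object may ever carry, even if it is unused for now
    this._idleNext = null;
    this._idlePrev = null;
    this._idleStart = null;
    this._idleTimeout = -1;
}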

@ChALkeR added the performance label Feb 16, 2016

jasnell commented Apr 9, 2016

Ping. Any updates on this one?


targos commented Jan 2, 2017

I ran the test several times with v5, v6 and v7 and can only reproduce the performance drop in v5.
I believe the issue was fixed by a V8 update, so I'm going to close this.

@targos closed this as completed Jan 2, 2017
@MylesBorins
Contributor

@targos did you measure v4?


targos commented Jan 2, 2017

I didn't, but it's probably still affected.

@MylesBorins
Contributor

Would we be able to backport the V8 changes?

@targos added the v4.x label Jan 2, 2017

targos commented Jan 2, 2017

We'd have to identify them. I'm reopening with the v4.x label in case someone wants to do it.

@targos reopened this Jan 2, 2017
@Trott added the help wanted label Jul 15, 2017
@YurySolovyov

Is this still the case on node 8.3+?

@apapirovski
Member

Given that Node.js 4.x is now in maintenance mode (and has less than 5 months left in its life) and there has been absolutely zero interest from anyone in taking this on, I'm going to go ahead and close it, but feel free to reopen if you believe I've made an error.
