
Performance Guidelines


Front-end performance will also be covered later on.

Track the DB/REST/OCPP service performance and data volumes when you code

Here below is the kind of improvement I obtained with the new trace API I put in place, which traces DB performance/volumes as well as REST/OCPP calls/performance/volumes:

Get the Logs in e-Mobility in the local dev env

Before the optimisation (one HTTP request):

<image001.png>

After I optimised it:

<image002.png>
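
The idea behind such a trace is simply to record the elapsed time and the payload size around each DB/REST/OCPP call. Here is a minimal TypeScript sketch of the principle (the PerformanceTracer class, its fields and its log format are illustrative assumptions, not the project’s actual trace API):

```typescript
// Illustrative only: NOT the project's real trace API, just the principle.
export class PerformanceTracer {
  private readonly startTime = Date.now();

  public constructor(
    private readonly service: 'DB' | 'REST' | 'OCPP',
    private readonly operation: string) {
  }

  // Call once the request/query has completed, with the returned payload
  public stop(payload?: unknown): void {
    const durationMillis = Date.now() - this.startTime;
    // Approximate the data volume with the JSON size of the payload
    const sizeKB = Buffer.byteLength(JSON.stringify(payload ?? '')) / 1024;
    console.log(`[${this.service}] ${this.operation}: ${durationMillis} ms, ${sizeKB.toFixed(1)} kB`);
  }
}

// Usage: wrap a MongoDB query to trace its duration and data volume
// const tracer = new PerformanceTracer('DB', 'Loggings.find');
// const logs = await loggingsCollection.find({}).limit(100).toArray();
// tracer.stop(logs);
```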

Check the scalability on your local machine

You can run load tests with ‘autocannon’: https://github.com/mcollina/autocannon

Here is an example with 10 concurrent HTTP requests (pushing to 100 concurrent calls will not help on your local machine 😉):

export SLF_LOCAL_TOKEN="Bearer <JWT_TOKEN>"
autocannon -c 10 -d 10 -m GET -H "Authorization=$SLF_LOCAL_TOKEN" 'http://127.0.0.1:80/client/api/Loggings?Limit=100&SortFields=timestamp&SortDirs=desc'
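
If you prefer to script your load tests (e.g. from an npm script), autocannon also exposes a Node.js API. A minimal TypeScript sketch reusing the same parameters as the CLI call above (the summary fields come from autocannon’s documented result object):

```typescript
import autocannon from 'autocannon';

async function runLoadTest(): Promise<void> {
  const result = await autocannon({
    url: 'http://127.0.0.1:80/client/api/Loggings?Limit=100&SortFields=timestamp&SortDirs=desc',
    connections: 10, // same as -c 10
    duration: 10,    // same as -d 10
    method: 'GET',
    headers: { authorization: 'Bearer <JWT_TOKEN>' },
  });
  // The same three figures to watch as in the CLI output
  console.log(`Average latency: ${result.latency.average} ms`);
  console.log(`Average requests/sec: ${result.requests.average}`);
  console.log(`Average throughput: ${(result.throughput.average / 1024).toFixed(0)} kB/s`);
}

runLoadTest().catch(console.error);
```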

And check the results: latency, requests per second and data volume.

Here below, the latency stays low on average (187 ms), with 53 requests/sec and a total of 2 MB of data retrieved, which is really good.

<image003.png>

Don’t forget that in production the traffic will be dispatched across several REST servers more powerful than yours, and that the high-availability MongoDB Atlas cluster will be tuned to handle much more load than your local machine can ever cope with 😊.

Monitor the Node.js server process

You can do that with Clinic: https://clinicjs.org/

First you’ll have to run ‘clinic doctor’ with ‘autocannon’: Clinic will start your server, run autocannon, stop the server at the end, then collect the profiling data and show it in a browser.

Here below are the commands to test the code generated for production:

npm run build:prod
sudo clinic doctor --on-port="autocannon -c 10 -d 10 -m GET -H Authorization='Bearer <JWT_TOKEN>' 'http://127.0.0.1:80/client/api/Loggings?Limit=100&SortFields=timestamp&SortDirs=desc'" -- node -r source-map-support/register ./dist/start.js

Clinic will automatically open a browser with the results of the analysis and give you some recommendations.

Here it has detected an I/O issue (Active Handles), which in this case is related to MongoDB access, but it can also detect a Node.js processing issue (Event Loop):

<image004.png>

You can also see that we don’t have memory leaks: the memory curve does not keep going up after garbage collections.

According to the recommendations (based on machine learning), you can drill down into more details with:

- Clinic Bubbleprof: when having high Active Handles issues (I/O)
- Clinic Flame: when having high Event Loop issues (processing time)

Here we have an I/O issue, so let’s run Clinic Bubbleprof.

Simply replace ‘doctor’ with ‘bubbleprof’:

sudo clinic bubbleprof --on-port="autocannon -c 10 -d 10 -m GET -H Authorization='Bearer <JWT_TOKEN>' 'http://127.0.0.1:80/client/api/Loggings?Limit=100&SortFields=timestamp&SortDirs=desc'" -- node -r source-map-support/register ./dist/start.js

We can see that 10 seconds out of 20 (startup also took 10 seconds) were spent in the MongoDB package, so you’ll have to work on improving the MongoDB request:

<image005.png>
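
Typical improvements at this level are making sure the sort is backed by an index and projecting only the fields that are actually needed. A minimal sketch with the standard ‘mongodb’ Node.js driver (the database/collection names and the projected fields are assumptions for illustration, not the project’s actual schema):

```typescript
import { MongoClient } from 'mongodb';

// Hypothetical names: adapt the database, collection and fields to the real schema
async function optimizeLoggingsQuery(client: MongoClient): Promise<void> {
  const loggings = client.db('evse').collection('logs');
  // 1. Back the sort with an index so MongoDB does not scan the whole
  //    collection and sort it in memory
  await loggings.createIndex({ timestamp: -1 });
  // 2. Project only the fields the UI needs to reduce the data volume on the wire
  const logs = await loggings
    .find({}, { projection: { timestamp: 1, level: 1, source: 1, message: 1 } })
    .sort({ timestamp: -1 })
    .limit(100)
    .toArray();
  console.log(`Retrieved ${logs.length} log entries`);
}
```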

If Clinic Doctor detects an Event Loop issue, you can run Clinic Flame.

Simply replace ‘doctor’ with ‘flame’:

sudo clinic flame --on-port="autocannon -c 10 -d 10 -m GET -H Authorization='Bearer <JWT_TOKEN>' 'http://127.0.0.1:80/client/api/Loggings?Limit=100&SortFields=timestamp&SortDirs=desc'" -- node -r source-map-support/register ./dist/start.js

You can see here the Flame progress bar at the top and the details below: part of the processing time is spent in the MongoDB driver and the rest in many calls across other libs:

<image006.png>

Take these guidelines into consideration for all your developments.