
hapi server throw [FIN, ACK] and [RST] #3738

Closed
morugu opened this issue Feb 6, 2018 · 10 comments

Labels
non issue Issue is not a problem or requires changes

@morugu

morugu commented Feb 6, 2018

Detail

I use hapi as an API server on AWS EC2.
The Load Balancer returns a 502 error several times a day.
Looking at the TCP stream, the server received a POST request while in the FIN-WAIT-1 state and replied with RST, which appears to be what causes the 502.
AWS Technical Support asked me, "Does Node.js properly implement close processing for requests?" How can I properly perform close processing with hapi?

TCP dump

60090 2018-02-01 08:43:23.014057    ALB_IP           NODE_SERVER_IP         TCP      74     23495 → 3000 [SYN] Seq=0 Win=26883 Len=0 MSS=8961 SACK_PERM=1 TSval=37350007 TSecr=0 WS=256
60091 2018-02-01 08:43:23.014086    NODE_SERVER_IP         ALB_IP           TCP      74     3000 → 23495 [SYN, ACK] Seq=0 Ack=1 Win=26847 Len=0 MSS=8961 SACK_PERM=1 TSval=45343656 TSecr=37350007 WS=128
60092 2018-02-01 08:43:23.014462    ALB_IP           NODE_SERVER_IP         TCP      66     23495 → 3000 [ACK] Seq=1 Ack=1 Win=27136 Len=0 TSval=37350007 TSecr=45343656
60093 2018-02-01 08:43:23.014487    ALB_IP           NODE_SERVER_IP         HTTP     722    POST POST_PATH HTTP/1.1  (application/json)
60094 2018-02-01 08:43:23.014493    NODE_SERVER_IP         ALB_IP           TCP      66     3000 → 23495 [ACK] Seq=1 Ack=657 Win=28160 Len=0 TSval=45343656 TSecr=37350007
60099 2018-02-01 08:43:23.017253    NODE_SERVER_IP         ALB_IP           HTTP     293    HTTP/1.1 200 OK  (application/json)
60100 2018-02-01 08:43:23.017505    ALB_IP           NODE_SERVER_IP         TCP      66     23495 → 3000 [ACK] Seq=657 Ack=228 Win=28160 Len=0 TSval=37350008 TSecr=45343657
60142 2018-02-01 08:43:28.019353    NODE_SERVER_IP         ALB_IP           TCP      66     3000 → 23495 [FIN, ACK] Seq=228 Ack=657 Win=28160 Len=0 TSval=45344908 TSecr=37350008
60143 2018-02-01 08:43:28.019908    ALB_IP           NODE_SERVER_IP         HTTP     745    POST POST_PATH HTTP/1.1  (application/json)
60144 2018-02-01 08:43:28.019931    NODE_SERVER_IP         ALB_IP           TCP      54     3000 → 23495 [RST] Seq=229 Win=0 Len=0
60145 2018-02-01 08:43:28.019942    ALB_IP           NODE_SERVER_IP         TCP      66     23495 → 3000 [FIN, ACK] Seq=1336 Ack=229 Win=28160 Len=0 TSval=37351258 TSecr=45344908
60146 2018-02-01 08:43:28.019946    NODE_SERVER_IP         ALB_IP           TCP      54     3000 → 23495 [RST] Seq=229 Win=0 Len=0

Code

const uuid = require('uuid');   // assumed import for the uuid module used below

internals.handler = (request, reply) => {

    // Generate a random hex identifier and return it as JSON with a 200
    const random = uuid.v4().replace(/-/g, '');

    reply({
        uuid: random
    }).code(200);
};

Request Flow

Application Load Balancer -> EC2 (hapi on Node.js, port 3000)

Environment

  • node version: 8.9.4
  • hapi version: 16.6.2
  • os: Ubuntu 16.04

Related issue?

#3480

@hueniverse
Contributor

@kanongil any ideas on this?

@mtharrison
Contributor

mtharrison commented Feb 7, 2018

Looks like a race condition of sorts.

A response is sent to the client for the original request. The server waits exactly 5 seconds for more data and then closes its side of the connection with FIN, ACK. The client sends another request immediately after, but the server considers the connection already closed, so it sends RST.

There are a couple of things that come to mind:

  • Why is the server timing out after 5s? Do you have any special timeout settings there?
  • If so, maybe keep-alive on the nginx proxy isn't the best idea. (Not sure why I assumed nginx here; is it AWS ELB?)

@mtharrison
Contributor

mtharrison commented Feb 7, 2018

I guess the 5 seconds comes from the default Node server keep-alive timeout: https://nodejs.org/dist/latest-v8.x/docs/api/http.html#http_server_keepalivetimeout

You may want to tweak the Node/ELB timeout settings to arrive at something more harmonious.
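As an illustration (not from this thread), the usual advice is to keep Node's keep-alive timeout longer than the load balancer's idle timeout, so the load balancer, not Node, is the side that closes idle connections. A minimal sketch with a plain Node HTTP server; the 60-second ALB idle timeout and the 65-second value are assumptions:

const http = require('http');

const server = http.createServer((req, res) => {

    res.end('ok');
});

// ALB idle timeout defaults to 60s (assumed); keeping Node's keep-alive timeout
// above it means the ALB closes idle connections instead of racing the backend.
server.keepAliveTimeout = 65e3;

server.listen(3000);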

@morugu
Author

morugu commented Feb 7, 2018

@mtharrison thanks, I'll try it.

@morugu
Author

morugu commented Feb 7, 2018

Is there a keep-alive timeout setting in hapi's options?
I can't find it in the docs.

https://hapijs.com/api/16.6.2#serverconnections
https://hapijs.com/api/16.6.2#serverstartcallback

@mtharrison
Contributor

mtharrison commented Feb 7, 2018

This is related and describes the same issue as yours: nodejs/node#17749. Are you on Node 8? I think Node 8 changed the timeout from 2 minutes to 5 seconds.

If you wanted the old behaviour you'd do something like:

const Hapi = require('hapi');

const server = Hapi.server({ port: ... });
server.listener.keepAliveTimeout = 120e3;

....
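Since the reporter is on hapi 16.6.2, where connections are created explicitly, the equivalent would presumably look like the sketch below. This assumes port 3000 from the request flow above and that server.listener exposes the connection's Node HTTP server (as it does when the server has a single connection); treat it as a sketch, not a verified fix for that exact version:

const Hapi = require('hapi');

const server = new Hapi.Server();
server.connection({ port: 3000 });

// Restore something like the pre-Node-8 behaviour: keep idle
// keep-alive sockets open for 2 minutes instead of 5 seconds.
server.listener.keepAliveTimeout = 120e3;

server.start((err) => {

    if (err) {
        throw err;
    }
});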

@spanditcaa

We've been having intermittent 502s since Node 8 as well, so it was interesting to see this issue. I was able to repro this with a similar configuration in Azure and a loop of POST requests running every 5.01 seconds. I applied @mtharrison's keepAliveTimeout as above and am no longer able to repro the FIN and the resulting 502 from the LB. Thanks!
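For reference, a reproduction loop along those lines could look like the hypothetical sketch below: localhost:3000 and the path are assumptions, and without a load balancer in front the client sees ECONNRESET directly rather than a 502:

const http = require('http');

// Reuse one keep-alive socket so each request races the server's 5s keepAliveTimeout
const agent = new http.Agent({ keepAlive: true, maxSockets: 1 });

const post = () => {

    const req = http.request({
        agent,
        method: 'POST',
        host: 'localhost',          // assumed backend host
        port: 3000,
        path: '/some-path',         // hypothetical path
        headers: { 'content-type': 'application/json' }
    }, (res) => {

        res.resume();
        console.log(res.statusCode);
    });

    // When the RST wins the race the client surfaces ECONNRESET;
    // an ALB in front would translate that into a 502 for its caller.
    req.on('error', (err) => console.error(err.message));
    req.end(JSON.stringify({ ping: true }));
};

setInterval(post, 5010); // every 5.01 seconds, just past Node 8's default 5s keepAliveTimeout
post();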

@hueniverse added the non issue label (Issue is not a problem or requires changes) on Feb 9, 2018
@morugu
Author

morugu commented Feb 10, 2018

solved. thanks!

@ggoodman

Here's my own experience with this new situation: nodejs/node#20256

@lock

lock bot commented Jan 9, 2020

This thread has been automatically locked due to inactivity. Please open a new issue for related bugs or questions following the new issue template instructions.

@lock lock bot locked as resolved and limited conversation to collaborators Jan 9, 2020