tweak latency function #327
Conversation
Hi @drc38, my motivation for closing the connection was this: when there is a timeout, either the network or the charger response is seriously degraded. Likely the charger will experience similar issues and try to reconnect. We don't want old connections to remain lingering, so we need to close at some point. Due to the latency measurement the normal ping/pong timeout mechanism of websockets will no longer work, so we need to do that manually. How does this scenario play out with your changes? How do we close a previous connection when the charger reconnects? Should we keep the connection open indefinitely if it doesn't?
```diff
@@ -318,7 +318,7 @@ def on_remote_stop_transaction(self, **kwargs):
     @on(Action.SetChargingProfile)
     def on_set_charging_profile(self, **kwargs):
         """Handle set charging profile request."""
-        return call_result.SetChargingProfilePayload(ChargingProfileStatus.accepted)
+        return call_result.SetChargingProfilePayload(ChargingProfileStatus.rejected)
```
I don't think this has anything to do with the latency measurement?!
I guess this is only for increasing test coverage.
Correct, it was just to get test coverage >90%
My understanding is that the client should initiate any close, including when the transport layer is degraded. I don't think a lingering connection is possible if the client reconnects (i.e. it has to close first). Leaving it to the client may avoid unnecessary closes in cases where the transport layer degrades, but not for long enough for the charger to close the socket itself. See https://datatracker.ietf.org/doc/html/rfc6455#section-7.2.1
My understanding is that either end can initiate a closing handshake in case of timeouts. See https://websockets.readthedocs.io/en/stable/topics/timeouts.html
I guess it is more a design philosophy, because as you say either end can close the connection. My wifi connection has medium signal strength and typical latency around 250 ms, but at times it goes to 3-4 s, and it has gone to 20 s without causing the charger to initiate a disconnect. The server could have dropped the connection, but functionally it would not have made a difference, as the transport layer went back to normal (possibly by the next ping/pong cycle). If you'd prefer the server to drop the connection I'm OK with that; perhaps rename the function to manage_connection_latency to make it clear that it is actively managing the connection rather than taking a passive measurement, and add to the debug statements to confirm the server/integration is closing the connection for each exception.
Let's revisit when the need arises.
@lbbrhzn I think it is best for the latency function not to close the connection or manage unexpected connection exceptions (leaving that to the main charger routine), and, if it does time out, to still log the 20000 ms latency value.
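That proposal could be sketched as follows. This is a hedged illustration only: the name `measure_latency` and the `timeout_ms` parameter are hypothetical, and `ws` is again assumed to expose a websockets-style `ping()`. On timeout the function logs and returns the capped value instead of closing; closing and reconnect handling stay with the main charger routine:

```python
import asyncio
import logging
import time

_LOGGER = logging.getLogger(__name__)

# Capped value reported on timeout, matching the 20000 ms mentioned above.
TIMEOUT_MS = 20000


async def measure_latency(ws, timeout_ms: int = TIMEOUT_MS) -> int:
    """Passively measure round-trip latency in ms; never close the socket.

    On ping timeout, the capped latency is still logged and returned,
    and any close/reconnect handling is left to the caller.
    """
    start = time.monotonic()
    try:
        pong_waiter = await ws.ping()
        await asyncio.wait_for(pong_waiter, timeout=timeout_ms / 1000)
        return round((time.monotonic() - start) * 1000)
    except asyncio.TimeoutError:
        _LOGGER.debug("Ping timed out, reporting capped latency of %d ms", timeout_ms)
        return timeout_ms
```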