
tweak latency function #327

Merged 4 commits into lbbrhzn:main on Jan 23, 2022
Conversation

drc38 (Collaborator) commented Jan 23, 2022

@lbbrhzn I think it is best for the latency function not to close the connection or handle unexpected connection exceptions itself (leaving that to the main charger routine), and, if it does time out, to still log the 20000 ms latency value.
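The behaviour described here could look roughly like this (a minimal sketch, not the PR's actual code; the function and parameter names are illustrative, and the 20 s default timeout matches the 20000 ms figure above):

```python
import asyncio
import logging
import time

async def get_latency(ping, timeout=20):
    """Measure the round-trip latency of a ping coroutine in ms.

    On timeout, report the full timeout as the measured latency
    instead of closing the connection; connection management is
    left to the caller (the main charger routine).
    """
    start = time.monotonic()
    try:
        await asyncio.wait_for(ping(), timeout=timeout)
        return round((time.monotonic() - start) * 1000)
    except asyncio.TimeoutError:
        # Still log/report the full timeout latency rather than raising
        # or closing the connection here.
        logging.debug("ping timed out, reporting %d ms", timeout * 1000)
        return round(timeout * 1000)
```

The key point of the sketch is that `asyncio.TimeoutError` is absorbed and converted into a latency value, so the caller keeps full control over whether the connection survives.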

drc38 requested a review from lbbrhzn on January 23, 2022 08:18
lbbrhzn (Owner) commented Jan 23, 2022

Hi @drc38, my motivation for closing the connection was this: when there is a timeout, either the network or the charger response is seriously degraded. Likely the charger will experience similar issues and try to reconnect. We don't want old connections to remain lingering, so we need to close at some point. Due to the latency measurement the normal ping/pong timeout mechanism of websockets will no longer work, so we need to do that manually.

How does this scenario play out with your changes? How do we close a previous connection when the charger reconnects? Should we keep the connection open indefinitely if it doesn't?

```diff
@@ -318,7 +318,7 @@ def on_remote_stop_transaction(self, **kwargs):
     @on(Action.SetChargingProfile)
     def on_set_charging_profile(self, **kwargs):
         """Handle set charging profile request."""
-        return call_result.SetChargingProfilePayload(ChargingProfileStatus.accepted)
+        return call_result.SetChargingProfilePayload(ChargingProfileStatus.rejected)
```
lbbrhzn (Owner):

I don't think this has anything to do with the latency measurement?!

lbbrhzn (Owner):

I guess this is only for increasing test coverage.

drc38 (Collaborator, Author):

Correct, it was just to get test coverage >90%
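As a self-contained illustration of why the rejected reply raises coverage: the real code uses the ocpp library's `call_result.SetChargingProfilePayload`, but the enum and handler below only mirror it, and `set_profile` is a hypothetical caller standing in for the integration's logic.

```python
from enum import Enum

class ChargingProfileStatus(str, Enum):
    """Stand-in for the ocpp library's ChargingProfileStatus enum."""
    accepted = "Accepted"
    rejected = "Rejected"

def on_set_charging_profile(**kwargs):
    # Mock charge point handler: reply "Rejected" so the test suite
    # also exercises the integration's rejection-handling branch.
    return ChargingProfileStatus.rejected

def set_profile(handler):
    # Hypothetical caller: which branch runs depends on the charger's reply,
    # so an always-accepting mock leaves the rejection branch uncovered.
    status = handler()
    if status == ChargingProfileStatus.accepted:
        return "profile applied"
    return "profile rejected"
```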

drc38 (Collaborator, Author) commented Jan 23, 2022

> Hi @drc38, my motivation for closing the connection was this: when there is a timeout, either the network or the charger response is seriously degraded. Likely the charger will experience similar issues and try to reconnect. We don't want old connections to remain lingering, so we need to close at some point. Due to the latency measurement the normal ping/pong timeout mechanism of websockets will no longer work, so we need to do that manually.
>
> How does this scenario play out with your changes? How do we close a previous connection when the charger reconnects? Should we keep the connection open indefinitely if it doesn't?

My understanding is that the client should initiate any close, including when the transport layer is degraded. I don't think it is possible to have a lingering connection if the client reconnects (i.e. it has to close the old connection first). Leaving it to the client may also prevent unnecessary closes when the transport layer degrades, but not for long enough for the charger to close the socket.

See https://datatracker.ietf.org/doc/html/rfc6455#section-7.2.1

lbbrhzn (Owner) commented Jan 23, 2022

My understanding is that either end can initiate a closing handshake in case of timeouts. See https://websockets.readthedocs.io/en/stable/topics/timeouts.html
Also note that the server can close at any time: see https://datatracker.ietf.org/doc/html/rfc6455#section-7.3

drc38 (Collaborator, Author) commented Jan 23, 2022

> My understanding is that either end can initiate a closing handshake in case of timeouts. See https://websockets.readthedocs.io/en/stable/topics/timeouts.html Also note that the server can close at any time: See https://datatracker.ietf.org/doc/html/rfc6455#section-7.3

I guess it is more a question of design philosophy since, as you say, either end can close the connection. My wifi connection has medium signal strength and a typical latency of around 250 ms, but at times it rises to 3-4 s and has reached 20 s without causing the charger to initiate a disconnect. The server could have dropped the connection, but functionally it would not have made a difference, as the transport layer returned to normal (possibly by the next ping/pong cycle).

If you'd prefer the server to drop the connection, I'm ok with that. Perhaps rename the function to manage_connection_latency to make it clear it is actively managing the connection rather than taking a passive measurement, and add debug statements to confirm that the server/integration is closing the connection for each exception.
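The suggested "active" variant might look like this (again only a sketch; `ping` and `close` are stand-ins for whatever the integration uses to ping and to close the websocket):

```python
import asyncio
import logging
import time

async def manage_connection_latency(ping, close, timeout=20):
    """Measure round-trip latency in ms; on timeout, log and actively
    close the connection, as the function name now advertises."""
    start = time.monotonic()
    try:
        await asyncio.wait_for(ping(), timeout=timeout)
        return round((time.monotonic() - start) * 1000)
    except asyncio.TimeoutError:
        # Debug statement confirming that it is the server/integration
        # closing the connection on this exception.
        logging.debug(
            "ping timed out after %d ms, closing connection", timeout * 1000
        )
        await close()
        return round(timeout * 1000)
```

Compared with a passive measurement, the only behavioural difference is the explicit `await close()` in the timeout branch, which is what the rename is meant to signal.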

lbbrhzn (Owner) commented Jan 23, 2022

Let's revisit when the need arises.

lbbrhzn merged commit 6f9dba3 into lbbrhzn:main on Jan 23, 2022