[🚀 Feature]: Recorded Video File Names #10018
Comments
@LukeIGS, thank you for creating this issue. We will troubleshoot it as soon as we can.

Info for maintainers: triage this issue by using labels.

- If information is missing, add a helpful comment and then
- If the issue is a question, add the
- If the issue is valid but there is no time to troubleshoot it, consider adding the
- After troubleshooting the issue, please add the

Thank you!
Looking at the implementation, this actually isn't as simple as I initially thought. In the current design, video recording only works in dynamic grid mode. Even if you deploy the video recorder and the Selenium node container in the same pod (essentially a shared Docker network in k8s), as in the k8s manifest above, the video node will expect a display to open on selenium:99. However, se:recordVideo is only honored when the node's role is docker, so there's no way to flag the adjacent Selenium container to actually expose a display on :99.

Furthermore, the video container doesn't continuously poll its adjacent node; it exits after the first recording. So even if the display were available, video containers would still have to be dynamically allocated each time a test requesting se:recordVideo = true was spun up on a given node.

I'm currently thinking through a design; it almost seems like the one-shot k8s nodes should support this functionality too. I'm not a fan of needing a hybrid of direct Docker and k8s orchestration, since k8s doesn't do so great at detecting resource usage by containers that it's not orchestrating.
Hi @LukeIGS, any update on this issue?
@titusfortner I really need this feature in my CI. May I help somehow?
@qalinn this is not possible right now, and the implementation is not straightforward, as we have not defined how to scale the Grid in Kubernetes; all this was well pointed out by @LukeIGS in the previous comment. @LukeIGS, just wondering, have you tried to deploy the
@diemol Hello! apiVersion: apps/v1 |
@qalinn with that approach, the containers need to stop gracefully to get the video and avoid corrupting the file.
@diemol On Docker, how can I do it without stopping the video container?
The container needs to stop gracefully in order to have a complete, non-corrupted video file.

For more questions, please join us in the IRC/Slack channel where the community can help you as well.
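One way to approximate that graceful stop in Kubernetes is a video sidecar with a `preStop` hook and a generous termination grace period. This is only a sketch, not an official recipe: the container names, the `pkill` command, the mount path, and the assumption that the video image finalizes the file when ffmpeg receives SIGINT are all hypothetical here.

```yaml
# Sketch only: assumes the video image stops ffmpeg cleanly on SIGINT,
# which is what lets the MP4 be finalized before the pod dies.
apiVersion: v1
kind: Pod
metadata:
  name: chrome-with-video        # hypothetical name
spec:
  terminationGracePeriodSeconds: 60   # give ffmpeg time to finish writing
  containers:
    - name: browser
      image: selenium/node-chrome:latest
    - name: video
      image: selenium/video:latest
      lifecycle:
        preStop:
          exec:
            # hypothetical: interrupt ffmpeg, then wait before the kill
            command: ["/bin/sh", "-c", "pkill -INT ffmpeg; sleep 10"]
      volumeMounts:
        - name: videos
          mountPath: /videos
  volumes:
    - name: videos
      hostPath:
        path: /tmp/videos        # shared storage; hypothetical path
```

Even with this, the pod is only stopped when Kubernetes decides to stop it, not when the test session ends, so it does not solve the per-session recording problem discussed above.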
I did try deploying it alongside the container and ran into similar issues as @qalinn: k8s orchestration currently kills the pod early, which corrupts the video, and there's no easy way to send a termination signal to the video container when the test ends. That is currently handled by the selenium/node-docker image, which hits some unique code located in the following files. Replicating that in k8s requires knowledge of the underlying Docker socket on the k8s node (@diemol this part's relevant to you), assuming it actually has one and isn't running a different container engine. Then we get into a messy situation with volumes: you pretty much need a host volume whose path lines up on both the host node and the worker container, so the path is the same across any child containers too. Also, k8s will not be aware of any child containers spun up if you just deploy node-docker to your k8s cluster, which means you're in danger of overloading your k8s nodes, which will lead to cluster instability.

TL;DR: We basically need a k8s Selenium worker that provides similar functionality to the docker-selenium one; relevant code can be found in these files. (Mostly providing this in case anyone relatively new to Selenium picks this up...)

PS:
I am going to close this one because this should be part of #9845 |
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
Feature and motivation
In the current implementation of Grid 4 video recording, videos are saved using a file name that is determined at the start of the ffmpeg container when running in a static grid configuration. (Dynamic grid works roughly the same way.)
The issue with this is that there's no good way to identify a video named "video.mp4", or even "edge_video.mp4", if one attempts to save that video to mounted storage of any kind, short of scripting a client-side rename of the generated video (which would be prone to all sorts of race conditions).
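To make the race concrete, here is a sketch of that client-side rename workaround (the directory layout and the `claim_video` helper are hypothetical). Nothing prevents a concurrent session from overwriting `video.mp4` between the recording finishing and the rename running:

```ruby
require 'fileutils'
require 'tmpdir'

# Fragile workaround: rename the fixed-name recording after the test.
# With several sessions sharing one mount, another recorder may replace
# video.mp4 before (or while) this runs -- the race described above.
def claim_video(video_dir, test_id)
  src = File.join(video_dir, 'video.mp4')
  dst = File.join(video_dir, "#{test_id}.mp4")
  FileUtils.mv(src, dst) # not atomic with respect to a recorder still writing
  dst
end

# Demonstration with a throwaway directory standing in for the mount:
Dir.mktmpdir do |dir|
  File.write(File.join(dir, 'video.mp4'), 'fake mpeg bytes')
  puts claim_video(dir, 'my-test-123')
end
```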
My thought on a solution would be to provide the video name to the video recording container when recording is started by the Selenium session, passing the session's sessionId as the file name. Since the sessionId is visible to the client, this would make it easy to link a video to a given test run on the client side.
Usage example
Given a static grid where each video container mounts a shared storage directory...
Using ruby:
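The original Ruby snippet did not survive the page extraction. A minimal sketch of the idea follows; the `/videos` mount path, the `video_path_for` helper, and the hard-coded session id are assumptions, and in a real test `session_id` would come from `driver.session_id` in selenium-webdriver:

```ruby
# Sketch of the proposed flow: the recorder names the file after the
# WebDriver session id, so the client can reconstruct the path from an
# id it already knows. No live Grid is needed for this illustration.
VIDEO_DIR = '/videos' # hypothetical shared storage mounted into each video container

def video_path_for(session_id, dir: VIDEO_DIR)
  File.join(dir, "#{session_id}.mp4")
end

# In a real test: session_id = driver.session_id
session_id = '5f8a2c1e9b4d4e7f8a1b2c3d4e5f6a7b' # hypothetical id
puts video_path_for(session_id)
```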
From there it could be sent to any reporting aggregator of your choice for easy linking of video to test results, and managed (such as being removed in the event of a passing test or renamed to something more useful) entirely from the client side.