[🚀 Feature]: Recorded Video File Names #10018

Closed
LukeIGS opened this issue Nov 9, 2021 · 12 comments

Comments


LukeIGS commented Nov 9, 2021

Feature and motivation

In the current implementation of Grid 4 video recording, videos are saved using a file name that is determined when the ffmpeg container starts, at least in a static grid configuration. (The dynamic grid works roughly the same way.)

The issue with this is that there's no great way to identify a video named "video.mp4", or even "edge_video.mp4", when saving videos to mounted storage of any kind, short of scripting a client-side rename of the generated file (which would be prone to all sorts of race conditions).

My thought on a solution would be to provide the file name to the video recording container when recording is started by the Selenium session, passing the session's sessionId as the file name. Since the sessionId is visible to the client, this would make it easy to link a video to a given test run on the client side.

Usage example

Given a static grid where each video container mounts a shared storage directory...

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-chrome-node-deployment
  labels:
    app: selenium-chrome-node
    name: selenium-chrome-node
    component: "selenium-grid-4"
spec:
  replicas: 10
  selector:
    matchLabels:
      app: selenium-chrome-node
  template:
    metadata:
      labels:
        app: selenium-chrome-node
        name: selenium-chrome-node
        component: "selenium-grid-4"
    spec:
      containers:
        - name: video
          image: selenium/video
          resources:
            limits:
              memory: "1Gi"
              cpu: "1"
          volumeMounts:
            - name: video
              mountPath: /video
        - name: selenium
          image: selenium/node-chrome:latest
          env:
            - name: SE_EVENT_BUS_HOST
              value: "selenium-event-bus"
            - name: SE_EVENT_BUS_PUBLISH_PORT
              value: "4442"
            - name: SE_EVENT_BUS_SUBSCRIBE_PORT
              value: "4443"
            - name: VNC_NO_PASSWORD
              value: "true"
            - name: "SE_OPTS"
              value: "--log-level FINE --grid-url https://selenium-grid.example.com"
          ports:
            - containerPort: 5553
              protocol: TCP
            - containerPort: 5555
              protocol: TCP 
          volumeMounts:
            - name: dshm
              mountPath: /dev/shm
          resources:
            requests:
              memory: "1Gi"
              cpu: "1"
            limits:
              memory: "1Gi"
              cpu: "1"
      volumes:
        - name: video
          flexVolume:
            driver: 'fstab/cifs'
            fsType: 'cifs'
            secretRef:
              name: 'selenium-video'
            options:
              networkPath: '//examplefs/qa/selenium-videos'
              mountOptions: 'dir_mode=0777,file_mode=0777,vers=3.0,domain=EXAMPLEDOMAIN'
        - name: dshm
          emptyDir: { "medium": "Memory" }

Using Ruby:

# Shared storage that the video containers mount (see the manifest above)
FILE_SERVER_PATH = '//examplefs/qa/selenium-videos/'

@driver = Selenium::WebDriver.for :remote, url: 'https://selenium-grid.example.com', desired_capabilities: :chrome
@session_id = @driver.session_id
# With the proposed change, the recording would be written as <sessionId>.mp4
@video_file_path = "#{FILE_SERVER_PATH}#{@session_id}.mp4"

From there the path could be sent to any reporting aggregator of your choice for easy linking of videos to test results, and the file could be managed entirely from the client side, for example removed when a test passes or renamed to something more useful, as in the sketch below.
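
For example, that client-side management could look something like this (assuming the CIFS share above is also mounted on the machine running the tests; test_passed? and test_name are placeholders supplied by your test framework):

require 'fileutils'

if test_passed?
  # Drop recordings of passing tests to keep the share small
  FileUtils.rm_f(@video_file_path)
else
  # Keep failures, renamed to something more meaningful than the session id
  FileUtils.mv(@video_file_path, "#{FILE_SERVER_PATH}#{test_name}.mp4")
end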


github-actions bot commented Nov 9, 2021

@LukeIGS, thank you for creating this issue. We will troubleshoot it as soon as we can.


Info for maintainers

Triage this issue by using labels.

If information is missing, add a helpful comment and then the I-issue-template label.

If the issue is a question, add the I-question label.

If the issue is valid but there is no time to troubleshoot it, consider adding the help wanted label.

After troubleshooting the issue, please add the R-awaiting answer label.

Thank you!


LukeIGS commented Nov 10, 2021

Looking at the implementation, it appears this actually isn't as simple as I initially thought.

It seems like in the current design, video recording only works in dynamic grid mode. Even if you deploy the video recorder and the Selenium node container in the same pod (basically a shared Docker network in k8s), as in the k8s manifest above, the video container will expect a display to open on selenium:99. However, se:recordVideo is only honored when the node's role is docker, so there's no way to flag the adjacent Selenium container to actually expose a display on :99.

Furthermore, the video container doesn't continuously poll its adjacent node and exits after the first recording, so even if there were, video containers would still have to be dynamically allocated each time a test with se:recordVideo = true was requested on a given node.

I'm currently thinking through a design; it almost seems like the one-shot k8s nodes should support this functionality too. I'm not a fan of needing a hybrid of direct Docker and k8s orchestration, since k8s isn't great at detecting resource usage by containers it's not orchestrating.
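
For reference, this is roughly how recording is requested today via se:recordVideo, assuming Selenium 4 Ruby bindings and a dynamic grid (node-docker) deployment; the grid URL is just a placeholder. On a static grid node like the one above, the capability currently has no effect:

options = Selenium::WebDriver::Options.chrome
# Ask the docker node to spin up a video container for this session
options.add_option('se:recordVideo', true)

driver = Selenium::WebDriver.for :remote,
                                 url: 'https://selenium-grid.example.com',
                                 capabilities: options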


qalinn commented Dec 28, 2021

Hi @LukeIGS,

Any update on this issue?
Do you know if there is any option to record my tests when deploying Selenium Grid 4 on Kubernetes?


qalinn commented Jan 10, 2022

@titusfortner I really need this feature in my CI. May I help somehow?


diemol commented Jan 10, 2022

@qalinn this is not possible right now, and the implementation is not straightforward, as we have not defined how to scale the Grid in Kubernetes; all of this is well pointed out by @LukeIGS in the previous comment.

@LukeIGS, just wondering, have you tried to deploy the node-docker in a pod? That one could start the browser and the video container.


qalinn commented Jan 11, 2022

@diemol Hello!
Please find below a full example of what I did. Please pay attention to how I set the DISPLAY_CONTAINER_NAME environment variable: the value is localhost. I tried using the name of the container, but that doesn't work on k8s. The problem is that I cannot get the video until I stop the video container, even if the file is already present (it's the same situation with Docker).
Once the video part works, I will create a PR with a Kubernetes example.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-chrome-node-deployment
  namespace: selenium
  labels:
    app: selenium-chrome-node
    name: selenium-chrome-node
    component: "selenium-grid-4"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: selenium-chrome-node
  template:
    metadata:
      labels:
        app: selenium-chrome-node
        name: selenium-chrome-node
        component: "selenium-grid-4"
    spec:
      containers:
        - name: video
          image: selenium/video
          env:
            - name: DISPLAY_CONTAINER_NAME
              value: "localhost"
            - name: FILE_NAME
              value: chrome_video.mp4
          resources:
            limits:
              memory: "1Gi"
              cpu: "1"
          volumeMounts:
            - name: video
              mountPath: /videos
        - name: selenium-chrome-node
          image: selenium/node-chrome:4.1.0
          env:
            - name: SE_EVENT_BUS_HOST
              value: "selenium-event-bus"
            - name: SE_EVENT_BUS_PUBLISH_PORT
              value: "4442"
            - name: SE_EVENT_BUS_SUBSCRIBE_PORT
              value: "4443"
          ports:
            - containerPort: 5553
              protocol: TCP
            - containerPort: 5555
              protocol: TCP
          volumeMounts:
            - name: dshm
              mountPath: /dev/shm
          resources:
            requests:
              memory: "1Gi"
              cpu: "1"
            limits:
              memory: "1Gi"
              cpu: "1"
      nodeSelector:
        typeInstance: selenium
      volumes:
        - name: dshm
          emptyDir: { "medium": "Memory" }
        - name: video
          persistentVolumeClaim:
            claimName: efs-claim


diemol commented Jan 11, 2022

@qalinn with that approach, the containers need to stop gracefully to get the video and avoid corrupting the file.


qalinn commented Jan 11, 2022

@diemol On Docker, how can I do it without stopping the video container?


diemol commented Jan 11, 2022

The container needs to stop in order to have ffmpeg shut down gracefully. If the file is still open, you won't be able to see the video.
https://github.com/seleniumhq/docker-selenium/#dynamic-grid- does that for you.
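
Roughly, the manual workaround from the client side looks something like this (the container name and output path are placeholders, and it assumes the test process can reach the Docker CLI that started the video container):

require 'timeout'

VIDEO_CONTAINER = 'video'                  # hypothetical container name
VIDEO_FILE = '/videos/chrome_video.mp4'    # hypothetical mounted output path

# Stopping the container sends SIGTERM, letting ffmpeg flush and close the file
system('docker', 'stop', VIDEO_CONTAINER)

# Wait until the finalized file shows up and is non-empty
Timeout.timeout(60) do
  sleep 1 until File.exist?(VIDEO_FILE) && !File.zero?(VIDEO_FILE)
end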

For more questions, please join us in the IRC/Slack channel where the community can help you as well.


LukeIGS commented Jan 13, 2022

I did try deploying it alongside the container and ran into similar issues as @qalinn. k8s orchestration currently kills the pod early, which corrupts the video, and there's no easy way to send a termination signal to the video container when the test ends. At the moment that's handled by the selenium/node-docker image, which hits some unique code located in the files listed below. Replicating that in k8s requires knowledge of the underlying Docker socket on the k8s node (@diemol this part's relevant to you), assuming it actually has one and isn't running a different engine. Then we get into a messy situation with volumes: you pretty much need a host volume whose path lines up on both the host node and the worker container, so the path is the same across any child containers too. Also, k8s will not be aware of any child containers spun up if you just deploy the node-docker image to your k8s cluster, which means you're in danger of overloading your k8s nodes, which will lead to cluster instability.

TL;DR: We basically need a k8s Selenium worker that provides similar functionality to the docker-selenium one; the relevant code can be found in these files (mostly providing this in case anyone relatively new to Selenium picks this up...):
java/src/org/openqa/selenium/grid/node/docker/DockerSessionFactory.java
java/src/org/openqa/selenium/grid/node/docker/DockerSession.java
java/src/org/openqa/selenium/grid/node/docker/DockerOptions.java
java/src/org/openqa/selenium/grid/node/docker/DockerFlags.java

PS:
The other issue is that if you DO deploy a pod containing the video image, it sits and waits for some time period (IIRC, 3 minutes), and you HAVE to execute a test within that time frame; otherwise it gives up waiting for a recording and kills itself, which I suppose just further reinforces that we need a k8s driver.


diemol commented Feb 22, 2022

I am going to close this one because this should be part of #9845.

diemol closed this as completed Feb 22, 2022

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

github-actions bot locked and limited conversation to collaborators Mar 25, 2022