
Greetings! and some feedback #1

Closed
wader opened this issue Oct 14, 2022 · 17 comments
Comments

@wader
Member

wader commented Oct 14, 2022

Hi, interesting idea!

Not sure how much time I have to help with the ffbuilds repos, but I will try to help as much as I can. If you haven't seen it before, there is an issue, wader/static-ffmpeg#217, in the static-ffmpeg repo summarising things related to multi-arch builds, including the problems with emulated builds, which are quite problematic.

Another concern about builds that copy binaries (.a files etc.) is making sure they are compatible; I'm not sure what compatibility guarantees alpine and musl give. Maybe it's safest, both compatibility- and security-updates-wise, to somehow tag the lib* builds with the alpine version etc.?

@binoculars
Member

Hey @wader, thanks for any help you can spare.

I think I figured out a clever way to do the multi-arch builds without needing a separate repo, so I may archive this one for reference. Like you said, the multi-arch builds can be problematic. I had to add additional logic to some of the ffbuilds/static-lib* repos in order for them to build on arm/v7 and arm/v6. For ffmpeg specifically, it needs --extra-ldexeflags=-static, but this fails on amd64 and arm64. Weird. Luckily, we have the TARGETPLATFORM global ARG, so we can conditionally add that flag to configure.
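That conditional could be sketched roughly like this, as plain shell rather than the actual ffbuilds Dockerfile (the platform arms and variable names here are illustrative assumptions, not taken from the real repos):

```shell
# Hypothetical sketch: pick the extra configure flag based on the
# TARGETPLATFORM value that buildx provides as a global ARG.
TARGETPLATFORM="${TARGETPLATFORM:-linux/arm/v7}"
EXTRA_LDEXEFLAGS=""
case "$TARGETPLATFORM" in
  linux/arm/v6|linux/arm/v7)
    # only these platforms need (and tolerate) the static exe flag
    EXTRA_LDEXEFLAGS="--extra-ldexeflags=-static"
    ;;
esac
echo "extra configure flags: $EXTRA_LDEXEFLAGS"
# ./configure --enable-static $EXTRA_LDEXEFLAGS ...
```

In a Dockerfile the same `case` would sit inside a single `RUN` step so the flag is decided at build time per platform.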

I plan on keeping the alpine versions in sync via your bump action and dependabot. We may want to pin to hashes for the downstream lib* and ffmpeg repos, but I'm still testing that with multi-arch images. Separate Dockerfiles seem to be the only feasible way to build the multi-arch images, as some of the lib* repos take a very long time to build with QEMU.

Some things on my todo list:

  • Give proper attribution to your original work (note in the README?)
  • Fill out README files
  • Auto merge PRs from bump and dependabot if they pass all checks

@wader
Member Author

wader commented Oct 16, 2022

Hey @wader, thanks for any help you can spare.

👍 I think both projects have their uses, so let's keep in sync and share things. For example, I've wanted to keep the full build as a single Dockerfile because it is easy to use as a base; I have some private and work projects where it's used with various patches for ffmpeg and libraries. In those projects it's also very important that some decoders/encoders are built with the correct optimizations, otherwise performance will just be too slow. The only "safe" and practical way that I've come up with is to do "native" docker builds. As you noticed, many build systems do not like docker's emulation :).

I think I figured out a clever way to do the multi-arch builds without needing a separate repo, so I may archive this one for reference. Like you said, the multi-arch builds can be problematic. I had to add additional logic to some of the ffbuilds/static-lib* repos in order for them to build on arm/v7 and arm/v6. For ffmpeg specifically, it needs --extra-ldexeflags=-static, but this fails on amd64 and arm64. Weird. Luckily, we have the TARGETPLATFORM global ARG, so we can conditionally add that flag to configure.

Yeah it's a bit of a mess to have conditionals and things :(

I plan on keeping the alpine versions in sync via your bump action and dependabot. We may want to pin to hashes for the downstream lib* and ffmpeg repos, but I'm still testing that with multi-arch images. Separate Dockerfiles seem to be the only feasible way to build the multi-arch images, as some of the lib* repos take a very long time to build with QEMU.

As I noted earlier, I think I would try, if possible, to make the build that copies all the libs and builds ffmpeg always copy from builds done with the same alpine version.

Yes, some libs are very slow; librav1e for example takes, I think, several hours to build on the normal GitHub Actions hosts... and maybe it even got OOM-killed the last time I tried.

I've thought about doing the builds by booting up a full QEMU host per arch in a container, instead of using the buildx action. If possible, that should fix some of the build system issues, but maybe there would be other issues? Last time I checked I could not find any existing GitHub action that does this, so I would probably have to figure out how to do it (if it's possible at all?).

Some things on my todo list:

  • Give proper attribution to your original work (note in the README?)

👍

  • Fill out README files
  • Auto merge PRs from bump and dependabot if they pass all checks

Maybe it would be good to have some more sanity tests (one per lib etc.?) when doing auto merges?

@binoculars
Member

For librav1e specifically, I had to add this:

# Fails on fetch without CARGO_NET_GIT_FETCH_WITH_CLI=true and git installed
ENV CARGO_NET_GIT_FETCH_WITH_CLI=true

https://github.com/ffbuilds/static-librav1e/blob/4f85e96efe2fd6f3024256c78529b313178e1bd6/Dockerfile#L31-L32

Otherwise it will get OOM-killed during the build. It seems to be a bit faster on amd64 as well. I was able to get it to compile with qemu for arm, but it is extremely slow. It took over 5 hours to run.

I'm still thinking of a good way to keep alpine tags synced across repos. Probably just a docker tag, like main-alpine_3.16.2, and then reference that tag in all downstream repos with hash pinning.
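A minimal sketch of what such a pinned reference could look like, combining the proposed main-alpine_<version> tag with an image digest (the digest here is just a placeholder for illustration):

```shell
# Hypothetical sketch: compose a downstream image reference that is
# both tagged per alpine version and pinned to a digest.
alpine_version="3.16.2"
digest="sha256:0000000000000000000000000000000000000000000000000000000000000000"  # placeholder
image="ghcr.io/ffbuilds/static-libvmaf-alpine_${alpine_version}:main@${digest}"
echo "$image"
```

A downstream Dockerfile would then use that string in its `FROM` line, so the tag documents the alpine version while the digest actually pins the content.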

For sanity tests, what do you recommend?

@wader
Member Author

wader commented Oct 16, 2022

For librav1e specifically, I had to add this:

# Fails on fetch without CARGO_NET_GIT_FETCH_WITH_CLI=true and git installed
ENV CARGO_NET_GIT_FETCH_WITH_CLI=true

https://github.com/ffbuilds/static-librav1e/blob/4f85e96efe2fd6f3024256c78529b313178e1bd6/Dockerfile#L31-L32

Otherwise it will get OOM-killed during the build. It seems to be a bit faster on amd64 as well. I was able to get it to compile with qemu for arm, but it is extremely slow. It took over 5 hours to run.

Oh, the OOM is related to fetching? Strange, I assumed it was compiler related. Feels nearly like a bug?

I'm still thinking of a good way to keep alpine tags synced across repos. Probably just a docker tag, like main-alpine_3.16.2, and then reference that tag in all downstream repos with hash pinning.

Sounds good

For sanity tests, what do you recommend?

Maybe for the most interesting decoders and encoders, try to encode and decode:

# for each video encoder
ffmpeg -f lavfi -i testsrc -c:v $encoder -t 1s test-$codec-$encoder.mp4
# for each audio encoder
ffmpeg -f lavfi -i sine -c:a $encoder -t 1s test-$codec-$encoder.mp4

# for each video decoder, use the test files from above?
for i in test-$codec-*.mp4; do
  ffmpeg -c:v $decoder -i $i -f null -
done

# for each audio decoder, use the test files from above?
for i in test-$codec-*.mp4; do
  ffmpeg -c:a $decoder -i $i -f null -
done

...

Something like that, trying to at least encode and decode once. Note that it can be tricky to rely on the exit code of the ffmpeg CLI tool; it is known to exit with zero even on errors.
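One hypothetical workaround for the unreliable exit code, sketched here as a helper rather than anything from the actual repos: scan the captured ffmpeg output for error markers instead of trusting the exit code alone (the marker list below is a guess and certainly not exhaustive):

```shell
# Hypothetical helper: return non-zero if a captured ffmpeg log
# contains common error markers, since ffmpeg can exit 0 on errors.
ffmpeg_log_ok() {
  case "$1" in
    *"Error"*|*"Invalid"*|*"Conversion failed"*) return 1 ;;
    *) return 0 ;;
  esac
}
```

Usage could look like `log=$(ffmpeg ... 2>&1); ffmpeg_log_ok "$log" || exit 1` in a sanity-test step.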

@wader
Member Author

wader commented Oct 16, 2022

Feels related to the Rust OOM issue rust-lang/cargo#10583

@binoculars
Member

I found a good way to do matrix builds with different alpine versions and have updated all of the repos to reflect the changes, so now it's a matter of pinning to a sha256 hash.

Is there a good way to use your bump action to pin to a hash? This is what I'm thinking for the static-libaom repo:
https://github.com/ffbuilds/static-libaom/blob/a7dfe23d428b99a730611a19972fda0db754d76c/.github/workflows/docker.yml#L27-L31

ffmpeg would be similar but would have all of the dependencies instead of just vmaf like aom has.

@binoculars
Member

Maybe something like this:

        include:
          - alpine_version: 3.16.2
            # bump vmaf_3.16.2 /libvmaf_version: (sha256:[0-9a-f]{64}) # 3.16.2/ docker:ghcr.io/ffbuilds/static-libvmaf-alpine_3.16.2:main|*
            libvmaf_version: sha256:91173f8b89d81bc3ee8f0a6b0ba3ed11bdbbc3af8315fcdff7165900a893862e # 3.16.2
          - alpine_version: edge
            # bump vmaf_edge /libvmaf_version: (sha256:[0-9a-f]{64}) # edge/ docker:ghcr.io/ffbuilds/static-libvmaf-alpine_edge:main|*
            libvmaf_version: sha256:5286265af89dc30e56017dba681b6a0d07f7dca38b5f1dcc312fc842e25cd9ea # edge

Not sure if that syntax would work or not.

@wader
Member Author

wader commented Oct 18, 2022

bump has kind-of support for that, but not for the docker filter at the moment; e.g. the git and gitrefs filters can do this:

$ bump pipeline 'https://github.com/FFmpeg/FFmpeg.git|^5|@commit'
1326fe9d4c85cca1ee774b072ef4fa337694f2e7

(but I noticed now that I probably have to look into the ^{} handling, messy)

So it would be nice if the docker filter could do docker:....|...|@digest. If I remember correctly, the registry API only supports fetching one digest at a time (via the manifest endpoint), and with the way bump works at the moment that would not work well.

Another issue is that the docker filter only supports Docker Hub, but I don't think it is that much work to support other registries, at least for anonymous access.

Remember that I did start looking into using some more expressive pipeline language (jq?) that could solve some of these issues :)

@wader
Member Author

wader commented Oct 19, 2022

Docker registry support is now in bump: wader/bump#84. Still no support for digest; that will need some thinking.

$ bump pipeline 'docker:ghcr.io/ffbuilds/static-libvmaf-alpine_edge'
main

@binoculars
Member

Nice! I know dependabot has some support for digests, but it's not smart enough to look outside of the Dockerfile. In this case, I want to fetch the multi-arch image / manifest list instead of an individual amd64 image, which dependabot handles correctly, but without support for the digest in ARGs / outside of the Dockerfile.

https://github.com/dependabot/dependabot-core/blob/main/docker/lib/dependabot/docker/update_checker.rb

@wader
Member Author

wader commented Oct 19, 2022

Interesting comment at the beginning of the file, thanks for the link. How do you know if an image uses a manifest list or a traditional manifest? Do you have to ask docker? Maybe for bump it could be done in two steps somehow: bump detects the change, then an external command gets the correct digest? I think I need to look at a concrete example to understand this :)

@binoculars
Member

binoculars commented Oct 19, 2022

https://docs.docker.com/engine/reference/commandline/manifest/ is what I used as a reference to generate the manifest list, if that helps at all. As an example with dependabot: ffbuilds/static-libaom#2 is one I merged and then changed later to use the matrix builds for different alpine versions.

@wader
Member Author

wader commented Oct 24, 2022

Hi again, do you plan on having tags per ffmpeg version somehow, in addition to main? Some people have asked me about 32-bit arm builds and I would like to point them to ffbuilds, but I think they would probably like to have per-ffmpeg-version tags somehow. Or would the recommendation be to use a digest instead?

@binoculars
Member

For now I would use the image digest. I'm trying to think of good ways to do the tagging with all of the versions, but haven't thought of any yet. Maybe something that involves hashing all of the library names and versions together as the tag?
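A minimal sketch of that hashing idea (the library list, separator format, and 12-character truncation are all made-up illustrations, not an agreed scheme):

```shell
# Hypothetical sketch: derive a deterministic tag by hashing the
# sorted set of library name=version pairs.
libs="libx265=3.5
libaom=3.5.0
libvmaf=2.3.1"
# sort first so the tag does not depend on listing order
tag=$(printf '%s\n' "$libs" | sort | sha256sum | cut -d' ' -f1 | cut -c1-12)
echo "$tag"
```

The same set of versions always yields the same tag, and any bump to any library produces a new one; the downside is that the tag is opaque without a lookup table.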

@binoculars
Member

Hey, I'm having some issues with bump. Getting a 403 when trying to open a PR. Can you advise? https://github.com/ffbuilds/static-libx265/actions/runs/3344292559/jobs/5538477027

Also, I've created a discussions section in the ffbuilds org. That's probably the best place to reach me regarding this topic, or just @ me anywhere on GH if it's relevant.

@wader
Member Author

wader commented Oct 28, 2022

Hey, I'm having some issues with bump. Getting a 403 when trying to open a PR. Can you advise? https://github.com/ffbuilds/static-libx265/actions/runs/3344292559/jobs/5538477027

Hmm, that is strange. Do you have access to the token used, and can you try it with curl etc.? Could it have the wrong scope etc.?

Looking at the code, it seems to fail here: https://github.com/wader/bump/blob/master/internal/githubaction/githubaction.go#L262. A bit above that, I think it successfully used the same token to list pull requests, strange.

Also, I've created a discussions section in the ffbuilds org. That's probably the best place to reach me regarding this topic or just @ me anywhere on GH if its relevant

👍 Yep, let's close this issue and continue in per-topic issues etc.?

@binoculars
Member

No, I don't have the token either. Going to try adding permissions: pull-requests: write.
