libfuzzer ubsan build: BAD BUILD: UBSan build of <fuzzer> seems to be compiled with ASan. #5317
Comments
Just checking: you can't reproduce this by building the fuzzers with UBSan locally and then running check_build?
Yeah, I tried when I commented on the issue (built them and ran check_build), but I'll retry right now as well.
I just tried to repro this locally and couldn't either. Very weird.
That would be bizarre, since I notice that most of the vars (e.g.
Anyway, I'm trying to do a build on Google Cloud Build with only UBSan. If it succeeds, I think it will be evidence for my state theory.
Also doing another build that will skip testing.
Thanks!
I suspect there's some kind of state issue. @oliverchang Any ideas what could be happening here?
Looking at the latest failed build, https://oss-fuzz-build-logs.storage.googleapis.com/log-ea90f23b-921e-496b-9285-85faa267cb44.txt, it seems it's expected to run the address job and then the undefined job sequentially.
AND
When I scroll to the second job's log I see a bunch of failures:
and I also see
It looks like bad build checks are performed on targets in … Throwing shots in the dark here, but maybe if they went to … I'm skeptical that's the issue, because everyone running multiple sanitizers with build checks would be seeing it. Edit: don't think so.
I also ran some local experiments. If I build an Envoy fuzzer locally without sanitizers (just as a regression test), I find that there are 2 calls to __asan_poison_memory_region each.
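The counting experiment above can be sketched in shell. This is a hypothetical illustration, not the actual bad_build_check logic: `count_asan_refs` is an assumed helper name, and a synthetic symbol listing stands in for real `nm <fuzzer>` output.

```shell
#!/bin/sh
# Hypothetical sketch: count ASan symbol references the way a bad-build
# check might. count_asan_refs is an assumed helper, not bad_build_check
# code; it greps whatever symbol listing it receives on stdin.
count_asan_refs() {
  grep -c '__asan_' || true
}

# Synthetic listing standing in for `nm <fuzzer>` output:
printf '%s\n' \
  '__asan_poison_memory_region' \
  '__asan_unpoison_memory_region' \
  '__ubsan_handle_add_overflow' \
  | count_asan_refs
```

A pure-UBSan build would be expected to show a count near zero here, which is why a nonzero count trips the check.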
Sorry for the crazy amount of spam; I'm trying to track down where the state sharing is coming from. I just want to make sure that
isn't causing some issue. Do you think it's possible that the oss-fuzz packages aren't being reset in the second run and we're copying the old ones? (We only clear the bazel-* directories, not the Bazel cache, at the end of the build script.) If so, we could recreate $STAGING_DIR here: Are other rules_fuzzing projects with multiple sanitizers running OK? Cel-cpp seems fine, but I don't see those lines in their log.
Hmmm, I think I know what's going on: the … I think the fix is easy: just remove the staging dir before creating it in the action script. Let me fix this now.
I think the other projects would not be affected because they use the default (sandboxed) execution model, which performs this cleanup automatically.
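The proposed fix can be sketched as follows. The `$STAGING_DIR` name comes from the earlier comment; the exact script contents are an assumption, and a temp dir stands in for the real staging directory.

```shell
#!/bin/sh
# Sketch of the proposed fix: wipe the staging dir before recreating it,
# so a non-sandboxed build can't pick up artifacts left by a previous
# sanitizer's run. A temp dir stands in for the real $STAGING_DIR.
STAGING_DIR=$(mktemp -d)/staging

mkdir -p "$STAGING_DIR"
echo "asan-instrumented.o" > "$STAGING_DIR/stale"   # leftover from run 1

# Run 2 starts by removing and recreating the staging dir:
rm -rf "$STAGING_DIR"
mkdir -p "$STAGING_DIR"

if [ -e "$STAGING_DIR/stale" ]; then echo "stale artifact present"; else echo "clean"; fi
```

Under sandboxed execution Bazel provides each action a fresh directory, which is why only non-sandboxed (e.g. local/standalone) execution needs this explicit cleanup.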
AH! Totally didn't think of that flag. Thank you! I can bump the version in Envoy!
Great! The fix is submitted and you can now use the new release in Envoy: https://github.com/bazelbuild/rules_fuzzing/releases/tag/v0.1.3
I'm confused about this. I feel like the container we run the builds in should be torn down and restarted fresh, so I can't understand how state is persisting.
Is each sanitizer configuration built in a separate container? If yes, is there any external storage that is mounted and persisted across runs? (IIRC, this is the case for local runs using infra/helper.py.)
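For illustration, the failure mode being asked about (a host directory mounted into successive containers) behaves like a shared directory across otherwise-isolated processes. This is a pure-shell stand-in, not the actual helper.py behavior; the subshells simulate separate container runs.

```shell
#!/bin/sh
# Simulated cross-run persistence: two "container runs" are modeled as
# subshells sharing one host directory, the way a bind-mounted /out or
# cache directory is shared across otherwise-fresh containers.
shared=$(mktemp -d)

( echo "state-from-run-1" > "$shared/state" )   # first run writes state
( cat "$shared/state" )                         # second run still sees it

rm -rf "$shared"
```

If any such directory survives between the ASan and UBSan jobs, artifacts from the first job can leak into the second.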
Maybe things like /work are persisted, but from what I've been told by @oliverchang, nothing should be persisted.
If somehow the bazel cache (
On our build infra (Cloud Build), only
Huh! Only the binaries are written to out. The cache is not in
I haven't confirmed this issue was fixed by #6069 but I suspect it is. |
[infra][build] Set HOME=/root on GCB when doing fuzzer builds. GCB passes HOME as an env var to the Docker container. It sets HOME to /builder/home, which is persisted across builds. This caused build breakages in #6035 and possibly #5317. Perhaps more insidiously, it can cause fuzzers to be built with the wrong instrumentation.
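The link between a persisted HOME and a persisted Bazel cache can be sketched like so. The path pattern used is Bazel's documented default output user root on Linux (`$HOME/.cache/bazel/_bazel_$USER`); the helper function is ours, purely for illustration.

```shell
#!/bin/sh
# Sketch: Bazel's default output user root lives under $HOME, so when GCB
# persists HOME (/builder/home) across builds, the Bazel cache persists
# too. default_output_root is an illustrative helper, not a Bazel API.
default_output_root() {
  # $1 = HOME, $2 = user name
  echo "$1/.cache/bazel/_bazel_$2"
}

default_output_root /builder/home root   # persisted across GCB builds
default_output_root /root root           # wiped with the container
```

With HOME=/root, the cache dies with the container, so the UBSan job can no longer reuse ASan-instrumented outputs from the previous job.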
Is this still an issue?
Related: #4743
https://oss-fuzz-build-logs.storage.googleapis.com/log-6cb883f9-5026-4454-bc7c-a878a93962fb.txt
After fixing unrelated build issues, Envoy has been failing bad build checks on 100% of fuzzers.
My hope was to reproduce this locally and determine the number of ASan calls so I could check what's happening, but I can't reproduce it locally. If the count is small, I'm hoping the threshold can be increased, but I don't know what it should be.
oss-fuzz/infra/base-images/base-runner/bad_build_check
Line 32 in 96ae2ed
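The line referenced above sits in a threshold-style check. A hedged sketch of that shape follows; the function name and the THRESHOLD value are assumptions for illustration, not bad_build_check's actual code.

```shell
#!/bin/sh
# Hypothetical sketch of a threshold check: flag a UBSan build as
# "compiled with ASan" when its ASan call count exceeds a threshold.
# THRESHOLD's value here is an assumption, not the real constant.
check_ubsan_build() {
  asan_calls=$1
  THRESHOLD=0
  if [ "$asan_calls" -gt "$THRESHOLD" ]; then
    echo "BAD BUILD: UBSan build seems to be compiled with ASan."
    return 1
  fi
  echo "OK"
}

check_ubsan_build 2 || true
```

If legitimate UBSan builds pull in a handful of ASan symbols (as the local experiment above suggests), raising the threshold slightly would be one way to avoid false positives.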