forked from facebook/sapling
Eden windows ci #8
Closed
Conversation
ahornby force-pushed the eden_windows_ci branch 2 times, most recently from 4e8d639 to 5cb3927 on October 28, 2023 17:38
ahornby force-pushed the eden_windows_ci branch 2 times, most recently from 0da4649 to 837e5c5 on November 5, 2023 16:38
ahornby force-pushed the eden_windows_ci branch from 837e5c5 to fffe296 on November 10, 2023 17:37
ahornby force-pushed the eden_windows_ci branch 2 times, most recently from cdc7131 to 5d73319 on November 24, 2023 14:44
ahornby force-pushed the eden_windows_ci branch 2 times, most recently from 6542bcc to 60b255f on December 19, 2023 18:10
ahornby force-pushed the eden_windows_ci branch 2 times, most recently from b4efffb to 06256bc on December 27, 2023 16:51
ahornby force-pushed the eden_windows_ci branch 2 times, most recently from 0672e03 to 5794b3c on February 6, 2024 17:46
ahornby pushed a commit that referenced this pull request on Jun 8, 2024
Summary: We have discovered a deadlock in EdenFS in S412223. The deadlock appears when all FsChannelThreads have a stack trace that looks like:

```
thread facebook#225, name = 'FsChannelThread', stop reason = signal SIGSTOP
frame #0: 0x00007f7ca85257b9
frame #1: 0x0000000007d73aa4 edenfs`bool folly::DynamicBoundedQueue<folly::CPUThreadPoolExecutor::CPUTask, false, false, true, 8ul, 7ul, folly::DefaultWeightFn<folly::CPUThreadPoolExecutor::CPUTask>, std::atomic>::canEnqueue<std::chrono::_V2::steady_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l>>>(std::chrono::time_point<std::chrono::_V2::steady_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l>>> const&, unsigned long) + 212
frame #2: 0x0000000007d73557 edenfs`bool folly::DynamicBoundedQueue<folly::CPUThreadPoolExecutor::CPUTask, false, false, true, 8ul, 7ul, folly::DefaultWeightFn<folly::CPUThreadPoolExecutor::CPUTask>, std::atomic>::tryEnqueueUntilSlow<std::chrono::_V2::steady_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l>>, folly::CPUThreadPoolExecutor::CPUTask>(folly::CPUThreadPoolExecutor::CPUTask&&, std::chrono::time_point<std::chrono::_V2::steady_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l>>> const&) + 39
frame #3: 0x0000000007d722c0 edenfs`facebook::eden::EdenTaskQueue::add(folly::CPUThreadPoolExecutor::CPUTask) + 576
frame #4: 0x0000000005172cbd edenfs`folly::CPUThreadPoolExecutor::add(folly::Function<void ()>) + 557
frame #5: 0x000000000503c4d8 edenfs`void folly::Executor::KeepAlive<folly::Executor>::add<folly::Function<void (folly::Executor::KeepAlive<folly::Executor>&&)>>(folly::Function<void (folly::Executor::KeepAlive<folly::Executor>&&)>&&) && + 216
frame #6: 0x000000000503c257 edenfs`folly::futures::detail::DeferredExecutor::addFrom(folly::Executor::KeepAlive<folly::Executor>&&, folly::Function<void (folly::Executor::KeepAlive<folly::Executor>&&)>) + 135
frame #7: 0x000000000503baf5 edenfs`folly::futures::detail::CoreBase::doCallback(folly::Executor::KeepAlive<folly::Executor>&&, folly::futures::detail::State) + 565
frame #8: 0x00000000052e5e0d edenfs`folly::futures::detail::CoreBase::proxyCallback(folly::futures::detail::State) + 493
frame #9: 0x00000000052e5bb2 edenfs`folly::futures::detail::CoreBase::setProxy_(folly::futures::detail::CoreBase*) + 66
...
```

This stack means that an FsChannelThread is blocked waiting to enqueue a task to the EdenTaskQueue. The EdenTaskQueue is the task queue that FsChannelThreads pull work from. So the deadlock is that all FsChannelThreads are blocked trying to add to a full queue which can only be emptied by FsChannelThreads.

There are two contributing reasons for this happening:

1. The FsChannelThread will queue a bunch of network fetches into the Sapling queue. When those all finish, the future callback chain runs on the FsChannelThreads. So the backing store accumulates a bunch of work and then, when the fetches complete, dumps a bunch of tasks onto the FsChannelThread's queue: the backing store is filling up the FsChannelThread task queue. Other threads could theoretically do this too, but the backing store is the main one I have seen.

2. The FsChannelThreads might enqueue to their own queue. Folly futures have some smarts to try to prevent a task running on executor A from enqueueing work onto executor A (and instead just run the callback inline), see: https://www.internalfb.com/code/fbsource/[c7c20340562d2eab5f5d2f7f45805546687942d9]/fbcode/folly/futures/detail/Core.cpp?lines=147-148. However, that does not prevent a future that is unknowingly running on an executor's thread from enqueueing to that thread's executor queue.
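The failure mode can be modeled in a few lines. This is a minimal, hypothetical sketch (a plain bounded queue standing in for the EdenTaskQueue; `task_queue` and `deadlocked` are illustrative names, not Eden's code): once producers fill the bounded queue, a consumer thread that tries to enqueue follow-up work onto its own queue can never make progress, because only that consumer could drain the queue.

```python
import queue

# Hypothetical model of the deadlock: a bounded task queue whose only
# consumers are the same threads that also try to enqueue into it.
task_queue = queue.Queue(maxsize=2)  # stands in for the bounded EdenTaskQueue

# Another producer (modeling the backing store) fills the queue first.
task_queue.put("fetch-callback-1")
task_queue.put("fetch-callback-2")

# Now the "FsChannelThread" tries to enqueue follow-up work to its own queue.
# A blocking put() here would never return, because this thread is also the
# only consumer; put_nowait() makes the stuck state observable instead.
try:
    task_queue.put_nowait("follow-up-task")
    deadlocked = False
except queue.Full:
    deadlocked = True  # a blocking put would have hung forever

print(deadlocked)  # → True
```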
I believe that kind of thing happens when we do stuff like this: https://www.internalfb.com/code/fbsource/[824f6dc95f161e141bf9b821a7826c40b570ddc3]/fbcode/eden/fs/inodes/TreeInode.cpp?lines=375-376. The outer lambda is aware which executor it's running on, but the future we start inside the lambda is not, so when we add that thenValue, it doesn't realize it's enqueueing to its own executor.

I wrote up this toy program to show when folly will enqueue vs run inline: https://www.internalfb.com/intern/commit/cloud/FBS/bce3a906f53913ab8dc74944b8b50d09d78baf9a. Script: P1222930602, output: P1222931093. This shows that if you return a future from a future callback, the next callback is enqueued; and if you have a callback on a future returned by another future's callback, the callback on the returned future is enqueued.

So in summary: the backing store fills up the FsChannelThread work queue, then an FsChannelThread tries to push work onto its own queue and deadlocks.

Potential solutions:

**1 - Queued immediate executor.** This behavior was likely introduced here: D51302394. We moved from the queued immediate executor to the FsChannelThreads. The queued immediate executor uses an unbounded queue: https://www.internalfb.com/code/fbsource/[c7c20340562d2eab5f5d2f7f45805546687942d9]/fbcode/folly/executors/QueuedImmediateExecutor.h?lines=39, so that is why we didn't have the problem before. I don't love going back to this solution because we are moving away from the queued immediate executor precisely because it makes it easy to cause deadlocks, for example the one introduced in D50199539.

**2 - Make the queue error instead of block when full.** We used to do that for the Eden CPU thread pool, and it resulted in a lot of errors from Eden that caused issues for clients. See: D6490979. I think we are kicking the can down the road if we error rather than block.

**3 - Don't bother switching to the FsChannelThreads to complete the FUSE request.** This is likely going to be the same as 1, unless we are going to undo the semifuture-ization we have been doing. Or perhaps we could start the FUSE request on the FsChannelThreads, then finish it on the Eden CPU threads, which is pretty much the same thing as this diff except with more sets of threads involved. So I prefer this change.

**4 - Add backpressure somewhere else.** If we prevent the backing store/other threads from being able to fill up the FsChannelThread queue, then it should be impossible for the queue to fill up, because there would be no fan-out (one task out, then one task in). However, this is fragile: we could easily introduce fan-out again and end up back here. Also, this would mean we basically block all requests in the FUSE layer instead of the lower layers of Eden, and we would need to redo the queueing in the backing store layer. The fragility and complexity make me dislike this solution.

**5 - Linearize all future chains.** The reason the FsChannelThreads are enqueueing to their own queue is that we have nested futures. If we didn't nest, folly futures would run the callbacks inline, so if we de-nest all our futures this issue should theoretically go away: the FsChannelThreads would not enqueue to their own queue. However, I don't know if we can completely get rid of returning a future in a future callback. I don't love this solution as I know there are some explicit places where we choose to nest (I'm looking at you, PrjFSDispatcher), so it would be VERY confusing where I am supposed to nest and where I am not; it would be easy to do the wrong thing and re-introduce this bug. Also, this is a ton of places we need to change and they are not easy to find. So I don't like this option because it's not very "pit of success": too fragile and too easy to get wrong the first time.

**6 - Unbound the FsChannelThread queue.** It's not great, but there is precedent for this.
We unbounded the Eden CPU thread pool for basically the same reason. See D6513572. The risk here is that we are opening ourselves up to OOM: the queue might grow super duper large and then get Eden OOM-killed. We should probably add a config to this change so we can roll it out carefully and watch for OOMs as we roll out. Additionally, long term we likely want to rethink how we do threading, to architect Eden away from this. I prefer this solution the most, and it's what I have implemented here.

---------------------------

Note: I am removing the limit on the number of outstanding fs requests we process at once in this diff. That config was not exactly working how we wanted anyway: queueing in the backing store let us handle essentially infinite requests at once, as the Sapling request queue does not have a max size. I can follow up with a semaphore in the fuse/nfs layers to rate-limit the number of active requests. Though fwiw, when I do that I will likely bump the limit a lot, at least initially, since we were realistically allowing clients to do infinite requests previously.

Reviewed By: jdelliot, genevievehelsel

Differential Revision: D56553375

fbshipit-source-id: 9c6c8a76bd7c93b00d48654cd5fc31d1a68dc0b2
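The trade-off behind the chosen option can be illustrated with a hypothetical Python stand-in (not Eden's actual queue type): with no bound, a consumer enqueueing to its own queue can never block, so the self-enqueue deadlock disappears, but nothing caps memory growth, which is the OOM risk called out above.

```python
import queue

# Sketch of option 6: an unbounded queue (maxsize=0, the default).
# A consumer thread enqueueing follow-up work to its own queue can no
# longer block, so the self-enqueue deadlock cannot occur -- but the
# queue can now grow without bound if producers outpace consumers.
unbounded = queue.Queue()  # maxsize defaults to 0, meaning "no bound"

# A burst far larger than any fixed capacity is absorbed without blocking.
for task_id in range(10_000):
    unbounded.put_nowait(task_id)

print(unbounded.qsize())  # → 10000
```

This is why a rollout config plus OOM monitoring is the sensible companion to removing the bound: the queue depth becomes the thing to watch instead of the thing that blocks.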
One of my previous GCC 13+ fixes broke the Windows build internally, which Genevieve was kind enough to fix, but I couldn't see it in OSS as GitHub didn't have a Windows build, so I've added CI for it. I found a few other problems with the Windows cmake build along the way. Here are the build fixes and the CI to show they work:

* Upgrade ninja to fix a problem finding make_fbpy_archive.py on Windows with VS build tools 2019 & 2022.
* Upgrade libsodium to a version with VS 2019 binaries.
* Patch lmdb to build with cmake on Windows, as its makefile is not Windows compatible.
* Add the EdenFS Windows build to CI, to make it visible when this build is breaking.

Test plan:

For make_fbpy_archive.py, test locally in the git bash window with `python build/fbcode_builder/getdeps.py build --clean python-six`. Before: `ninja: error: 'C:/Users/alex/local/sapling/build/fbcode_builder/CMake/make_fbpy_archive.py', needed by 'python-six.lib_install/python-six.manifest', missing and no known rule to make it`. After: works.

For the libsodium upgrade, test locally with `python build/fbcode_builder/getdeps.py build fizz`. Before: fails on not finding a VS 2019 compatible libsodium. After: works.

For the lmdb cmake Windows build, test locally with `python build/fbcode_builder/getdeps.py build lmdb`.
ahornby force-pushed the eden_windows_ci branch from 5794b3c to 51a4785 on September 10, 2024 11:58