fish code in the background #563
There are two closely related issues that would require solving: stopping and resuming a block of fish code with ctrl-Z, and running fish code in parallel, e.g. in pipelines or in the background. There are (as far as I have thought out) three options:
Forking (option 1) is what bash and similar shells do, but they only use it to solve the second problem, not the first. Using this option in fish could, I think, be done: when a ctrl-Z arrives fish forks, the parent handles user input and the child maintains the fish script state. This option is rather hairy. First, it needs some way to synchronize internal state such as variables between the two processes, and second, it needs some way to have the new child process take over all external resources such as file descriptors and other child processes. This does not seem like a very practical approach to me, but I still wanted to name it.

In option 2, fish would always execute shell code on a secondary thread. On a ctrl-Z this thread could be suspended and a new thread started to process user input. This would require making the evaluator and most of fish's internals thread safe.

In option 3, the evaluator (parser and related things) would be refactored so that multiple instances of it can exist at the same time, with all state stored in an object. If a ctrl-Z arrived, the evaluator would unwind without destroying its state, so that it could later be resumed. To have fish code run in parallel within this option, there are again several possibilities: resume the stopped evaluator in a forked child (which has the same problems as option 1), resume it in a different thread (which also requires making fish's internals thread safe), or interleave multiple evaluators: fish would run one evaluator for some time, then the other, then the first one again, and so on.

I think either option 2 or 3 (or some combination) would need to be chosen, but I'm not sure which would be best.
The parser is already instanced, and in fact we use that when loading completions. A stackless-type parser, one that doesn't depend on the C stack for storing its state, seems achievable to me. Such a parser could be driven externally ("do some work"). The big win here would be sane signal handling: cancellation would be handled by just stopping doing work, without needing to unwind the C stack. The biggest obstacle I see to a fully multithreaded shell is that there really is per-process state out of our control, such as the current working directory or the process that owns the terminal.
Great, so the parser is in fact already multithread safe! I must say I don't immediately see the advantage of the 'sane signal handling' you mention. On a ctrl-C the execution of code is cancelled, so why is it a problem to unwind the stack? That has to happen one way or another; now it happens by explicitly unwinding the C stack, but in a stackless parser the stored stack frames would have to be deleted in much the same way. The two examples of kernel-controlled state you give seem solvable.
Handling signals by unwinding the C stack might be OK. I think it's more the current haphazard design that I dislike. I guess the real win would be centralizing SIGINT handling so it's easier to reason about. Regarding the kernel-controlled state, consider a command where a job running in parallel depends on the process-wide working directory.
The fish process obviously cannot have more than one working directory, but it can open the directories that other threads use as working directories using the normal directory open calls. Then, if the second thread needs to do things relative to its working directory, fish can use openat/fstatat/etc. to open files relative to the directory file descriptor. When the second thread needs to spawn an external program, the forked child can fchdir to that descriptor before exec, leaving the parent's working directory untouched.
openat doesn't appear to be implemented on OS X. I see it in Linux.
Hmm, bummer. The manpage says it's part of POSIX, so I figured it would be portable. But apparently not.
Revisiting this, being able to safely run shell code on other threads would be a total coup. Fixing up stuff like the working directory after fork is plausible. It's certainly worth exploring.
I would absolutely love to have this feature in fish. It's one of the most important features of the other shells I've used!
Another thought I had: background processes may be implemented with a sort of queue / barrier mechanism. This is very bash-like, and would be an alternative to a fully threaded model.
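One possible reading of that mechanism (my own sketch; the details of the original comment were lost here): fish-code jobs would be queued as background jobs and the shell would only join them at explicit barriers, much like bash's `wait`.

```shell
# Hypothetical sketch in bash: enqueue background jobs, then hit a barrier.
slow_job() {
    sleep 1
    echo "job $1 done"
}

slow_job 1 &     # enqueue
slow_job 2 &     # enqueue
wait             # barrier: block until every queued job has finished
echo "all jobs joined"
```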
I've also thought about that, and there are several things that came to mind that argue against it. So IMO the 'right' way forward is making the interpreter multithreaded.
At first, I thought it would be weird to allow setting a variable from a background task: it would just sort of change at some point in the future. But now I think it makes sense at least for global variables. You might want to spawn off a slow background task that reports data back when it's done, and one mechanism for that could be setting a variable. We will have to take care that things like redefining functions while they are executing do not cause a crash.
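A quick bash illustration (my own example, not from the thread) of why this only becomes possible with a threaded model: under fork-based backgrounding, an assignment made in a background job never reaches the parent shell.

```shell
# The background job runs in a forked subshell, so it gets its own copy
# of the variable; the parent never sees the assignment.
status_var=pending
{ sleep 1; status_var=done; } &
wait
echo "$status_var"   # prints "pending", not "done"
```

With the evaluator on a thread sharing the variable store, the same pattern could genuinely report data back.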
(Documentation for fish-shell#238, fish-shell#563)
FWIW not the case anymore.
The manpage confirms it.
Judging from the full list of syscalls newly documented in OS X 10.10, which includes the *at() family, it would seem these are the first new public syscalls added since 10.5.
I'm closing this as a duplicate of #238.
This allows us to skip re-wcs2stringing the base_dir again and again by simply using the fd. It's about 10% faster in my testing. fstatat is defined by POSIX, so it should be available everywhere.
Although I find fish much better than bash for everyday use, there's one major feature bash supports but fish does not: running fish code while doing anything else. Specifically, backgrounding a fish function doesn't work, and running fish code as part of a pipeline blocks the whole pipeline.
Try this in bash: define a function that slowly prints output, then run it both through a pipeline and in the background.
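A minimal reconstruction of the kind of snippet meant here (the original was not preserved in this copy, so the function body and timings are assumed):

```shell
# A function that produces output slowly.
foo() {
    for i in 1 2; do
        echo "line $i"
        sleep 1
    done
}

# In bash, each line appears in the pipeline as it is produced...
foo | cat
# ...and this returns to the prompt immediately while foo keeps running.
foo &
wait
```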
If you try the same in fish, you'll notice that in a pipeline the output only arrives after the foo function has exited, rather than as it is produced, and that running a fish function in the background basically has no effect. I run into this sometimes when building pipelines to filter stuff, or when using aliases, for example one that wraps gitg. If I then run `gitg &`, the shell is blocked until I quit gitg. Pressing ctrl-Z while a function, loop, or other compound statement is being executed backgrounds only the current external command and continues the fish code in the foreground.
Ideally ctrl-Z should background the entire block (though this part does not work in bash either). In fact, backgrounding a compound statement that consists only of functions and builtins doesn't work at all.
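For contrast, a hypothetical bash equivalent (my example, not from the original report), where backgrounding a compound statement made only of builtins does work:

```shell
# A block containing only shell builtins, run in the background.
{ i=0; while [ "$i" -lt 3 ]; do i=$((i+1)); done; echo "count=$i"; } &
wait    # in bash the block runs concurrently and finishes here
```

In fish, the analogous construct simply does not run in the background.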
Fixing all of this would require some way for fish to execute multiple pieces of fish shell code in parallel, so it is probably not a simple fix. However, I think this issue should be discussed so we can decide how to move forward with fixing it at some point in the future. Fish's parser could really use an overhaul (see also bug #557), and if/when that happens this issue should also be taken into consideration.