I have managed to make my bash prompt lazy-load some components to give the feel of async rendering. The lazy-loading function runs as a background process with (render_async &).
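For context, the shape of the setup is roughly this (a minimal sketch; prompt_command, the stub body of render_async, and the redraw plumbing are placeholders for my real code):

render_async() {
    # the slow work lives here, e.g. querying git status,
    # after which the prompt segment is redrawn
    sleep 2
}
prompt_command() {
    PS1='\w \$ '        # fast part, drawn immediately
    (render_async &)    # slow part, rendered asynchronously
}
PROMPT_COMMAND=prompt_command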
From the screencast below you can see how the prompt loads part of itself right away and "lazy loads" the rest. However, what I noticed is that if you change directory before the async part has loaded, the new prompt gets overwritten with the wrong context.
My thought process for fixing this is the following: the jobs command prints (wd: ...) next to background jobs whose working directory differs from the shell's current one. I added a sleep to my async command to simulate the slow part, and I can see the background processes hanging:
[1] Running render_async &
[2]- Running render_async & (wd: ~/Projects/My Personal Space/blog-core)
[3]+ Running render_async &
I went ahead and wrote:
jobs | grep 'render_async.*wd:' | cut -d "[" -f2 | cut -d "]" -f1 | while read -r line ; do
kill "%$line"
done
Which in theory should parse the IDs of the background jobs started in other directories and kill them. In practice, however, for the example above I keep getting kill: %2: no such job. When I execute the same kill command in the shell itself, it works perfectly.
Would appreciate any help here.
I have also tried naming my forked process (naming each async process by appending the CWD to the function name), inspired by https://askubuntu.com/questions/153900/how-can-i-start-a-process-with-a-different-name where:
bash -c "exec -a MyUniqueProcessName <command> &"
exec replaces the current shell, so no new process is created; that's why a new shell is started to call exec. Then you can kill the process with:
pkill -f MyUniqueProcessName
You can start more than one process under the same name, and pkill -f will kill all of them.
but that kept telling me that the async function I was trying to pass as <command> is not found. I suspect this is because it is a custom shell function and bash -c forks a new bash process, but I am not an expert here.
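That suspicion matches how bash behaves: a child bash process only sees a shell function if the function has been exported first. A minimal demonstration (render_async reduced to a stub here):

render_async() { sleep 2; echo rendered; }
bash -c 'render_async'     # bash: render_async: command not found
export -f render_async
bash -c 'render_async'     # found now: prints "rendered" after 2 seconds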
Thanks
Piping creates an implicit subshell, so jobs from the parent are not visible inside the piped-to while loop where the kill is running. How about this, which will at least kill the first job?
job="$(jobs | grep 'render_async.*wd:' | cut -d "[" -f2 | cut -d "]" -f1)"
kill "%$job"
In my tests, the $(...) happens in the current shell, at least initially, so the job table is visible. Example:
$ cat &
[1] 8296
$ jobs
[1]+ Stopped cat
$ echo "$(jobs| cut -d "[" -f2 | cut -d "]" -f1|head -1)"
1
(By the way, does the job you want to kill always have the same job number? Can you just hardcode %2, or whatever it may be?)
Edit (multi-job version):
joblist="$(jobs| sed -E 's/^[^0-9]*([0-9]+).*$/\1/'|tr '\n' '@')"
# E.g., 1@2@
IFS='@' # Split on @
for job in $joblist # No double-quotes!
do
kill "%$job"
done
Working example from a shell prompt:
$ cat&
[1] 8824
$ cat&
[2] 6452
[1]+ Stopped cat
$ jobs
[1]- Stopped cat
[2]+ Stopped cat
$ joblist="$(jobs| sed -E 's/^[^0-9]*([0-9]+).*$/\1/'|tr '\n' '@')"
$ IFS=@
$ for job in $joblist; do kill "%$job" ; done
[1]- Stopped cat
[2]+ Stopped cat
$ jobs
[1]- Terminated cat
[2]+ Terminated cat
$ jobs
$
I chose @ as a separator because I don't think it means anything particular to the shell. I don't know if it makes a difference in the PS1-function environment.
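If it does matter, one way to sidestep the IFS juggling entirely (a sketch, not tested inside PROMPT_COMMAND) is to feed the loop through process substitution instead of a pipe, so the kill runs in the current shell where the job table lives; the grep from the question can be reinstated to target only the stale render_async jobs:

kill_stale_async() {
    local job
    # jobs runs in the process substitution, but the while loop
    # (and therefore kill) runs in the current shell
    while read -r job; do
        kill "%$job" 2>/dev/null
    done < <(jobs | grep 'render_async.*wd:' | sed -E 's/^[^0-9]*([0-9]+).*$/\1/')
}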