I have a script that transfers some files via ssh. I usually start the script, and once I'm sure it is running okay I halt it using CTRL-Z, then make it run in the background with bg.
> ./download-script.sh
Downloading...
Got file foobar.txt
Got file baz.txt
Downloading bash.txt (42%)
[2]+ Stopped download-script.sh
> bg
[2]+ download-script.sh &
>
How is this safe? It seems like the server sending the file doesn't know to wait for my process to come back online, does it?
What if I waited for an hour and then resumed the script in the background, would it continue where it left off?
My example uses an ssh file transfer, but the same concern comes up whenever my script interacts with just about any external resource.
I/O buffers will help it withstand a little delay (i.e., it will not barf if you suspend the script/command for just a few seconds, at most). But beyond a few seconds, I think you would probably run into other problems: TCP/UDP timeouts between origin and destination, I/O timeouts (e.g., taking too long to enter a password), etc.
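For the ssh case in the question, what usually decides whether a long suspension survives is the keepalive configuration on both ends, because a stopped client can no longer answer the server's protocol-level probes. A minimal sketch of the relevant OpenSSH options (the values are only illustrative, not a recommendation):
# ~/.ssh/config (client side) - only matters while the client is actually running
Host *
    ServerAliveInterval 30     # probe the server after 30 s of silence
    ServerAliveCountMax 3      # give up after 3 unanswered probes
# /etc/ssh/sshd_config (server side) - this is what kills a suspended client:
# after roughly ClientAliveInterval * ClientAliveCountMax seconds without a
# reply, the server drops the connection and the transfer cannot resume.
ClientAliveInterval 60
ClientAliveCountMax 3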
If you are only dealing with "local" things and there is no timeout built into the commands you use, it will work even if you wait a loong time. For example, if you run:
tar cvf something.tar /path/to/something
and then CTRL-Z it, and later bg it (to wake it up and send it to the background) or fg it (to wake it up and bring it to the foreground), it will just pick up where it left off.
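As a rough illustration (the archive name and paths are made up), the whole sequence looks like this:
> tar cvf something.tar /path/to/something
/path/to/something/file1
/path/to/something/file2
^Z
[1]+  Stopped                 tar cvf something.tar /path/to/something
> jobs        # the stopped job keeps its state for as long as the shell lives
[1]+  Stopped                 tar cvf something.tar /path/to/something
> bg          # or fg; tar simply continues from where it was stopped
[1]+ tar cvf something.tar /path/to/something &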
HOWEVER, the longer it sits suspended, the greater the chance that one of the files being tarred gets modified in the meantime (a way to check for that afterwards is sketched below).
Or your shell could have a TIMEOUT/TMOUT set, making it exit before you get a chance to resume the job.
Or any number of other reasons, really: a power cut, your cat stomping on CTRL+D and exiting the shell, etc.
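If you want to check for those first two failure modes afterwards, a minimal sketch (using the made-up archive name from above, and assuming GNU tar run from the same directory):
tar --compare -f something.tar    # a.k.a. tar -d: lists files that changed since they were archived
echo "${TMOUT:-not set}"          # bash/ksh idle timeout in seconds; unset or 0 means the shell won't time out on you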
In other words: you can, unless something on the other end relies on it being "fast".