I would like to lock a directory while a Bash script is running and make sure it's no longer locked when the script dies.
My script creates a directory from a template, and I want to try deleting it first: if I can't delete it, that means it's locked. If it's not locked, the script should recreate the directory.
rm "$dir_path" > /dev/null 2>&1
if [ -d "$dir_path" ]; then
exit 0
fi
cp -r "$template_dir" "$dir_path"
# Lock directory
#LOCK "$dir_path"
# flock --exclusive --nonblock "$app_apex_path" # flock: bad file descriptor
# When the script ends, the lock should be removed automatically, with no
# cleanup step. This is necessary because, for example after a power
# failure, the dir must not still be locked on the next boot.
I have looked into flock, but it doesn't seem to work like this.
Here’s an example with advisory locking, which works fine as long as all participating scripts follow the same protocol. It relies on mkdir being atomic: of several competing scripts, exactly one can succeed in creating the lock directory.
set -e

if ! mkdir '/tmp/my-magic-lock'; then
    exit 1 # Report an error, maybe?
fi

# Release the lock when the script exits, however it exits.
trap "rmdir '/tmp/my-magic-lock'" EXIT

# We hold the advisory lock now.
rm -Rf "$dir_path"
cp -a "$template_dir" "$dir_path"
As a side note, if I were to tackle this situation, I would simply make $template_dir and $dir_path Btrfs subvolumes and use snapshots instead of copies:
set -e

# Drop the old subvolume and take a fresh copy-on-write snapshot of the
# template; both operations are near-instant regardless of the data size.
btrfs subvolume delete "$dir_path"
btrfs subvolume snapshot "$template_dir" "$dir_path"
This is far more efficient, “atomic” (in a number of beneficial ways), copy-on-write, and also resilient to multiple concurrent replacement attempts of the same kind, yielding a correct state once all attempts finish.
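
One caveat: btrfs subvolume delete refuses to remove a plain directory, so $dir_path must already be a subvolume. A small guard (a sketch; the error message is just illustrative) can catch a mismatched setup early:

set -e

# "btrfs subvolume show" exits non-zero when the path is not a subvolume,
# so this refuses to run the replacement against a plain directory.
if ! btrfs subvolume show "$dir_path" > /dev/null 2>&1; then
    echo "error: $dir_path is not a Btrfs subvolume" >&2
    exit 1
fi

btrfs subvolume delete "$dir_path"
btrfs subvolume snapshot "$template_dir" "$dir_path"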