I'm writing a small bash script for deploying a bunch of files to a specific remote directory. The process is as follows:
- Mount remote directory (sshfs)
- Remove everything in the remote directory except a few files that should stay
- Copy all the files from the local directory to the remote one.
I suddenly realized that if something went wrong mounting the remote directory, for example if I'd lost my internet connection, this script would remove everything from the directory I'm currently in, which could easily be disastrous. So, what would the correct procedure be?
Since you don't provide your script, I'm going to guess a bit at what it looks like. There are likely several points for improvement.
The first and most important point is to prevent accidental local access by design: use explicit paths to your mounted files instead of `cd`'ing into a directory that might not exist, as in the sketch below. If you don't want to use `/mnt` as a mount point, consider creating a temporary directory using `mktemp -d`.
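A minimal sketch of that approach, assuming `sshfs` on Linux; `user@host:/remote/dir` and the file handling are placeholders, not details from your question:

```bash
#!/bin/bash

# Fresh, guaranteed-empty mount point; safer than reusing a directory
# that might already contain files.
mnt_dir=$(mktemp -d)

# Mount the remote directory (user@host:/remote/dir is a placeholder).
sshfs user@host:/remote/dir "$mnt_dir"

# Operate on explicit paths under the mount point instead of cd'ing there.
# (Keeping your "few files" that should stay is omitted for brevity.)
rm -- "$mnt_dir"/*
cp -- ./* "$mnt_dir"/

# Unmount and clean up (fusermount -u unmounts FUSE filesystems on Linux).
fusermount -u "$mnt_dir"
rmdir "$mnt_dir"
```

If the mount fails here, `"$mnt_dir"` is just an empty temporary directory, so the `rm` has nothing local to destroy.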
Next, and almost as important: abort your script on error. Writing `set -e` at the top of the script will accomplish that: if your `mount` fails, no other commands will be run. To navigate around the shortcomings of `set -e` in certain cases, you might also consider `exit`'ing explicitly if an important command failed, as in the sketch below (thanks to tjm3772 for bringing up the issue).
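For example (again with placeholder names):

```bash
#!/bin/bash
set -e   # abort the whole script as soon as any command fails

mnt_dir=$(mktemp -d)

# set -e is suppressed for commands tested in an if condition or in a
# && / || list, so guard the critical step with an explicit exit as well:
if ! sshfs user@host:/remote/dir "$mnt_dir"; then
    echo "mounting failed, aborting" >&2
    exit 1
fi
```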
You can test whether a directory actually is a mount point using the `mountpoint` command, which fails on a plain local directory:
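```bash
# mountpoint exits non-zero for a plain local directory, so this aborts
# unless something really is mounted at $mnt_dir (-q suppresses output).
if ! mountpoint -q "$mnt_dir"; then
    echo "$mnt_dir is not a mount point, aborting" >&2
    exit 1
fi
```

(`mnt_dir` is the same placeholder as in the sketches above.)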
I hope some of these suggestions help.