When attempting to retrieve a file that was backed up to a DigitalOcean S3-compatible instance using these commands...
$ source "$HOME/.duplicity/.env_variables.conf"
$ sudo duplicity --verbosity notice --encrypt-sign-key=$GPG_KEY --log-file ~/.duplicity/info.log --file-to-restore <path to file> s3://sfo2.digitaloceanspaces.com/<my server> <path to file>
...I get this error...
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials
...even though as of last night the nightly backups are still successful using this bash script...
#!/bin/bash
HOME=<my home path>
source "$HOME/.duplicity/.env_variables.conf"
...
sync_results=`duplicity \
--verbosity notice \
--asynchronous-upload \
--encrypt-sign-key="$GPG_KEY" \
--log-file "$HOME/.duplicity/info.log" \
/srv/samba/share \
s3://sfo2.digitaloceanspaces.com/<my server>`
...
A month ago that retrieval command was successful, so something has changed since then. The error points to an issue with credentials, but both the command and the bash script source the same .env file that contains the credentials.
All my research so far points to this being a boto issue, but until now I haven't had to touch boto to make things work correctly.
Any ideas?
PS: configuration details...
Local machine: Ubuntu 20.04 LTS, duplicity 0.8.12
DigitalOcean: just a Space
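For context, the env file sourced above exports the credentials. Its exact contents are not shown here, so the following is a hypothetical sketch; the AWS_* names are the standard variables the boto-based backend reads, and the placeholder values are assumptions:

```shell
# ~/.duplicity/.env_variables.conf -- hypothetical sketch, not the actual file.
# boto (and boto3) pick up the standard AWS credential variable names:
export AWS_ACCESS_KEY_ID="<Spaces access key>"
export AWS_SECRET_ACCESS_KEY="<Spaces secret key>"
# Referenced by the duplicity invocations in this post:
export GPG_KEY="<gpg key id>"
export PASSPHRASE="<gpg passphrase>"
```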
EDIT: mis-referenced cloud provider
EDIT2: stupid me should have also removed mis-referenced cloud info
This might be caused by an upgrade to duplicity 0.8.23, which changed the default S3 backend from the outdated boto to the recent and maintained boto3. That changed how S3 access, especially with non-AWS endpoints, needs to be set up.
You can check whether using
`boto+s3://` mitigates the issue. If so, you may decide to stick with it, or adapt to `boto3+s3://`, which is now the default for the alias `s3://`. The current 0.8.23 man page reads: