If you don’t have (or don’t want) the AWS CLI installed, you can upload files under 5 GB with the following script:
#!/bin/bash
# Size limit of 5 GB per single PUT; for bigger files use the AWS CLI tools
# $1 must be the full path of the file
file=$(basename "$1")
# S3 authentication information
bucket="<NameOfTheAwsBucket>"
s3Key=""<AWSAccessUser>"
s3Secret="<AWSAccessKey>"
awspath="s3.amazonaws.com"
resource="/${bucket}/${file}"
# As we are uploading a backup, it needs to be a tar archive
contentType="application/x-compressed-tar"
dateValue=$(date -R)
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
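# For illustration, the assembled string to sign looks like this
# (date and names below are made-up examples, not real values):
#   PUT\n\napplication/x-compressed-tar\nTue, 27 Mar 2007 19:36:42 +0000\n/mybucket/backup.tar.gz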
# Sign the string with HMAC-SHA1 using the secret key (AWS Signature Version 2)
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)
# Upload from the full path in $1; the original -T "${file}" only worked when
# the script was run from the file's own directory
curl -X PUT -T "$1" \
  -H "Host: ${bucket}.s3.amazonaws.com" \
  -H "Date: ${dateValue}" \
  -H "Content-Type: ${contentType}" \
  -H "Authorization: AWS ${s3Key}:${signature}" \
  "https://${bucket}.${awspath}/${file}"
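For example, assuming the script above is saved as s3uploader.sh (the name the backup loop below expects) and made executable, uploading a single archive looks like this; the path is purely illustrative:

chmod +x s3uploader.sh
./s3uploader.sh /var/backups/www-2024-01-01.tar.gz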
If you plug the script above into a simple bash loop, you can quickly upload a tar archive of each of your directories to S3:
#!/bin/bash
CURRENT_DATE=$(date +%Y-%m-%d)
CURRENT_DIR=$(pwd)
BACKUPROOT="<FULL_PATH_TO_BACKUP_ROOT_FOLDER>"
cd "$BACKUPROOT"
# Loop over the sub-directories with a glob instead of parsing ls,
# which breaks on names containing spaces
for dir in */
do
    i=${dir%/}
    FILE_NAME="$i-$CURRENT_DATE.tar.gz"
    tar czf "$FILE_NAME" "$i"
    # Call the uploader from the directory we started in, since after the
    # cd above "./s3uploader.sh" would be looked up inside the backup root
    "$CURRENT_DIR/s3uploader.sh" "$BACKUPROOT/$FILE_NAME"
done
cd "$CURRENT_DIR"
echo -e "Backup $CURRENT_DATE Done\n"
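To run the backup unattended, a cron entry is the usual approach. This is just a sketch, assuming the wrapper above is saved as /home/user/scripts/s3backup.sh (a made-up path); add it with crontab -e:

# Run the backup every night at 02:30 and append the output to a log
30 2 * * * /home/user/scripts/s3backup.sh >> /var/log/s3backup.log 2>&1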