If you run a self-hosted Atlassian suite (Bitbucket, Confluence, or Jira), you might want to back up its data to an external location such as AWS S3 in case your server fails.
Overview
This script creates a tarball of your Atlassian data and uploads it to AWS S3 using the AWS CLI.
Requirements
- An AWS S3 bucket
- An AWS programmatic (IAM) account with write access to the S3 bucket
- The AWS CLI installed on the Linux server and configured with that programmatic account
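As a sketch, the AWS CLI can be configured non-interactively with `aws configure set`; the access key, secret, and region below are placeholders you would replace with your own values.

```shell
# Configure the AWS CLI with the programmatic account's credentials.
# All three values are placeholders, not real credentials.
aws configure set aws_access_key_id AKIAEXAMPLEKEYID
aws configure set aws_secret_access_key examplesecretaccesskey
aws configure set default.region eu-west-2
```

Running `aws configure` with no arguments achieves the same thing interactively.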
Script
#!/bin/bash
host=atlassian
bucket=yourbucketnamegoeshere
tmp=/tmp/backup.cache
date=$(date +%d-%m-%Y)
year=$(date +%Y)
month=$(date +%B)
day=$(date +%d)

# Back up Bitbucket repositories
echo "Backing up Bitbucket repositories"

# Ensure the temporary directory exists
mkdir -p "$tmp"

# Tar up the data (-P preserves absolute paths)
tar czfP "$tmp/atlassian-repos-$date.tar.gz" /var/atlassian/application-data/bitbucket/shared/data/repositories

# Transfer the tarball to AWS S3
aws s3 cp "$tmp/atlassian-repos-$date.tar.gz" "s3://$bucket/$year/$month/$day/$host/atlassian-repos-$date.tar.gz"

# Remove the local tarball
rm -f "$tmp/atlassian-repos-$date.tar.gz"
Considerations
You will need to change the host and bucket variables at the top of the script to match your environment.
By default, Atlassian data is located in /var/atlassian/application-data; if your installation uses a different location, update the path in the tar command.
It creates the following folder structure in the S3 bucket: /year/month/day/host
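To make the layout concrete, the prefix is assembled from date components like this (the host value is the script's example):

```shell
#!/bin/sh
# Build the S3 key prefix used by the script: year/month/day/host
host=atlassian
year=$(date +%Y)    # four-digit year
month=$(date +%B)   # full month name, e.g. January
day=$(date +%d)     # zero-padded day of month
prefix="$year/$month/$day/$host"
echo "$prefix"
```

This date-based layout also makes it easy to apply S3 lifecycle rules to expire old backups.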
Add the script to a cron job to automate the backups.
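For example, a crontab entry along these lines would run the backup nightly; the script path and log file location are assumptions you would adjust.

```shell
# Run the backup script every night at 02:00, logging output.
# /opt/scripts/atlassian-backup.sh is a placeholder path.
0 2 * * * /opt/scripts/atlassian-backup.sh >> /var/log/atlassian-backup.log 2>&1
```

Install it with `crontab -e` as a user that can read the Atlassian data directory.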
Conclusion
By running this script from a cron job, you will have your data in at least two places and, depending on the schedule, a number of points in time to restore from.
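Restoring is a sketch of the upload in reverse; the bucket name and dates below are placeholders for a real backup object.

```shell
# Download a given day's backup (bucket and date values are placeholders)
aws s3 cp s3://yourbucketnamegoeshere/2024/January/15/atlassian/atlassian-repos-15-01-2024.tar.gz /tmp/

# The script tars with -P (absolute paths), so extracting with -P
# restores the repositories to their original location
tar xzfP /tmp/atlassian-repos-15-01-2024.tar.gz
```

Stop the Bitbucket service before restoring over a live data directory.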