Storing Mail-in-a-Box Backups in S3

Current Mood: “So Easy” by Röyksopp

One of the things that I left unfinished when I set up my Mail-in-a-Box server was backups. Mail-in-a-Box will automatically create backups of your data, but as far as I know it doesn’t have any easy way to transfer these backups to another location for safekeeping. Because Mail-in-a-Box encrypts its backups, it should be OK to store these backups in S3. In this post, I will describe how I set up my server to automatically transfer backups to S3.

Step 1: Copy the Secret Key

The default path for backups is /home/user-data/backup/encrypted. If you look in /home/user-data/backup, there should be a file named secret_key.txt. scp this file off of the server, or even better, copy the contents and store it in an encrypted password vault on your local machine (which should also be backed up remotely somewhere).

Step 2: Create an S3 Bucket

Create an S3 bucket to store backups. Open S3 and click “Create Bucket”. Enter a name for the bucket - for example, “mail-server-backups”. For the Region, select a region different from the one where your mail server resides. For example, my mail server is in us-east-2 (Ohio), but I used us-east-1 (Northern Virginia) for my backups. Keeping the backups in a separate region buffers against a catastrophic event in AWS’ data centers in either region (and if a catastrophic event affects both regions, you’ll probably have bigger concerns than your mail server backups). Continue to the “Set permissions” options. Verify that the S3 bucket will have the following settings (they should be selected by default):

  1. Owner account has full access
  2. Manage public permissions is set to “Do not grant public read access to this bucket (Recommended)”
  3. Manage system permissions is set to “Do not grant Amazon S3 Log Delivery group write access to this bucket”

Click “Next” to continue, then click “Create Bucket” to create the bucket.
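If you prefer the command line, the same bucket can be created with awscli from your local machine. This is just a sketch - it assumes awscli is installed and configured with credentials, and the guard makes it a no-op otherwise:

```shell
# Create the backup bucket in us-east-1. For any region OTHER than
# us-east-1 you must also pass:
#   --create-bucket-configuration LocationConstraint=<region>
if command -v aws >/dev/null 2>&1; then
    aws s3api create-bucket --bucket mail-server-backups --region us-east-1
fi
```

New buckets block public access by default, which matches the permission settings above.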

Step 3: Create an IAM Role

Open Identity & Access Management. Click on “Roles” on the side menu, then click “Create role”. Select “AWS service” and then click on “EC2” for the role type. For the use case, select “EC2”, then click “Next: Permissions”. The next page is “Attach permissions policy”. We are not going to use any of the AWS managed policies. Instead, click “Create policy”, then click the “Select” button next to “Create Your Own Policy”. The page will open in a new window - keep the other tab open. Set the “Policy Name” to “mail-server-backup”. Copy the following policy document, replacing both instances of your-bucket-name with the name of your S3 bucket:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket"
                ],
                "Resource": [
                    "arn:aws:s3:::your-bucket-name"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject"
                ],
                "Resource": [
                    "arn:aws:s3:::your-bucket-name/*"
                ]
            }
        ]
    }

Save the policy. You will be redirected back to the IAM dashboard. Switch back to the other tab where you were creating the role. Click the “Refresh” button to refresh the list of policies, then filter for “mail-server-backup”. Enable the checkbox for the policy, then click “Next: Review”. Name the role “mail-server-backup”, then click “Create Role”.
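For reference, selecting “EC2” as the trusted service gives the role a trust policy like the following - the console generates it for you, so you don’t need to enter it by hand:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "ec2.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}
```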

Step 4: Add Role to EC2 Instance

Open the EC2 Instances page. Select your instance, then click the Actions button and select “Attach/Replace IAM Role” under “Instance Settings”. Select “mail-server-backup” from the “IAM role” dropdown, then click “Apply”.
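The console action above can also be done from the CLI. A sketch - the instance ID below is a made-up placeholder, and note that the console’s “Attach/Replace IAM Role” action creates an instance profile with the role’s name for you automatically:

```shell
# Attach the role's instance profile to the mail server instance.
# i-0123456789abcdef0 is a hypothetical instance ID; substitute your own.
if command -v aws >/dev/null 2>&1; then
    aws ec2 associate-iam-instance-profile \
        --instance-id i-0123456789abcdef0 \
        --iam-instance-profile Name=mail-server-backup
fi
```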

Step 5: Install awscli

SSH into your mail server. Execute the following commands to install awscli:

sudo apt-get update && sudo apt-get install -y awscli

Step 6: Verify that Role is Configured Correctly

Let’s verify that your IAM role and bucket are configured correctly:

echo "This is a test file" > ~/test-file && aws s3api put-object --bucket your-bucket-name --key test-file --body ~/test-file

Execute the command above, then open your S3 bucket and verify that the key test-file exists in the bucket. Click on the object, download it, and verify that it has the contents “This is a test file”.
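Optionally, clean up the test artifacts once you’ve verified the upload. This sketch uses the same your-bucket-name placeholder as above and skips the S3 call if awscli isn’t available:

```shell
# Remove the test object from S3, then the local test file.
# "your-bucket-name" is the placeholder for your backup bucket.
if command -v aws >/dev/null 2>&1; then
    aws s3api delete-object --bucket your-bucket-name --key test-file
fi
rm -f ~/test-file
```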

Step 7: Upload Backups on Cron

We’ll use awscli’s s3api put-object operation to copy an archive of the files in /home/user-data/backup/encrypted to S3. This command will need to run as root because your normal account won’t have sufficient permissions to view the files in /home/user-data/backup/encrypted. Execute sudo crontab -e. If you haven’t set up a crontab for root and also haven’t configured a default file editor, you’ll see the following screen:

ubuntu@box:~$ sudo crontab -e
no crontab for root - using an empty one

Select an editor.  To change later, run 'select-editor'.
  1. /bin/ed
  2. /bin/nano        <---- easiest
  3. /usr/bin/vim.basic
  4. /usr/bin/vim.tiny

Choose 1-4 [2]:

Select whichever editor you’re most comfortable with (insert proselytizing for vim here). After you’ve selected an editor, you will see an empty crontab file (if it didn’t exist already):

# Edit this file to introduce tasks to be run by cron.
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
# For more information see the manual pages of crontab(5) and cron(8)
# m h  dom mon dow   command

Add a single command on a new line at the end of the crontab:

30 3 * * * tar -czf /tmp/backup-$(date '+\%m-\%d-\%y').tar.gz /home/user-data/backup/encrypted; aws s3api put-object --bucket your-bucket-name --key backup-$(date '+\%m-\%d-\%y').tar.gz --body /tmp/backup-$(date '+\%m-\%d-\%y').tar.gz; rm /tmp/backup-$(date '+\%m-\%d-\%y').tar.gz

This line means “every day at 3:30 AM, create an archive of /home/user-data/backup/encrypted named backup-mm-dd-yy.tar.gz, upload it to the S3 bucket, and delete the local archive”. Note that Mail-in-a-Box may create its backups at a different time on your server - check the timestamps on the files in /home/user-data/backup/encrypted (each file should have roughly the same creation time). If the time is different on your server, adjust the minute/hour fields (the first two numbers) in the line above so the upload runs after the backup completes.

Save the crontab and exit your editor to install the new crontab.
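If you’d rather keep the crontab short, the one-liner can be factored into a small shell function - a sketch, with the bucket name left as a placeholder. Note that the % signs only need backslash-escaping inside a crontab, not in a script:

```shell
# Sketch of the cron job as a shell function.
# "your-bucket-name" is a placeholder; replace it with your bucket.
backup_to_s3() {
    stamp=$(date '+%m-%d-%y')             # mm-dd-yy; no \% escaping outside cron
    archive="/tmp/backup-${stamp}.tar.gz"
    tar -czf "$archive" /home/user-data/backup/encrypted
    aws s3api put-object --bucket your-bucket-name \
        --key "backup-${stamp}.tar.gz" --body "$archive"
    rm -f "$archive"
}
```

You could save this as a script (e.g., /usr/local/bin/backup-to-s3) and call that from the crontab instead of the inline command.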

(Optional) Step 8: Automatically Transition/Delete Backups

Over time, your backups will probably grow much larger, which means you’ll be storing a lot of data in your S3 bucket. One way to minimize costs is to set up automatic Lifecycle rules for your S3 bucket. To do this, open your S3 bucket in the AWS console and click the “Management” tab. Click “Lifecycle” (it should already be selected), then click “Add lifecycle rule”. On the first page, enter a lifecycle rule name, e.g., “Delete Old Backups”, then click “Next”.

The next page allows you to configure automatic transitions. We didn’t configure versioning for the S3 bucket above, so you only need to select the checkbox for “Current version”. Next, click “Add transition”. From the “Select a transition” dropdown, select “Transition to Amazon Glacier after”. In the “Days after object creation” field, enter 15. Then click “Next”.

The next page allows you to configure automatic deletion of objects. Again select “Current version”, then select the checkbox for “Expire current version of object” and enter “180” for “After ___ days from object creation”. This will give you about six months’ worth of backups should something go wrong. Then click “Next”. On the next page, review your settings, then click “Save” to create the lifecycle rule.
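For reference, roughly the same rule can be applied from the CLI with `aws s3api put-bucket-lifecycle-configuration --bucket your-bucket-name --lifecycle-configuration file://lifecycle.json`, where lifecycle.json (an example file name) contains something like:

```json
{
    "Rules": [
        {
            "ID": "Delete Old Backups",
            "Status": "Enabled",
            "Filter": {},
            "Transitions": [
                { "Days": 15, "StorageClass": "GLACIER" }
            ],
            "Expiration": { "Days": 180 }
        }
    ]
}
```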