Aurora stores critical incident data across multiple systems. This guide covers backing up and restoring PostgreSQL, Weaviate, Vault, and object storage.
What to Back Up
Aurora’s data is distributed across:
PostgreSQL - Incidents, alerts, suggestions, thoughts, users, credentials
Weaviate - Semantic search vectors, knowledge base embeddings
Vault - User credentials, API tokens, secrets
Object Storage (SeaweedFS/S3) - File uploads, Terraform state, exported data
Redis - Celery tasks, cache (ephemeral - can be rebuilt)
Backup Strategy
Recommended: Run automated backups daily to S3-compatible storage with 30-day retention.
Backup Schedule
PostgreSQL: Daily full backup + continuous WAL archiving
Weaviate: Weekly snapshot
Vault: Daily backup
Object Storage: Continuous replication to a secondary region
Redis: No backup needed (cache only)
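Wired into cron, the schedule above might look like the following. The script paths are illustrative; the PostgreSQL and Vault scripts appear later in this guide, and a Weaviate script following the same pattern is shown in the Weaviate section.

```cron
# Daily PostgreSQL backup at 02:00
0 2 * * * /path/to/scripts/backup-postgres.sh
# Daily Vault backup at 03:00
0 3 * * * /path/to/scripts/backup-vault.sh
# Weekly Weaviate snapshot, Sundays at 04:00
0 4 * * 0 /path/to/scripts/backup-weaviate.sh
```

Stagger the start times so the backups do not compete for I/O on the same host.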
Backing Up PostgreSQL
Manual Backup
Create a backup directory
mkdir -p ~/aurora-backups/postgres
Run pg_dump
docker exec aurora-postgres pg_dump \
  -U "$POSTGRES_USER" \
  -d "$POSTGRES_DB" \
  --format=custom \
  --compress=9 \
  --file=/tmp/aurora_backup.dump
docker cp aurora-postgres:/tmp/aurora_backup.dump \
  ~/aurora-backups/postgres/aurora_$(date +%Y%m%d_%H%M%S).dump
The --format=custom option enables parallel restore and selective table restore.
Verify the backup
docker exec aurora-postgres pg_restore \
--list /tmp/aurora_backup.dump | head -20
Automated Backup Script
Create scripts/backup-postgres.sh:
#!/bin/bash
set -e

# Configuration
BACKUP_DIR="/var/backups/aurora/postgres"
RETENTION_DAYS=30
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/aurora_$DATE.dump"

# Ensure backup directory exists
mkdir -p "$BACKUP_DIR"

# Run backup
docker exec aurora-postgres pg_dump \
  -U "$POSTGRES_USER" \
  -d "$POSTGRES_DB" \
  --format=custom \
  --compress=9 \
  --file=/tmp/backup.dump
docker cp aurora-postgres:/tmp/backup.dump "$BACKUP_FILE"

# Upload to S3 (optional)
if [ -n "$AWS_S3_BACKUP_BUCKET" ]; then
  aws s3 cp "$BACKUP_FILE" "s3://$AWS_S3_BACKUP_BUCKET/postgres/"
fi

# Clean up old backups
find "$BACKUP_DIR" -name "aurora_*.dump" -mtime +"$RETENTION_DAYS" -delete

echo "Backup completed: $BACKUP_FILE"
Run via cron:
0 2 * * * /path/to/scripts/backup-postgres.sh
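The find-based retention step is worth sanity-checking with stub files before trusting it with real backups. A minimal sketch (filenames are illustrative):

```shell
# Create a scratch directory with one "old" and one "fresh" stub backup
tmp=$(mktemp -d)
touch -d '40 days ago' "$tmp/aurora_old.dump"
touch "$tmp/aurora_new.dump"

# Same retention expression as the backup script (30-day cutoff)
find "$tmp" -name "aurora_*.dump" -mtime +30 -delete

ls "$tmp"   # only aurora_new.dump should remain
```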
Continuous WAL Archiving
For point-in-time recovery, enable WAL archiving in PostgreSQL. Note that archive_command runs inside the database container, so the aws CLI (or another WAL-shipping tool) must be available there.
Add to config/postgres/postgresql.conf:
wal_level = replica
archive_mode = on
archive_command = 'aws s3 cp %p s3://your-backup-bucket/wal/%f'
Update docker-compose.yaml to mount the config:
postgres:
  volumes:
    - ./config/postgres/postgresql.conf:/etc/postgresql/postgresql.conf:ro
  command: postgres -c config_file=/etc/postgresql/postgresql.conf
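To actually recover to a point in time from the archived WAL, restore the most recent base backup, point restore_command at the WAL bucket, and set a recovery target. A sketch for PostgreSQL 12+; the bucket name and timestamp are placeholders:

```ini
# Recovery settings (postgresql.conf, PostgreSQL 12+)
restore_command = 'aws s3 cp s3://your-backup-bucket/wal/%f %p'
recovery_target_time = '2026-03-03 02:00:00'
```

Create an empty recovery.signal file in the data directory before starting PostgreSQL; the server then replays WAL up to the target time.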
Backing Up Weaviate
Create a Weaviate backup
Weaviate supports backups via its REST API:
curl -X POST "http://localhost:8080/v1/backups/filesystem" \
  -H "Content-Type: application/json" \
  -d '{
    "id": "aurora-backup-'"$(date +%Y%m%d)"'",
    "include": ["AuroraKnowledge", "AuroraIncidents"]
  }'
Check backup status
curl "http://localhost:8080/v1/backups/filesystem/aurora-backup-20260303"
Export backup files
Weaviate stores backups in /var/lib/weaviate/backups inside the container:
docker cp weaviate:/var/lib/weaviate/backups/aurora-backup-20260303 \
  ~/aurora-backups/weaviate/
Automated Weaviate Backup
#!/bin/bash
set -e

BACKUP_ID="aurora-backup-$(date +%Y%m%d)"
BACKUP_DIR="/var/backups/aurora/weaviate"

# Create backup
curl -X POST "http://localhost:8080/v1/backups/filesystem" \
  -H "Content-Type: application/json" \
  -d '{"id": "'"$BACKUP_ID"'"}'

# Wait for completion (fail fast if the backup errors out)
while true; do
  STATUS=$(curl -s "http://localhost:8080/v1/backups/filesystem/$BACKUP_ID" | jq -r '.status')
  if [ "$STATUS" = "SUCCESS" ]; then
    break
  elif [ "$STATUS" = "FAILED" ]; then
    echo "Weaviate backup failed" >&2
    exit 1
  fi
  sleep 5
done

# Export backup
mkdir -p "$BACKUP_DIR"
docker cp "weaviate:/var/lib/weaviate/backups/$BACKUP_ID" "$BACKUP_DIR/"

echo "Weaviate backup completed: $BACKUP_ID"
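An unbounded polling loop hangs forever if the backup never reaches a terminal state, so a bounded helper is safer. A minimal sketch; the stub probe below is a stand-in for the real curl/jq status check:

```shell
# Poll a status command until it prints SUCCESS, up to max_tries attempts.
# Usage: wait_for_success <max_tries> <delay_seconds> <command...>
wait_for_success() {
  local max_tries=$1 delay=$2; shift 2
  local i status
  for ((i = 1; i <= max_tries; i++)); do
    status=$("$@")
    [ "$status" = "SUCCESS" ] && return 0
    sleep "$delay"
  done
  echo "timed out after $max_tries attempts" >&2
  return 1
}

# Stub probe standing in for the Weaviate status call: reports
# TRANSFERRING twice, then SUCCESS (call count kept in a temp file).
count_file=$(mktemp)
echo 0 > "$count_file"
fake_status() {
  local n=$(( $(cat "$count_file") + 1 ))
  echo "$n" > "$count_file"
  if [ "$n" -ge 3 ]; then echo SUCCESS; else echo TRANSFERRING; fi
}

wait_for_success 5 0 fake_status && echo "backup finished"
```

In the real script, replace the stub with the curl/jq pipeline and a non-zero delay.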
Backing Up Vault
Vault backups contain sensitive credentials. Encrypt and restrict access.
Take a Vault snapshot
docker exec aurora-vault vault operator raft snapshot save /tmp/vault-snapshot.snap
docker cp aurora-vault:/tmp/vault-snapshot.snap \
  ~/aurora-backups/vault/vault_$(date +%Y%m%d_%H%M%S).snap
Encrypt the snapshot
# Encrypt with GPG
gpg --symmetric --cipher-algo AES256 \
~/aurora-backups/vault/vault_20260303_020000.snap
# Or use age
age -p ~/aurora-backups/vault/vault_20260303_020000.snap > \
~/aurora-backups/vault/vault_20260303_020000.snap.age
Upload to secure storage
aws s3 cp ~/aurora-backups/vault/vault_20260303_020000.snap.gpg \
s3://your-backup-bucket/vault/ \
--sse AES256
Vault Backup Script
#!/bin/bash
set -e

BACKUP_DIR="/var/backups/aurora/vault"
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/vault_$DATE.snap"
ENCRYPTION_KEY_FILE="/etc/aurora/vault-backup-key.txt"

mkdir -p "$BACKUP_DIR"

# Take snapshot
docker exec aurora-vault vault operator raft snapshot save /tmp/snapshot.snap
docker cp aurora-vault:/tmp/snapshot.snap "$BACKUP_FILE"

# Encrypt
if [ -f "$ENCRYPTION_KEY_FILE" ]; then
  gpg --batch --yes --pinentry-mode loopback \
    --passphrase-file "$ENCRYPTION_KEY_FILE" \
    --symmetric --cipher-algo AES256 "$BACKUP_FILE"
  rm "$BACKUP_FILE"  # Remove unencrypted file
  BACKUP_FILE="$BACKUP_FILE.gpg"
fi

# Upload to S3
if [ -n "$AWS_S3_BACKUP_BUCKET" ]; then
  aws s3 cp "$BACKUP_FILE" "s3://$AWS_S3_BACKUP_BUCKET/vault/" --sse AES256
fi

echo "Vault backup completed: $BACKUP_FILE"
Backing Up Object Storage (SeaweedFS)
SeaweedFS backups depend on your deployment:
Option 1: S3 Sync
Sync to another S3-compatible bucket. Note that a single --endpoint-url applies to both sides of the sync, so this copies between two buckets on the same endpoint; to replicate to a different provider, use a tool such as rclone or stage through a local directory.
#!/bin/bash
aws s3 sync \
  --endpoint-url http://localhost:8333 \
  s3://aurora-storage \
  s3://aurora-storage-backup \
  --region us-east-1
Option 2: Volume Backup
Back up Docker volumes:
docker run --rm \
  -v aurora_seaweedfs_filer_data:/data \
  -v ~/aurora-backups/seaweedfs:/backup \
  alpine tar czf /backup/seaweedfs_$(date +%Y%m%d).tar.gz /data
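The volume-backup approach is just a tar of the mounted data directory, so the round trip can be checked locally without Docker. A sketch with a stub data directory (the filename needle.idx is purely illustrative):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/data"
echo "filer metadata" > "$tmp/data/needle.idx"

# Archive the directory (what the alpine container does for the volume)
tar czf "$tmp/seaweedfs_test.tar.gz" -C "$tmp" data

# Restore into a separate location and compare
mkdir "$tmp/restore"
tar xzf "$tmp/seaweedfs_test.tar.gz" -C "$tmp/restore"
cmp "$tmp/data/needle.idx" "$tmp/restore/data/needle.idx" && echo "volume round trip OK"
```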
Option 3: SeaweedFS Replication
Configure cross-datacenter replication in docker-compose.yaml:
seaweedfs-master:
  # 010 = no copies in other data centers, 1 copy on a different rack
  # in the same data center, no extra copies on the same rack
  command: >
    master
    -defaultReplication=010
Restoring from Backup
Restore PostgreSQL
Drop and recreate database
docker-compose up -d postgres
docker exec -it aurora-postgres psql -U "$POSTGRES_USER" -d postgres -c "DROP DATABASE IF EXISTS $POSTGRES_DB;"
docker exec -it aurora-postgres psql -U "$POSTGRES_USER" -d postgres -c "CREATE DATABASE $POSTGRES_DB;"
Restore from dump
docker cp ~/aurora-backups/postgres/aurora_20260303_020000.dump aurora-postgres:/tmp/restore.dump
docker exec aurora-postgres pg_restore \
  -U "$POSTGRES_USER" \
  -d "$POSTGRES_DB" \
  --verbose \
  /tmp/restore.dump
Restore Weaviate
Stop Weaviate
docker-compose stop weaviate
Copy backup into container
docker cp ~/aurora-backups/weaviate/aurora-backup-20260303 \
weaviate:/var/lib/weaviate/backups/
Restore via API
docker-compose up -d weaviate
curl -X POST "http://localhost:8080/v1/backups/filesystem/aurora-backup-20260303/restore" \
-H "Content-Type: application/json"
Verify restore
curl "http://localhost:8080/v1/schema"
Restore Vault
Stop Vault
docker-compose stop vault vault-init
Decrypt backup
gpg --decrypt ~/aurora-backups/vault/vault_20260303_020000.snap.gpg > \
~/aurora-backups/vault/vault_20260303_020000.snap
Restore snapshot
docker cp ~/aurora-backups/vault/vault_20260303_020000.snap aurora-vault:/tmp/restore.snap
docker-compose up -d vault
docker exec aurora-vault vault operator raft snapshot restore /tmp/restore.snap
Unseal Vault
docker exec -it aurora-vault vault operator unseal
Repeat with additional unseal key shares until the unseal threshold is reached.
Restore Object Storage
Restore SeaweedFS volumes:
docker run --rm \
-v aurora_seaweedfs_filer_data:/data \
-v ~/aurora-backups/seaweedfs:/backup \
alpine tar xzf /backup/seaweedfs_20260303.tar.gz -C /
Or sync from backup S3:
aws s3 sync s3://aurora-storage-backup s3://aurora-storage \
--endpoint-url http://localhost:8333
Disaster Recovery Testing
Test your restore process monthly. Backups are useless if you can’t restore them.
DR Test Checklist
Create a test environment
cp .env .env.dr-test
# Update ports to avoid conflicts
Restore all components (PostgreSQL, Weaviate, Vault, object storage)
Verify data integrity
Check incident count: SELECT COUNT(*) FROM incidents;
Test authentication
Verify file uploads work
Run a test investigation to ensure all systems function
Document any issues and update procedures
Backup Monitoring
Monitor backup health:
#!/bin/bash
# scripts/check-backup-freshness.sh
MAX_AGE_HOURS=48
BACKUP_DIR="/var/backups/aurora/postgres"

LATEST_BACKUP=$(find "$BACKUP_DIR" -name "aurora_*.dump" -type f -printf '%T@ %p\n' | sort -n | tail -1 | cut -d' ' -f2-)

if [ -z "$LATEST_BACKUP" ]; then
  echo "ERROR: No backups found in $BACKUP_DIR"
  exit 1
fi

BACKUP_AGE=$(( ($(date +%s) - $(stat -c %Y "$LATEST_BACKUP")) / 3600 ))

if [ "$BACKUP_AGE" -gt "$MAX_AGE_HOURS" ]; then
  echo "WARNING: Latest backup is $BACKUP_AGE hours old (max: $MAX_AGE_HOURS)"
  exit 1
fi

echo "OK: Latest backup is $BACKUP_AGE hours old"
Integrate with monitoring:
# Add to crontab: ping only on success, so a missed ping raises an alert
0 * * * * /path/to/scripts/check-backup-freshness.sh && curl -fsS https://healthchecks.io/your-check-id
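The age arithmetic in the freshness script can be exercised against a stub file (GNU touch and stat assumed):

```shell
tmp=$(mktemp -d)

# Stub backup last modified five hours ago
touch -d '5 hours ago' "$tmp/aurora_stub.dump"

# Same age computation as check-backup-freshness.sh
BACKUP_AGE=$(( ($(date +%s) - $(stat -c %Y "$tmp/aurora_stub.dump")) / 3600 ))
echo "age: $BACKUP_AGE hours"
```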
Backup Retention Policy
Minimum:
Daily backups: 7 days retention
Weekly backups: 4 weeks retention
No long-term archival

Recommended:
Daily backups: 30 days retention
Weekly backups: 12 weeks retention
Monthly backups: 12 months retention
Yearly backups: 7 years (compliance)
Implement with S3 lifecycle policies:
{
  "Rules": [
    {
      "Id": "aurora-backup-lifecycle",
      "Status": "Enabled",
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD_IA"
        },
        {
          "Days": 90,
          "StorageClass": "GLACIER"
        }
      ],
      "Expiration": {
        "Days": 2555
      }
    }
  ]
}
Security Considerations
Backups contain sensitive data. Always encrypt and restrict access.
Encrypt backups at rest - Use GPG, age, or S3 SSE
Encrypt backups in transit - Use HTTPS/TLS for uploads
Restrict access - Use IAM policies, bucket policies, and MFA
Audit access logs - Enable CloudTrail or equivalent
Test encryption - Verify you can decrypt backups
Rotate encryption keys - Update keys annually
Separate backup credentials - Use dedicated service accounts
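The "test encryption" item is easy to automate: encrypt a stub file with a throwaway passphrase and verify that it decrypts byte-for-byte. A sketch assuming GnuPG 2.1+; the passphrase and paths are illustrative:

```shell
tmp=$(mktemp -d)
echo "stub snapshot contents" > "$tmp/vault.snap"

# Encrypt with a symmetric passphrase (as the Vault backup script does)
gpg --batch --yes --pinentry-mode loopback --passphrase test-passphrase \
  --symmetric --cipher-algo AES256 -o "$tmp/vault.snap.gpg" "$tmp/vault.snap"

# Decrypt and confirm the round trip is lossless
gpg --batch --yes --pinentry-mode loopback --passphrase test-passphrase \
  -o "$tmp/vault.snap.out" --decrypt "$tmp/vault.snap.gpg"

cmp "$tmp/vault.snap" "$tmp/vault.snap.out" && echo "round trip OK"
```

For real backups, run the same check against a restored copy of the latest encrypted snapshot, using the production passphrase file.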
Troubleshooting
Backup script fails with permission denied
Ensure the script can access Docker:
sudo usermod -aG docker $USER
# Log out and back in
Restore fails with constraint violations
Restore with --no-owner --no-privileges:
pg_restore -U "$POSTGRES_USER" -d "$POSTGRES_DB" \
  --no-owner --no-privileges /tmp/restore.dump
Weaviate backup times out
Increase the curl timeout for large datasets:
curl -X POST -m 3600 "http://localhost:8080/v1/backups/filesystem" ...
Lost Vault unseal keys
If you lose the unseal keys, you cannot recover Vault data. This is by design. Always store unseal keys in multiple secure locations:
Encrypted USB drives
Hardware security modules (HSMs)
Split across key personnel (Shamir’s Secret Sharing)
Next Steps
First Investigation - Run your first incident investigation
Custom Connectors - Build integrations for proprietary systems