Imagine waking up to find your VPS completely wiped - a ransomware attack, accidental deletion, or hardware failure. Your entire business is gone. Unless you have proper backups.
Most people know backups are important, but few implement them correctly. Having backups isn't enough - you need a strategy that accounts for different disaster scenarios, automated processes, and regular testing.
This guide covers everything you need to know about VPS backups: from fundamental principles to advanced automation scripts, disaster recovery planning, and common mistakes that leave your data vulnerable.
The 3-2-1 Backup Rule (Your Foundation)
This industry-standard rule should guide all your backup decisions:
- 3 copies of your data (1 primary + 2 backups)
- 2 different storage mediums (e.g., local disk + cloud storage)
- 1 copy offsite (geographic separation from your VPS)
Why This Matters
Consider these scenarios:
- Scenario 1: Your VPS provider has a datacenter fire. If your only backup is on the same provider, you lose everything.
- Scenario 2: You accidentally delete files and realize it only after your nightly backup overwrites the previous backup. Without multiple backup versions, recovery is impossible.
- Scenario 3: Ransomware encrypts your VPS and your attached backup drive. Without an offsite backup, you're forced to pay the ransom.
An increasingly common extension is the 3-2-1-1-0 rule:
- 3-2-1 (as above)
- +1 offline/immutable backup (cannot be modified or deleted, which protects against ransomware)
- 0 errors (backups verified through regular restoration tests)
Types of Backups Explained
1. Full Backup
What: Complete copy of all data
Pros: Fastest restoration, simple to understand
Cons: Largest storage requirement, slowest backup process
Best for: Weekly or monthly comprehensive snapshots
# Full backup example
tar -czf /backups/full-backup-$(date +%Y%m%d).tar.gz /var/www /etc /home
2. Incremental Backup
What: Only backs up files changed since last backup (any type)
Pros: Fastest backup, smallest storage
Cons: Slowest restoration (need full + all incrementals)
Best for: Daily backups between full backups
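GNU tar supports incremental backups natively via a snapshot file. A minimal, self-contained sketch (the /tmp paths are placeholders for demonstration):

```shell
# Level 0 (full) backup: tar records file metadata in the .snar snapshot file
mkdir -p /tmp/demo-data /tmp/demo-backups
echo "v1" > /tmp/demo-data/file.txt
tar --listed-incremental=/tmp/demo-backups/state.snar \
    -czf /tmp/demo-backups/full.tar.gz -C /tmp demo-data

# Level 1 (incremental): only files changed since the last run are stored
echo "v2" > /tmp/demo-data/new.txt
tar --listed-incremental=/tmp/demo-backups/state.snar \
    -czf /tmp/demo-backups/incr1.tar.gz -C /tmp demo-data
```

To restore, extract the full archive first, then each incremental in order, passing --listed-incremental=/dev/null during extraction.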
3. Differential Backup
What: Backs up changes since last full backup
Pros: Faster restoration than incremental
Cons: Grows larger as time passes since last full backup
Best for: Mid-sized datasets with moderate change rates
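GNU tar has no dedicated differential mode, but you can approximate one by diffing against a saved copy of the full backup's snapshot file, so each run captures everything changed since the last full backup (paths are illustrative):

```shell
# Full backup: creates the reference snapshot file
mkdir -p /tmp/diff-data /tmp/diff-backups
echo "a" > /tmp/diff-data/a.txt
tar --listed-incremental=/tmp/diff-backups/full.snar \
    -czf /tmp/diff-backups/full.tar.gz -C /tmp diff-data

# Differential: work on a COPY of the snapshot so tar always compares against
# the last full backup, never against a previous differential
echo "b" > /tmp/diff-data/b.txt
cp /tmp/diff-backups/full.snar /tmp/diff-backups/work.snar
tar --listed-incremental=/tmp/diff-backups/work.snar \
    -czf /tmp/diff-backups/diff-$(date +%Y%m%d).tar.gz -C /tmp diff-data
```

Restoration then needs only the full archive plus the latest differential.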
4. Snapshot Backup
What: Point-in-time copy of your entire VPS state
Pros: Instant recovery to exact system state
Cons: Vendor-specific, can be expensive
Best for: Before major system changes
Recommended Backup Strategy for Most VPS Servers
Here's a battle-tested strategy that balances protection with practical resource usage:
| Backup Type | Frequency | Retention | Location |
|---|---|---|---|
| Full system snapshot | Weekly | 4 weeks | VPS provider + offsite |
| Database backup | Daily | 30 days | Local + cloud storage |
| Application files | Daily | 14 days | Local + cloud storage |
| Configuration files | Daily | 90 days | Git repo + cloud |
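For the configuration-files row, plain git is often enough. A self-contained sketch using a scratch directory (in practice you would run this in /etc, or use the etckeeper tool, and push to a private remote):

```shell
# Initialize a repo over a config directory and commit a baseline
mkdir -p /tmp/etc-demo
echo "ServerName example.com" > /tmp/etc-demo/site.conf
cd /tmp/etc-demo
git init -q .
git config user.email "root@localhost"
git config user.name "root"
git add . && git commit -q -m "baseline config"
git log --oneline   # each future change becomes a diffable, revertable commit
```

Beyond backup, this gives you a change history: `git diff` shows exactly what changed in a config file and when.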
Implementing Automated Backups
Method 1: Simple Bash Script (Good for Small Sites)
sudo nano /usr/local/bin/vps-backup.sh
#!/bin/bash
set -o pipefail  # so a failed mysqldump is not masked by gzip's exit status below
# Configuration
BACKUP_DIR="/root/backups"
DATE=$(date +%Y-%m-%d-%H%M)
RETENTION_DAYS=14
LOG_FILE="/var/log/backup.log"
# MySQL credentials (better: use a /root/.my.cnf credentials file with chmod 600
# instead of hardcoding the password - see Database-Specific Backup Strategies below)
DB_USER="root"
DB_PASS="your_mysql_password"
DB_NAME="your_database"
# Directories to backup
WEB_DIR="/var/www"
CONFIG_DIR="/etc"
# Create backup directory
mkdir -p $BACKUP_DIR
# Function to log messages
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a $LOG_FILE
}
log "Starting backup process..."
# Backup database
log "Backing up database..."
mysqldump -u $DB_USER -p$DB_PASS $DB_NAME | gzip > $BACKUP_DIR/db-$DATE.sql.gz
if [ $? -eq 0 ]; then
log "Database backup successful"
else
log "ERROR: Database backup failed"
fi
# Backup web files
log "Backing up web files..."
tar -czf $BACKUP_DIR/web-$DATE.tar.gz $WEB_DIR 2>> $LOG_FILE
if [ $? -eq 0 ]; then
log "Web files backup successful"
else
log "ERROR: Web files backup failed"
fi
# Backup configuration files
log "Backing up configuration files..."
tar -czf $BACKUP_DIR/config-$DATE.tar.gz $CONFIG_DIR 2>> $LOG_FILE
if [ $? -eq 0 ]; then
log "Configuration backup successful"
else
log "ERROR: Configuration backup failed"
fi
# Delete old backups
log "Cleaning old backups..."
find $BACKUP_DIR -type f -mtime +$RETENTION_DAYS -delete
log "Deleted backups older than $RETENTION_DAYS days"
# Calculate backup sizes
TOTAL_SIZE=$(du -sh $BACKUP_DIR | cut -f1)
log "Total backup size: $TOTAL_SIZE"
log "Backup process completed"
sudo chmod +x /usr/local/bin/vps-backup.sh
# Schedule daily backup at 2 AM
sudo crontab -e
# Add:
0 2 * * * /usr/local/bin/vps-backup.sh
Method 2: Rsync to Remote Server (Geographic Redundancy)
# Set up SSH key authentication first
ssh-keygen -t rsa -b 4096
ssh-copy-id backup-user@remote-backup-server.com
# Backup script with rsync
nano /usr/local/bin/rsync-backup.sh
#!/bin/bash
SOURCE_DIR="/var/www"
REMOTE_USER="backup-user"
REMOTE_HOST="remote-backup-server.com"
REMOTE_DIR="/backups/vps-backups"
rsync -avz --delete \
--exclude='*.log' \
--exclude='cache/*' \
-e "ssh -i /root/.ssh/id_rsa" \
$SOURCE_DIR/ $REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR/
echo "Backup completed at $(date)" >> /var/log/rsync-backup.log
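One caution with the script above: --delete mirrors deletions, so a file removed on the VPS vanishes from the backup on the next run - exactly the Scenario 2 problem from earlier. A local sketch of the behavior (paths are placeholders):

```shell
# Set up a source and a mirror, then delete a source file and re-sync
mkdir -p /tmp/rs-src /tmp/rs-dst
echo "keep" > /tmp/rs-src/keep.txt
echo "gone" > /tmp/rs-src/gone.txt
rsync -a /tmp/rs-src/ /tmp/rs-dst/
rm /tmp/rs-src/gone.txt
rsync -a --delete /tmp/rs-src/ /tmp/rs-dst/
ls /tmp/rs-dst/   # gone.txt has been deleted from the mirror too
```

Pair --delete mirrors with dated snapshots (or rsync's --backup-dir option) so deleted files remain recoverable.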
Method 3: Cloud Storage with Rclone (AWS S3, Backblaze B2, etc.)
# Install rclone
curl https://rclone.org/install.sh | sudo bash
# Configure rclone (follow interactive setup)
rclone config
# Backup script
nano /usr/local/bin/cloud-backup.sh
#!/bin/bash
BACKUP_DIR="/root/backups"
CLOUD_REMOTE="mycloud:vps-backups" # Name from rclone config
DATE=$(date +%Y-%m-%d)
# Create local backup first
tar -czf $BACKUP_DIR/full-backup-$DATE.tar.gz /var/www /etc /home
# Upload to cloud (use copy, not sync: sync mirrors local deletions, so the
# cleanup step below would also wipe the older cloud copies)
rclone copy $BACKUP_DIR $CLOUD_REMOTE --progress
# Delete local backups older than 7 days (cloud copies are kept longer)
find $BACKUP_DIR -type f -mtime +7 -delete
echo "Cloud backup completed at $(date)" >> /var/log/cloud-backup.log
Approximate cloud storage pricing for comparison:
- Backblaze B2: $6/TB/month (cheapest)
- AWS S3 Glacier Deep Archive: $1/TB/month (retrieval fees apply)
- Wasabi: $6.99/TB/month (no egress fees)
- Google Cloud Storage Nearline: $10/TB/month
Advanced: Using Borg Backup (Professional Solution)
Borg is a deduplicating backup program that's incredibly efficient for VPS backups:
Why Borg?
- Deduplication: Only stores unique data chunks, saving 80-95% storage
- Compression: Built-in compression further reduces size
- Encryption: All data encrypted before leaving your server
- Incremental forever: Every backup is effectively incremental
Setup Borg Backup
# Install Borg
sudo apt install borgbackup -y
# Initialize repository
borg init --encryption=repokey /mnt/backup-drive/borg-repo
# Create first backup
borg create /mnt/backup-drive/borg-repo::backup-$(date +%Y%m%d) \
/var/www \
/etc \
/home
# Automated script
nano /usr/local/bin/borg-backup.sh
#!/bin/bash
export BORG_REPO='/mnt/backup-drive/borg-repo'
# Better: keep the passphrase in a root-only (chmod 600) file and read it with
# export BORG_PASSPHRASE=$(cat /root/.borg-passphrase) instead of hardcoding it
export BORG_PASSPHRASE='your-strong-passphrase'
# Create backup
borg create --stats --compression lz4 \
::'{hostname}-{now:%Y-%m-%d-%H%M}' \
/var/www \
/etc \
/home \
--exclude '/home/*/.cache'
# Prune old backups
borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6
echo "Borg backup completed at $(date)" >> /var/log/borg-backup.log
Database-Specific Backup Strategies
MySQL/MariaDB
# Single database
mysqldump -u root -p database_name > backup.sql
# All databases
mysqldump -u root -p --all-databases > all-databases.sql
# With compression
mysqldump -u root -p database_name | gzip > backup.sql.gz
# Automated with credentials file (more secure)
nano /root/.my.cnf
# Add:
[mysqldump]
user=root
password=your_password
chmod 600 /root/.my.cnf
# Now you can run without password in script
mysqldump database_name | gzip > backup.sql.gz
PostgreSQL
# Single database
pg_dump database_name > backup.sql
# All databases
pg_dumpall > all-databases.sql
# Custom format (recommended, allows selective restoration)
pg_dump -Fc database_name > backup.dump
# Automated script
nano /usr/local/bin/postgres-backup.sh
#!/bin/bash
DATE=$(date +%Y%m%d)
BACKUP_DIR="/root/backups/postgres"
mkdir -p $BACKUP_DIR
# Backup all databases
sudo -u postgres pg_dumpall | gzip > $BACKUP_DIR/all-db-$DATE.sql.gz
# Delete backups older than 30 days
find $BACKUP_DIR -type f -mtime +30 -delete
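If your backup script connects over TCP as a regular user instead of via sudo -u postgres, libpq's ~/.pgpass file (which must be chmod 600) keeps the password out of the script. The values below are placeholders:

```
# ~/.pgpass - one connection per line:
# hostname:port:database:username:password
localhost:5432:your_database:postgres:your_password
```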
MongoDB
# Backup MongoDB
mongodump --out=/root/backups/mongo-$(date +%Y%m%d)
# With authentication (omit --password to be prompted interactively instead of
# leaving the password in your shell history)
mongodump --username=admin --password=password --authenticationDatabase=admin \
--out=/root/backups/mongo-$(date +%Y%m%d)
# Compress
tar -czf mongo-backup.tar.gz /root/backups/mongo-$(date +%Y%m%d)
Backup Monitoring and Alerts
Backups only work if they run successfully. Set up monitoring:
Method 1: Healthchecks.io (Free)
# Add to the end of your backup script: ping on success, or hit the /fail
# endpoint on failure so the alert fires immediately
if [ $? -eq 0 ]; then
curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/YOUR-UUID-HERE
else
curl -fsS -m 10 --retry 5 -o /dev/null https://hc-ping.com/YOUR-UUID-HERE/fail
fi
Healthchecks.io will alert you if the ping doesn't arrive on schedule.
Method 2: Email Notifications
# Install mail utility
sudo apt install mailutils -y
# Add to backup script
if [ $? -eq 0 ]; then
echo "Backup completed successfully at $(date)" | mail -s "Backup Success" your@email.com
else
echo "Backup FAILED at $(date)" | mail -s "BACKUP FAILURE - URGENT" your@email.com
fi
Testing Your Backups (The Most Important Step)
A backup you haven't tested is not a backup - it's Schrödinger's backup: simultaneously working and not working until you try to restore it.
Backup Testing Checklist
- Monthly restoration test: Restore a random file from backup and verify integrity
- Quarterly full restoration: Restore entire backup to a test environment
- Database restoration: Import database backup and verify data consistency
- Disaster recovery drill: Annually, simulate complete server loss and full recovery
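Between full restoration drills, a cheap integrity check is tar's list mode (-t), which reads through the entire archive and exits non-zero on corruption. A self-contained sketch with a sample archive in /tmp:

```shell
# Create a small sample archive, then verify it can be read end-to-end
mkdir -p /tmp/verify-demo
echo "data" > /tmp/verify-demo/f.txt
tar -czf /tmp/verify-demo.tar.gz -C /tmp verify-demo

if tar -tzf /tmp/verify-demo.tar.gz > /dev/null; then
    echo "archive OK"
else
    echo "archive CORRUPT - investigate immediately"
fi
```

This catches truncated or corrupted archives but not missing files, so it complements rather than replaces real restoration tests.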
Testing Script Example
#!/bin/bash
# Backup test script
TEST_DIR="/tmp/backup-test-$(date +%s)"
mkdir -p $TEST_DIR
echo "Testing latest backup..."
# Extract latest backup
LATEST_BACKUP=$(ls -t /root/backups/web-*.tar.gz 2>/dev/null | head -1)
if [ -z "$LATEST_BACKUP" ]; then
echo "✗ No backups found"
exit 1
fi
tar -xzf "$LATEST_BACKUP" -C "$TEST_DIR"
# Verify files exist
if [ -d "$TEST_DIR/var/www" ]; then
echo "✓ Backup extraction successful"
echo "✓ Files verified"
rm -rf "$TEST_DIR"
exit 0
else
echo "✗ Backup test FAILED"
echo "✗ Contact administrator immediately"
exit 1
fi
Backup Management Made Easy with VPS Commander
Setting up and monitoring backups manually is complex and error-prone. VPS Commander provides an intuitive interface to schedule, monitor, and test your VPS backups - all without touching the terminal. Get peace of mind with automated backup monitoring and one-click restoration.
Try VPS Commander - Starting at $2.99/month
Disaster Recovery Planning
Having backups is step one. Knowing how to use them during a disaster is step two.
Create a Disaster Recovery Document
Document these procedures and keep them accessible (not just on your VPS!):
- Emergency contacts: VPS provider support, domain registrar, etc.
- Access credentials: Stored securely (password manager)
- Backup locations: Where each backup type is stored
- Restoration procedures: Step-by-step recovery instructions
- RTO/RPO targets: Recovery Time Objective (how long a restoration may take) and Recovery Point Objective (how much recent data you can afford to lose)
Common Disaster Scenarios and Responses
| Scenario | Response |
|---|---|
| Accidental file deletion | Restore specific files from daily backup |
| Database corruption | Restore from latest daily database backup |
| Ransomware attack | Destroy VPS, provision new one, restore from offsite backup |
| VPS provider outage | Provision VPS at different provider, restore from cloud backup |
| Failed software update | Restore from VPS snapshot taken before update |
Common Backup Mistakes to Avoid
1. Storing Backups on the Same Server
Problem: If your VPS is compromised or destroyed, backups are lost too
Solution: Always maintain offsite backups (different provider/cloud storage)
2. Never Testing Restores
Problem: Discovering backups are corrupted when you need them
Solution: Monthly restoration tests, quarterly full recovery drills
3. No Monitoring/Alerting
Problem: Backup scripts fail silently for weeks/months
Solution: Use monitoring services like Healthchecks.io or email alerts
4. Backing Up Backups
Problem: Wasting storage by backing up your backup directory
Solution: Exclude backup directories: --exclude='/root/backups'
5. Hardcoded Passwords in Scripts
Problem: Security vulnerability
Solution: Use credential files with restricted permissions (chmod 600)
6. No Retention Policy
Problem: Backups consume all disk space
Solution: Implement automatic cleanup based on age and importance
Backup Checklist Summary
- ✓ Follow 3-2-1 rule (3 copies, 2 mediums, 1 offsite)
- ✓ Automate backups (daily for data, weekly for system)
- ✓ Use encryption for sensitive data
- ✓ Implement retention policies (delete old backups)
- ✓ Store backups offsite (different datacenter/cloud)
- ✓ Monitor backup success/failure
- ✓ Test restoration monthly
- ✓ Document recovery procedures
- ✓ Exclude unnecessary files (logs, cache)
- ✓ Review and update strategy quarterly
Conclusion
A comprehensive backup strategy is your insurance policy against the inevitable: hardware failures, human errors, security breaches, and natural disasters. The best time to implement proper backups was yesterday. The second best time is right now.
Start with the basics: automated daily backups stored in at least two locations, with monthly restoration tests. As your business grows, invest in more sophisticated solutions like Borg backup, managed backup services, or professional disaster recovery planning.
Your action items:
1. Set up automated backups using one of the scripts in this guide
2. Upload at least one backup to cloud storage
3. Schedule a calendar reminder to test restoration next month
4. Document your backup and recovery procedures
Your future self (and your users) will thank you.