Block-level nightly backup of your Pi to AWS S3. Restore a complete, bootable system to new hardware in one command — no manual setup, no secrets to re-enter.
Two shell scripts: one runs nightly on the Pi, one restores everything to new hardware. No agents, no daemons, no cloud accounts beyond AWS.
Containers halt so databases flush all writes to disk. No dirty pages, no recovery needed on restore. Downtime: 5–15 min at 2am.
Reads only used blocks from each partition — not empty sectors. 954 GB NVMe at 28% full: partclone reads 267 GB, dd would read 954 GB.
pigz compresses in parallel using all Pi 5 cores. Output streams directly to S3 — no local temp file needed, no second disk required.
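Conceptually, each partition becomes one stream: partclone reads used blocks, pigz compresses on every core, and the AWS CLI uploads from stdin. The sketch below is illustrative, not the repo's actual script — the function name, bucket, and filesystem type (ext4) are assumptions.

```shell
# Sketch of one partition's backup stream, assuming partclone, pigz, and
# AWS CLI v2 are installed. Bucket and device names are illustrative.
backup_partition() {
  local part="$1"   # e.g. /dev/nvme0n1p2
  local key="$2"    # e.g. 2026-04-16/nvme0n1p2.img.gz
  sudo partclone.ext4 -c -s "$part" -o - \
    | pigz -p "$(nproc)" \
    | aws s3 cp - "s3://my-backup-bucket/${key}"
}
```

`-o -` makes partclone write the image to stdout, so nothing ever touches local disk on the way to S3.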
Docker restarts, manifest JSON uploaded, push notification sent via ntfy.sh. Optional SHA-256 verification confirms every file in S3.
sfdisk restores the saved GPT layout to the new device. Partitions are recreated exactly — same sizes, same order.
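A minimal sketch of that round trip — the dump file name and device are hypothetical, and the repo's script may differ:

```shell
# Save the GPT layout at backup time, reapply it at restore time.
save_layout()  { sudo sfdisk --dump /dev/nvme0n1 > layout.sfdisk; }
apply_layout() { sudo sfdisk /dev/nvme0n1 < layout.sfdisk; }
```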
Each partition streams S3 → gunzip → partclone.restore. No local download. Works from any Linux machine attached to the target drive.
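The restore direction is the backup pipeline reversed; a sketch under the same assumptions (bucket, function name, and object keys invented):

```shell
# Stream one image from S3 straight onto the target partition.
restore_partition() {
  local key="$1"    # e.g. 2026-04-16/nvme0n1p2.img.gz
  local part="$2"   # e.g. /dev/nvme0n1p2
  aws s3 cp "s3://my-backup-bucket/${key}" - \
    | gunzip -c \
    | sudo partclone.restore -s - -o "$part"
}
```

`aws s3 cp <url> -` downloads to stdout and `partclone.restore -s -` reads from stdin, which is what lets the restore run without local storage.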
Insert storage into the new Pi, power on. Raspberry Pi OS auto-expands the root filesystem on first boot. No extra config.
test-recovery.sh --post-boot checks OS, NVMe, Docker containers, Cloudflare tunnel, cron jobs, MariaDB, and HTTP — PASS/FAIL per check.
dd reads every sector regardless of whether it contains data. partclone reads the filesystem allocation bitmap and skips unallocated blocks. Same result, a fraction of the work.
| | dd | partclone |
|---|---|---|
| What it reads | Every sector (used + empty) | Used blocks only |
| Speed on 954 GB NVMe (28% full) | ~90 min | ~5 min |
| S3 upload size | ~10 GB (compressed zeros) | ~3–5 GB |
| Restore pipeline | gunzip \| dd | gunzip \| partclone (per partition) |
| Docker downtime | 60–90 min | 5–15 min |
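The used-versus-allocated gap the table turns on can be seen locally with a sparse file: a large apparent size but almost no allocated blocks, much like a mostly empty partition (file name is arbitrary; requires GNU stat):

```shell
# dd-style tools would read the apparent size; partclone-style tools
# only touch the allocated blocks.
truncate -s 1G sparse.img                        # 1 GiB apparent size
apparent=$(stat -c %s sparse.img)                # bytes a full read covers
allocated=$(( $(stat -c %b sparse.img) * 512 ))  # bytes actually on disk
echo "apparent=${apparent} allocated=${allocated}"
rm -f sparse.img
```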
Because it's a block-level image of the full device, there's nothing to configure. Every file, database, container, service, and SSH key is included automatically.
OS, kernel, packages, systemd services — including cloudflared, custom watchdogs, and any compiled binaries.
All images, volumes, networks, and compose configs. MariaDB data, WordPress uploads, application files — everything in /var/lib/docker.
.env files, config.env, credentials, authorized_keys, cron jobs, logrotate rules — all restored exactly as-is.
config.txt, cmdline.txt, and the full /boot/firmware partition. The restored Pi boots identically to the original.
GPT layout saved separately as a sfdisk dump and applied first on restore. Works across different NVMe sizes.
Custom I/O scheduler settings, udev rules, and performance tuning survive the restore intact.
One install script handles dependencies, AWS verification, lifecycle policy, cron scheduling, and a dry-run test.
SSH into your Pi and clone the repo.
```shell
git clone https://github.com/andrewbakercloudscale/pi2s3.git ~/pi2s3
cd ~/pi2s3
```
Prompts for your S3 bucket, region, and ntfy URL. Installs partclone, pigz, AWS CLI v2, sets up lifecycle policy and cron.
```shell
bash install.sh
```
Force an immediate backup to confirm everything works end-to-end. You'll get a push notification when done.
```shell
bash ~/pi2s3/pi-image-backup.sh --force
```
```
bash ~/pi2s3/pi-image-backup.sh --list

2026-04-16/   3.4 GB   raspberrypi (nvme0n1, mmcblk0)
2026-04-15/   3.3 GB   raspberrypi
2026-04-14/   3.3 GB   raspberrypi
```
If you use multiple AWS accounts, set a profile in config.env:
```shell
AWS_PROFILE="pi-backup"   # in config.env
```
```shell
bash ~/pi2s3/install.sh --status    # cron, log tail, dependency versions
bash ~/pi2s3/install.sh --upgrade   # git pull + redeploy
```
When your Pi dies: validate from your Mac, flash a minimal SD card, attach the target NVMe, SSH in, and restore. Full runbook in RECOVERY.md.
Confirms S3 image exists, reads the manifest, estimates flash time, prints the exact restore command.
```
bash ~/pi2s3/test-recovery.sh --pre-flash

✔ AWS access OK
✔ Image exists: 2026-04-16/ (3.4 GB)
✔ Manifest: raspberrypi · Pi 5 · Bookworm · nvme0n1
Estimated restore time: ~12 min
Run: pi-image-restore.sh --date 2026-04-16 --device /dev/nvme0n1
```
Interactive: pick backup date and target device. Or non-interactive for scripted recovery. Streams directly from S3 — no local storage needed.
```shell
bash ~/pi2s3/pi-image-restore.sh

# Or non-interactively:
bash ~/pi2s3/pi-image-restore.sh --date 2026-04-16 --device /dev/nvme0n1 --yes
```
Raspberry Pi OS auto-expands the root filesystem. Clear the old host key on your Mac before connecting.
```shell
ssh-keygen -R raspberrypi.local
ssh pi@raspberrypi.local
```
Checks filesystem expansion, NVMe mount, all Docker containers, Cloudflare tunnel, cron jobs, MariaDB, HTTP, memory, and load.
```
bash ~/pi2s3/test-recovery.sh --post-boot

✔ OS: Debian GNU/Linux 12 (bookworm) aarch64
✔ Filesystem expanded (954 GB)
✔ NVMe mounted at /mnt/nvme
✔ Docker: 6/6 containers running
✔ Cloudflare tunnel: active (2 ha_connections)
✔ Cron: pi2s3 backup + app-layer backup present
✔ MariaDB: responding, 42 tables
✔ HTTP: 200 OK on localhost
```
An optional monitor that runs every 5 minutes. If your site or tunnel goes down, it automatically recovers through three escalating phases before rebooting the Pi as a last resort.
Set CF_WATCHDOG_ENABLED=true in config.env and run bash install.sh --watchdog. That's it.
ntfy.sh alerts at every stage: first failure, each phase escalation, recovery, and "manual needed" if rate-limited from rebooting.
Phase 3 reboots are capped to once every 6 hours. If the rate limit is hit, an alert fires instead and the watchdog exits cleanly.
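One way such a cap can be implemented is a timestamp file. This is a hypothetical sketch, not the watchdog's actual code — the stamp path and function names are invented:

```shell
# Allow a phase-3 reboot only if the last one was over 6 hours ago.
REBOOT_STAMP="/tmp/pi2s3-last-reboot"   # invented path
COOLDOWN=$((6 * 3600))                  # 6 hours in seconds

can_reboot() {
  local now last
  now=$(date +%s)
  last=$(cat "$REBOOT_STAMP" 2>/dev/null || echo 0)
  [ $((now - last)) -ge "$COOLDOWN" ]
}

record_reboot() { date +%s > "$REBOOT_STAMP"; }
```

When `can_reboot` fails, the watchdog would fire the alert and exit instead of escalating.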
Each compressed image runs 3–5 GB, stored in S3 STANDARD_IA. Costs vary by region; af-south-1 (Cape Town) runs slightly higher than us-east-1.
| Retention | S3 storage | Monthly cost (STANDARD_IA) |
|---|---|---|
| 7 images | ~25 GB | <$1/month |
| 30 images | ~120 GB | ~$2/month |
| 60 images | ~240 GB | ~$3/month |
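The table is consistent with back-of-envelope math at roughly $0.0125 per GB-month — an assumed STANDARD_IA us-east-1 rate; check current pricing for your region:

```shell
# Back-of-envelope monthly storage cost; the per-GB rate is an
# assumption, not pulled live from AWS.
rate=0.0125   # USD per GB-month, assumed STANDARD_IA us-east-1
for gb in 25 120 240; do
  awk -v g="$gb" -v r="$rate" 'BEGIN { printf "%4d GB -> $%.2f/month\n", g, g * r }'
done
```

That prints roughly $0.31, $1.50, and $3.00 for the three retention tiers, matching the table after rounding.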
S3 lifecycle policy is installed automatically by install.sh --setup. Images beyond MAX_IMAGES (default: 60) are deleted automatically. Switch to GLACIER_IR for long-term cold storage at ~80% less cost.