
All of your MongoDB backups - or at least some of them - deserve testing.
One way to ensure a backup can actually be used in a restore is to join a new ephemeral node to the cluster, run tests on it, and then purge it.
This lab involves:
1. accessing an available backup
2. copying datafiles from the backup to a local filesystem
3. spinning up a mongodb process that mounts the backup's datafiles
4. adding an ephemeral node to the replica set and letting it sync
5. running tests on the restore node
6. purging the restore node
This lab requires a few layers:
Someone on LinkedIn said MinIO was dead because its repository entered maintenance mode, so I started looking at the options out there.
One of them is Garage; check out their work here:
https://garagehq.deuxfleurs.fr/documentation/quick-start/
All you need is an s3 compatible storage layer that PBM can talk to.
Here are the steps I took for setting up garage:
mkdir -p /bkp/data
mkdir /bkp/meta
chown -R $USER: /bkp
cat > garage.toml <<EOF
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
db_engine = "sqlite"
replication_factor = 1
rpc_bind_addr = "[::]:3901"
rpc_public_addr = "127.0.0.1:3901"
rpc_secret = "$(openssl rand -hex 32)"
[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
root_domain = ".s3.garage.localhost"
[s3_web]
bind_addr = "[::]:3902"
root_domain = ".web.garage.localhost"
index = "index.html"
[k2v_api]
api_bind_addr = "[::]:3904"
[admin]
api_bind_addr = "[::]:3903"
admin_token = "$(openssl rand -base64 32)"
metrics_token = "$(openssl rand -base64 32)"
EOF
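Because the heredoc delimiter is unquoted, the $(openssl ...) substitutions are expanded when the file is written, so garage.toml ends up with literal random secrets. If you want to keep a copy of them, generate the secrets up front instead (a sketch):

```shell
# rpc_secret must be a 32-byte secret, hex-encoded (64 hex characters).
RPC_SECRET=$(openssl rand -hex 32)
ADMIN_TOKEN=$(openssl rand -base64 32)
# sanity check before substituting these into garage.toml
test "${#RPC_SECRET}" -eq 64 && echo "rpc_secret OK"
```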
Note that the host's /bkp directory will be mounted at /var/lib/garage inside the container, which is why the config points there.
Start your podman garaged container:
podman run -d \
--name garaged \
--network host \
--restart always \
-v ./garage.toml:/etc/garage.toml:ro,Z \
-v /bkp:/var/lib/garage:Z \
dxflrs/garage:v2.2.0
Create an alias for running garage commands inside the container, then check its status:
alias garage="podman exec -ti garaged /garage"
garage status
==== HEALTHY NODES ====
ID Hostname Address Tags Zone Capacity DataAvail Version
<NODE_ID> fedora 127.0.0.1:3901 NO ROLE ASSIGNED v2.2.0
Take note of the Node ID.
Assign the node a 20GB capacity layout in zone dc1 and apply it, using the Node ID:
apollo@fedora:/repos/labs$ garage layout assign -z dc1 -c 20G <NODE_ID>
apollo@fedora:/repos/labs$ garage layout apply --version 1
Create PBM bucket and make sure it exists:
garage bucket create pbm
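To confirm the bucket actually exists (using the garage alias defined above):

```shell
garage bucket list
```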
Create an API key using the following command and take note of the Key ID and Secret key displayed:
garage key create pbm
==== ACCESS KEY INFORMATION ====
Key ID: <KEY_ID>
Key name: pbm
Secret key: very_secret_key
Created: 2026-02-18 23:46:31.255 +00:00
Validity: valid
Expiration: never
Make sure key exists:
garage key list
Allow key to access bucket:
garage bucket allow --read --write --owner pbm --key pbm
Run aws configure or set the credential environment variables for accessing S3:
[admin@db-1 ~]$ aws s3 ls
2026-02-18 23:43:29 pbm
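For the record, this is roughly what the AWS CLI needs in order to talk to Garage instead of AWS. The values come from the Garage setup above; AWS_ENDPOINT_URL requires a reasonably recent AWS CLI v2, otherwise pass --endpoint-url on each call:

```shell
# Substitute the Key ID / Secret printed by `garage key create pbm`.
export AWS_ACCESS_KEY_ID='<KEY_ID>'
export AWS_SECRET_ACCESS_KEY='<SECRET_KEY>'
export AWS_DEFAULT_REGION='garage'               # must match s3_region in garage.toml
export AWS_ENDPOINT_URL='http://127.0.0.1:3900'  # Garage's S3 api_bind_addr
aws s3 ls                                        # should list the pbm bucket
```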
I created some fake data with Daniel's great Percona Load Generator for MongoDB. It's a neat project for simulating the most diverse workloads.
After inserting data for 5 minutes, I got a data set with less than 5 GBs of data, which should be enough for this demo.
PBM (Percona Backup for MongoDB) will be used; it's the only production-grade open source MongoDB backup tool I know of.
The physical backups this lab relies on do require running MongoDB on PSMDB, Percona's drop-in replacement.
Please learn more about it here: https://docs.percona.com/percona-backup-mongodb/index.html
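For completeness, pointing PBM at the Garage bucket takes a storage config along these lines. This is a sketch matching the endpoint shown in the pbm status output; substitute the Garage key pair you created earlier:

```shell
# Write the PBM storage config and apply it on any node running pbm-agent.
cat > /tmp/pbm_storage.yaml <<'YAML'
storage:
  type: s3
  s3:
    endpointUrl: http://192.168.0.174:3900
    region: garage
    bucket: pbm
    prefix: pbm
    credentials:
      access-key-id: <KEY_ID>
      secret-access-key: <SECRET_KEY>
YAML
pbm config --file /tmp/pbm_storage.yaml
```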
If you are using PBM, I assume you have access to a few backups available.
In my case, I access a server with PBM enabled and run pbm status:
[admin@db-1 ~]$ pbm status
Cluster:
========
rs0:
- db-1:27017 [S]: pbm-agent [v2.12.0] OK
- db-2:27017 [S]: pbm-agent [v2.12.0] OK
- db-3:27017 [P]: pbm-agent [v2.12.0] OK
PITR incremental backup:
========================
Status [ON]
Running members: rs0/db-1:27017;
Currently running:
==================
(none)
Backups:
========
S3 garage http://192.168.0.174:3900/pbm/pbm
Snapshots:
2026-03-27T00:02:01Z 669.02MB <physical> success [restore_to_time: 2026-03-27T00:02:03]
2026-03-24T00:02:01Z 0.00B <physical> failed [ERROR: Backup stuck at `starting` stage, last beat ts: 1774310521] [2026-03-24T00:02:01]
2026-03-23T00:02:01Z 671.25MB <physical> success [restore_to_time: 2026-03-23T00:02:03]
2026-03-20T00:02:01Z 667.52MB <physical> success [restore_to_time: 2026-03-20T00:02:03]
2026-03-19T23:42:54Z 667.35MB <physical> success [restore_to_time: 2026-03-19T23:42:56]
2026-03-17T00:02:01Z 721.09MB <physical> success [restore_to_time: 2026-03-17T00:02:03]
2026-03-15T00:28:26Z 151.84MB <physical> success [restore_to_time: 2026-03-15T00:28:29]
Keep in mind:
The key here is having access to an S3-compatible bucket containing the MongoDB datafiles.
You also need DNS resolution between the source replica set and the target database pods - no matter how. In my case I have my own local libvirt network named 'priv', whose DNS listens on 192.168.100.1.
# This is how I connect:
alias mongosh="podman run --network host --dns 192.168.100.1 -ti --rm alpine/mongosh mongosh"
# Build the image
podman build -t psmdb-s2d-aws .
# Check version
podman run --rm psmdb-s2d-aws mongod --version
List and pick the backup of your choice:
[admin@db-1 ~]$ pbm list
Backup snapshots:
2026-03-15T00:28:26Z <physical> [restore_to_time: 2026-03-15T00:28:29]
2026-03-17T00:02:01Z <physical> [restore_to_time: 2026-03-17T00:02:03]
2026-03-19T23:42:54Z <physical> [restore_to_time: 2026-03-19T23:42:56]
2026-03-20T00:02:01Z <physical> [restore_to_time: 2026-03-20T00:02:03]
2026-03-23T00:02:01Z <physical> [restore_to_time: 2026-03-23T00:02:03]
2026-03-27T00:02:01Z <physical> [restore_to_time: 2026-03-27T00:02:03]
PITR <on>:
2026-03-15T00:28:30 - 2026-03-27T00:02:29
Create directory for decompressing data from s3:
mkdir /bkp/decompressed
List desired backup directory:
apollo@fedora:/repos/labs$ aws s3 ls s3://pbm/pbm/2026-03-27T00:02:01Z/rs0/
PRE admin/
PRE airline/
PRE config/
PRE journal/
PRE local/
2026-03-26 21:02:10 12045 WiredTiger.backup.s2
2026-03-26 21:02:10 68 WiredTiger.s2
2026-03-26 21:02:06 22807 WiredTigerHS.wt.s2
2026-03-26 21:02:06 7057 _mdb_catalog.wt.s2
2026-03-26 21:02:10 15045 filelist.pbm
2026-03-26 21:02:10 2293 sizeStorer.wt.s2
2026-03-26 21:02:10 132 storage.bson.s2.0-114
Start file download:
apollo@fedora:/repos/labs$ aws s3 cp s3://pbm/pbm/2026-03-27T00:02:01Z/rs0/ /bkp/decompressed --recursive
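The .s2 suffix on the downloaded files indicates the datafiles are s2-compressed (a snappy-compatible framing PBM uses for physical backups), so mongod cannot read them as-is. One way to decompress them, assuming the s2d tool from github.com/klauspost/compress is installed (presumably where the psmdb-s2d-aws image name comes from):

```shell
# Decompress every .s2 datafile in place; by default s2d writes the
# output next to the source, dropping the .s2 extension.
# Install: go install github.com/klauspost/compress/s2/cmd/s2d@latest
find /bkp/decompressed -type f -name '*.s2' -exec s2d {} \;
```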
Don't forget the keyfile is necessary too:
# Copy from production!
sudo cp /etc/mongod_keyfile .
sudo chown admin. mongod_keyfile
# Logout and copy file to your target server
logout
Connection to 192.168.100.10 closed.
$ scp admin@192.168.100.10:~/mongod_keyfile .
mongod_keyfile 100% 130 346.2KB/s 00:00
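mongod refuses to start if the keyfile is group- or world-readable, so fix the permissions before handing it to the container:

```shell
# keyfile must be readable only by its owner
chmod 600 mongod_keyfile
stat -c '%a' mongod_keyfile   # expect 600
```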
The key is using the libvirt network DNS so the container can resolve the replica set hostnames:
# Run ephemeral node with DNS pointing to libvirt network DNS
podman run -d --name psmdb-test --network host --dns 192.168.100.1 --dns-search default.local -v /bkp/decompressed:/data/db:Z -e HOST_IP="192.168.100.99" -e MONGOD_CONFIG=./mongod.conf psmdb-s2d-aws
DNS flags explained:
- --dns 192.168.100.1: use libvirt's dnsmasq to resolve VM hostnames
- --dns-search default.local: append the domain to bare hostnames
Verify DNS works from inside container:
podman exec -ti psmdb-test bash
ping db-1 # should resolve to 192.168.100.10
ping db-1.default.local # FQDN should also work
From a primary node or using mongosh with DNS:
mongosh "mongodb://db-1:27017/?replicaSet=rs0" --eval '
rs.add({
host: "<host-ip-or-hostname>:27017",
priority: 0,
votes: 0
})
'
Or, if hostname resolution is not available, add the ephemeral node by IP:
mongosh "mongodb://db-1:27017/?replicaSet=rs0" --eval '
rs.add({
host: "192.168.100.<new-ip>:27017",
priority: 0,
votes: 0
})
'
# Check replica set status
mongosh "mongodb://db-1:27017/?replicaSet=rs0" --eval 'rs.status()'
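Polling for the node to reach SECONDARY can be scripted instead of eyeballing rs.status() (a sketch; the ephemeral hostname psmdb-test is an assumption from my setup):

```shell
# Loop until the ephemeral member reports SECONDARY (i.e. initial sync done).
until mongosh "mongodb://db-1:27017/?replicaSet=rs0" --quiet --eval '
  rs.status().members
    .some(m => m.name.startsWith("psmdb-test") && m.stateStr === "SECONDARY")
' | grep -q '^true$'; do
  echo "waiting for initial sync..."
  sleep 10
done
```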
# Wait for SECONDARY state, then run tests directly against the restored node
# (use directConnection; with ?replicaSet= the driver routes reads to the primary)
mongosh "mongodb://psmdb-test:27017/?directConnection=true" --eval 'db.getMongo().setReadPref("secondaryPreferred"); db.collection.find().limit(5)'
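Beyond a simple find(), a few basic sanity checks work well here: list the databases and count documents in the restored data. This is a sketch; the airline database is the one seeded by the load generator, so adjust names to your own data:

```shell
mongosh "mongodb://psmdb-test:27017/?directConnection=true" --quiet --eval '
  // a votes:0 secondary still serves reads once readPref allows it
  db.getMongo().setReadPref("secondaryPreferred");
  const admin = db.getSiblingDB("admin");
  printjson(admin.adminCommand({ listDatabases: 1 }).databases.map(d => d.name));
  const airline = db.getSiblingDB("airline");
  airline.getCollectionNames().forEach(c =>
    print(c, airline[c].countDocuments()));
'
```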
# Remove from replica set
mongosh "mongodb://db-1:27017/?replicaSet=rs0" --eval '
rs.remove("<ephemeral-host>:27017")
'
# Stop and remove container
podman stop psmdb-test
podman rm psmdb-test
# Clean up backup data if desired
rm -rf /bkp/decompressed/*
Author: epaminondas
Tags: #mongodb #backups #restore