So if I want a new container stack, I make a new Proxmox “disk” in the ZFS filesystem under the Hardware tab of the VM. The “disk” shows up in the VM when I reboot it (there are ways of rescanning block devices online, but this is easier). I find the new block device and mount it in the VM at a subfolder of /stacks, which becomes the new container stack location. I also add that mount point to fstab.
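Inside the VM, that boils down to something like this (a minimal sketch; the device name /dev/sdb, the ext4 filesystem, and the stack name “myapp” are placeholders, not my actual values):

```bash
lsblk                                    # identify the newly attached disk
mkfs.ext4 /dev/sdb                       # format it (any filesystem works)
mkdir -p /stacks/myapp
mount /dev/sdb /stacks/myapp

# Persist the mount across reboots; mounting by UUID is safer than /dev/sdb
echo "UUID=$(blkid -s UUID -o value /dev/sdb) /stacks/myapp ext4 defaults 0 2" >> /etc/fstab
```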
So now I have a mounted volume at /stacks/container-name. I put a docker-compose.yml in there, and all the data the stack uses lives in subfolders of that folder via bind mounts in the compose file. When I back up, the ZFS dataset that contains everything in that compose stack is snapshotted and backed up as a point-in-time copy. If that stack has a postgres database, it and all the data it references are internally consistent because they were snapshotted together before backup. If I restore the entire folder from backup, the stack just thinks it had a power outage, replays its journals in the database, and all’s well.
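For illustration, a stack folder might look something like this; the service names, images, and subfolder names are made up, not from my actual compose files:

```bash
# Everything lives inside the dataset-backed folder, so one snapshot catches
# the compose file, the app data, and the database together.
mkdir -p /stacks/myapp/{app-data,db-data}

cat > /stacks/myapp/docker-compose.yml <<'EOF'
services:
  app:
    image: nginx:stable                         # placeholder application image
    volumes:
      - ./app-data:/usr/share/nginx/html        # bind mount relative to the stack folder
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme
    volumes:
      - ./db-data:/var/lib/postgresql/data      # DB files live in the same dataset
EOF
```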
So when you have a backup in PBS, from your Proxmox node you can access the backups via the filesystem browser on the left.
When you go to that backup, you can choose to do a File Restore instead of restoring the entire VM. Here I am walking the storage for my Nextcloud data within the backups, and I can do the same for every discrete backup.
If I want to restore just a container, I download that “partition” and transfer it to the docker VM. Bring down the container stack in question, blow out everything in that folder, and then restore the contents of the download to the container folder. Start up the docker stack for that folder and it’s back to where it was. Alternatively, I could restore individual files if I wanted.
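On the docker VM that restore is just file shuffling, roughly like this (hypothetical paths, and it assumes the File Restore download was transferred over as a tar archive):

```bash
cd /stacks/myapp
docker compose down                  # bring the stack down first
find . -mindepth 1 -delete           # blow out everything in the folder, dotfiles included
tar xf /tmp/myapp-restore.tar -C .   # restore the contents of the download
docker compose up -d                 # the stack comes back as of the snapshot
```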
Yes. So my Debian docker host has some datasets attached, mounted via fstab, and I specify that mount path as the datadir for NCAIO.
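Roughly like this, with made-up paths and device, assuming the standard Nextcloud AIO NEXTCLOUD_DATADIR variable is what points the data directory at that mount:

```bash
# /etc/fstab entry for the dataset-backed disk that holds the Nextcloud data
# (UUID and mount point are placeholders):
#   UUID=<uuid-of-that-disk>  /mnt/ncdata  ext4  defaults  0  2

# Point Nextcloud AIO at that mount when starting the mastercontainer:
docker run -d --init --name nextcloud-aio-mastercontainer \
  --restart always \
  -p 8080:8080 \
  -e NEXTCLOUD_DATADIR=/mnt/ncdata \
  -v nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest
```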
Then when a PBS backup of that VM runs, all the datasets that Proxmox is managing for it are snapshotted, and that’s what’s backed up to PBS. Since it’s a snapshot, I can back up hourly if I want, and PBS dedups, so the backups don’t use a lot of space.
Other docker containers might have a mount that’s used as a bind mount inside the compose.yml to supply data storage.
Also, I have more than one backup job running on PBS so I have multiple backups, including on removable USB drives that I swap out (I restart the PBS server to change drives so it automounts the ZFS volumes on those removable drives and is ready for the next backup).
You could mount ZFS datasets you create in Proxmox as SMB shares in a sharing VM, and it would be handled the same.
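For example, a sharing VM could expose one of those dataset-backed mounts over Samba with something like this (share name and path are made up; the dataset gets snapshotted and backed up exactly the same way):

```bash
cat >> /etc/samba/smb.conf <<'EOF'
[media]
   ; dataset-backed mount inside the sharing VM
   path = /mnt/media
   read only = no
   browseable = yes
EOF
systemctl restart smbd
```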
As for documentation, I’ve never really seen this approach documented anywhere, but it seems to work. I’ve done restores of entire container stacks this way, as well as walked the backups to restore individual files from PBS.
If you try it and have any questions, ping me.
ikidd@lemmy.world to Technology@lemmy.world • UK Official Calls for Age Verification on VPNs to Prevent Porn Loophole • English · 2 · 4 days ago
I wonder how they figure that’s going to work out.
I couldn’t imagine being this pants-shittingly stupid about how the internet works.
Thank you for shitting your pants on our behalf.
I run a docker host in Proxmox using ZFS datasets for the VM storage for things like my mailserver and Nextcloud AIO. When I back up the docker VM, it snapshots the VM at a point in time and backs up the snapshot to PBS. I’ve restored from that backup and, as far as the data is concerned, it’s like the machine had just shut down. It journals itself back to a consistent state with no data loss.
I wouldn’t run TrueNAS at all because I have no idea how it’s managing its storage and wouldn’t trust the result.
ikidd@lemmy.world to Selfhosted@lemmy.world • Is there a selfhosted eBooks app that can do this? • English · 0 · 8 days ago
Does it sync progress?
Caves. We should all live in caves.