For backups, mapping one file to one object rarely works well. Tools that use this strategy come with a long list of scenarios that require expensive operations: for instance, renaming a directory or changing a few bytes in a large file.
On the other hand, tools that don't view the object storage as a filesystem have far fewer gotchas.
That's a fair point, but this is for my family photos and videos. In the event I die, I'd rather the S3 bucket I hand over to my wife/kids in my last wishes look like a real filesystem rather than a bazillion blobs that require a special tool and a programmer's expertise to reassemble.
" ... in the event I die I'd rather the S3 bucket I hand over to my wife/kids in my last wishes look like a real filesystem rather than a bazillion blobs that require a special tool ..."
You'd need a cloud storage provider that just gave you a plain old UNIX filesystem to do whatever you want with.
" ... a cloud storage provider that just gave you a plain old UNIX filesystem to do whatever you want with."
Doesn't iCloud Drive fit the bill?
Access to it is slightly obscure (~/Library/Mobile Documents/com~apple~CloudDocs/), but wouldn't that work?
It's free for me to use, since I'm already paying Apple $10/mo to back up the family's iPhones. We're only using a little over 200 GB of the 2000 GB we have. (I'm sure Apple is counting on most people not using their full allotment.)
I've only put a few files out there, so maybe there are a lot of potential pitfalls. But it doesn't get much simpler than using cp or mv.
In reality it's most emphatically not a "plain old UNIX filesystem". Apple is doing some magic and storing blobs out in Amazon S3 or in their own datacenters. But to me it has the appearance of a UNIX (POSIX?) filesystem.
I realize that rsync.net couldn't survive with a business model that limits users to 2000 GB, which is Apple's maximum. But I thought I'd mention it, since it just might be the perfect "free" solution for a lot of people.
It's too bad you guys cost ~2x as much for storage as S3 when I evaluated you in 2018... ($0.04/GB vs $0.023/GB) ;) Glad to see you're beating S3 in $/GB now!
rclone is a nice way to sync files to B2. It also has a mount option that quite literally mounts the cloud storage as a filesystem, though this requires Linux and a little bit of know-how. However, in a pinch B2 has a serviceable web interface, and Backblaze will even ship you a drive of your files on request, so I think it would be pretty usable by just about anyone.
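For the curious, a minimal sketch of both modes. It assumes a B2 remote named `b2` has already been created with `rclone config`; the bucket and directory names here are made up:

```shell
# Assumes `rclone config` has already defined a B2 remote named "b2";
# the bucket ("family-backup") and paths are hypothetical examples.
rclone sync ~/Pictures b2:family-backup/Pictures --progress

# Or expose the bucket as a normal directory (Linux, requires FUSE):
mkdir -p ~/b2-mount
rclone mount b2:family-backup ~/b2-mount --read-only --daemon
# ...browse it with ls/cp like any other directory, then unmount:
fusermount -u ~/b2-mount
```

Mounted read-only like this, a relative could browse and copy files out with ordinary tools and no understanding of rclone beyond the mount command.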
I have a local ZFS pool of hard drives and a script that `rsync -avz`s my iPhone's photos+videos to it. Then a separate cron script that periodically syncs from the ZFS pool to my S3 bucket using `aws s3 sync`. The S3 bucket has versioning turned on so it's effectively append-only.
I used to be able to trigger my iPhone -> ZFS script when I plugged my iPhone into my Ubuntu desktop using udev (also had to wrap it in flock[1] because it would trigger multiple times for some reason), but at some point that stopped working and I've been too lazy to figure out why.
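The flock trick is worth seeing in isolation: since udev can fire the same rule several times per plug-in, wrapping the script in a non-blocking lock means every invocation after the first exits immediately. A self-contained demo (the lock file path is arbitrary):

```shell
# First "invocation" grabs the lock and holds it for 2 seconds,
# standing in for a long-running sync script.
flock -n /tmp/phone-sync.lock sleep 2 &
sleep 0.2                                 # give it time to acquire

# Second "invocation" arrives while the first still holds the lock;
# with -n (non-blocking) it fails instantly instead of queueing up.
if flock -n /tmp/phone-sync.lock true; then
    echo "lock acquired"                  # would run the sync here
else
    echo "already running"                # duplicate trigger: exit quietly
fi
wait
```

In the real setup the udev RUN+= rule would point at a one-line wrapper like `flock -n /tmp/phone-sync.lock /usr/local/bin/sync-phone.sh`.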
It's far from perfect, but for me it works alright. In this scenario I prefer straightforward and slightly kludgy over something with hidden complexity that could go wrong in so many ways. Can you imagine if you used a tool like restic or borg and the pack encoding format changed, or the tool's sources were simply gone, when your relatives have to figure out how to get at the files in 10, 15, 20 years? I don't want my relatives playing code detective or archaeologist!
Which reminds me of a downside to tools like restic and borg that I forgot to mention. When I evaluated them for my hundreds of GB of family pics+videos, there was a "dedupe" step that all these tools want to perform. When I tested them a couple of years ago they were dog slow on my files, because pics and video are already highly compressed and there is very little deduping you're going to wring out of them unless you have multiple copies of the same files. IIRC borg took several hours to run, and at the end it reported 0.01% or less dedupe efficiency. Also, as I recall, there was no way to opt out of the dedupe step, due to the way borg stores "packs". Very annoying!
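The near-zero ratio is expected: JPEG and H.264 output is statistically close to random bytes, so chunk-level dedupe finds no repeated chunks and generic compression finds no redundancy either. A quick stand-in experiment, using random data in place of an already-compressed photo:

```shell
cd "$(mktemp -d)"
# random bytes stand in for already-compressed JPEG/H.264 data
head -c 1048576 /dev/urandom > media.bin
gzip -c media.bin > media.bin.gz
wc -c media.bin media.bin.gz   # the .gz ends up no smaller than the original
```

The compressed copy comes out slightly *larger* than the input, since deflate falls back to stored blocks plus header overhead when it finds nothing to squeeze, which is essentially the situation borg's dedupe pass was in.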
" ... tools that don't view the object storage as a file system have far fewer gotchas ..."
In my experience B2 + restic works really well.