• 0 Posts
  • 20 Comments
Joined 3 months ago
Cake day: November 21st, 2025

  • fizzle@quokk.auto to Selfhosted@lemmy.world · Syncthing Backup w Raspberry Pi · 26 days ago

    Hmm. An interesting point and a good consideration - maybe a reason not to make this recommendation to others. In my own case I’m not concerned.

    I’m using 1 TB SSDs. They’re pretty cheap now. I don’t think they suffer from any of the problems you’ve described?

    I couldn’t find any information about longevity offline vs online. In daily use SSDs do seem to be more reliable than HDDs, particularly as they get older.

    The other thing is my strategy is something like 4-3-2, so the offline copy is an additional, final Hail Mary. The chances that I would need it and that it had failed in the month or so since I last updated it are infinitesimal.

    Finally there are practical considerations. My offline copy resides in a physical safe in our house, and is unencrypted. If I were to die suddenly, this would be the most accessible copy of important documents, family photos, et cetera.

    It’s not a perfect system but it’s “pretty good” and I’m hoping I don’t die suddenly so there’s that LOL.


  • As others have said, sync isn’t backup - if you inadvertently delete something then it will get deleted everywhere.

    I’ve been using borgmatic (config interface for borg) for many years.

    A long while ago I switched to catch and release for media. Curating a large collection just took too much effort, and backing it up was impractical. You probably have something like 200 GB of movies, 20 GB of photos, and 20 MB of personal documents. These categories have different risk profiles - for me an offsite, air-gapped backup of the movies would be excessive, but one of the personal documents absolutely isn’t. It’s just an important consideration when designing a backup system.

    That said, 200 GB isn’t that much, and restic/borg will deduplicate your archives anyway. Just something to keep in mind.

    A low-powered PC in someone else’s apartment satisfies the second-location requirement. Will DNS be a problem?

    An alternative is to get two external drives. Keep one in your house and update it regularly, then swap it with the one at your sister’s place whenever you visit.


  • Should I be using Debian?

    That’s unanswerable but …

    I’ve used Debian exclusively for many years. There are several aspects that have served me well:

    • Debian is one of the older, more popular distros, with a huge community and a deep catalogue of already-solved problems.
    • It just makes sense to run the same OS on my desktop and on servers - no oddities between them.
    • It’s stable and boring.

    On that last point: before switching to Debian I (like everyone) enjoyed hopping between DEs and distros, because they look great and the constant change gives a feeling of progress. At some point, though, I realised I didn’t want my OS to be a distraction from what I’m actually doing. I want to get my work done, and something not working quite right due to some bug or update is a huge distraction. Debian’s release cycle mitigates that problem.

    In the before times it used to be annoying that the software in Debian’s repos lagged a long way behind current releases, but that’s not really a problem any more with the advent of Flatpak, Nix, and (my preference) AppImages.

    Recently I was tempted to switch to NixOS, but I didn’t.




  • Deduplication based on content-defined chunking is used to reduce the number of bytes stored: each file is split into a number of variable length chunks and only chunks that have never been seen before are added to the repository.

    A chunk is considered duplicate if its id_hash value is identical. A cryptographically strong hash or MAC function is used as id_hash, e.g. (hmac-)sha256.

    To deduplicate, all the chunks in the same repository are considered, no matter whether they come from different machines, from previous backups, from the same backup or even from the same single file.

    Compared to other deduplication approaches, this method does NOT depend on:

    • file/directory names staying the same: So you can move your stuff around without killing the deduplication, even between machines sharing a repo.

    • complete files or time stamps staying the same: If a big file changes a little, only a few new chunks need to be stored - this is great for VMs or raw disks.

    • The absolute position of a data chunk inside a file: Stuff may get shifted and will still be found by the deduplication algorithm.

    This is what their docs say. I’m not sure what you mean about different file types, but this seems fairly agnostic?

    I actually hadn’t realised that first point - that you can move folders and the chunks will still be deduplicated.
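    The scheme quoted above can be sketched in a few lines of Python. This is a toy illustration only - borg’s real chunker uses a buzhash rolling hash with configurable parameters and keyed MACs for chunk ids, none of which is reproduced here; the window size, mask, and `backup` helper are all made up for the demo.

```python
import hashlib
import random

MASK = 0x0F  # cut where the 4-byte window sum has its low 4 bits zero

def chunk(data: bytes):
    """Split data into variable-length chunks at content-defined boundaries."""
    chunks, start = [], 0
    for i in range(4, len(data)):
        # The boundary test depends only on nearby bytes, not on absolute position.
        if sum(data[i - 4:i]) & MASK == 0:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

def backup(data: bytes, repo: dict) -> int:
    """Add only previously unseen chunks to the repo; return how many were new."""
    new = 0
    for c in chunk(data):
        cid = hashlib.sha256(c).hexdigest()  # stand-in for borg's keyed id_hash
        if cid not in repo:
            repo[cid] = c
            new += 1
    return new

random.seed(0)
content = bytes(random.randrange(256) for _ in range(4096))

repo = {}
first = backup(content, repo)                   # initial backup stores every chunk
second = backup(b"new-prefix" + content, repo)  # shifted data: boundaries resync
print(first, second)
```

    Because cut points are chosen from the content itself, prepending bytes only changes the first chunk or two before the boundaries realign, so the second run stores a handful of new chunks instead of re-uploading everything - the “absolute position inside a file doesn’t matter” property from the docs.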




  • My docker files, configs, and volumes are all kept in a structure like:

    /srv
      /docker
        /syncthing
          /compose.yml
          /sync-volume
        /traefik
          /compose.yml
        [...]
    

    I just back up /srv/docker, but I blacklist some subfolders, e.g. raw database directories for which regular dumps are created instead. Currently the compressed/deduplicated repos consume ~350 GB.
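    The blacklist is just exclude patterns (borg’s `--exclude`, or `exclude_patterns` in a borgmatic config). A rough Python sketch of the effect, with a made-up directory name standing in for a raw database volume:

```python
import os
import tempfile

EXCLUDES = {"postgres-data"}  # hypothetical blacklist: raw DB dir, dumped separately

def files_to_backup(root: str):
    """Yield every file under root, skipping blacklisted directory names."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Pruning dirnames in place stops os.walk from descending into them.
        dirnames[:] = [d for d in dirnames if d not in EXCLUDES]
        for name in filenames:
            yield os.path.join(dirpath, name)

# Demo on a throwaway tree shaped like the /srv/docker layout above.
root = tempfile.mkdtemp()
for sub in ("syncthing", "postgres-data"):
    os.makedirs(os.path.join(root, sub))
    open(os.path.join(root, sub, "data.bin"), "w").close()

backed_up = sorted(files_to_backup(root))
print(backed_up)  # only the syncthing file; postgres-data was pruned
```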

    I use borgmatic because you do one full backup and everything thereafter is incremental, so it needs minimal bandwidth.

    I keep one backup repo on the server itself in /srv/backup - yes, this one is lost if that server fails, but it’s super handy to be able to restore from a local repo when you just mess up a configuration or a version upgrade.

    I keep two other backup repos in two other physical locations, and one repo air gapped.

    For example, I rent a server from OVH in a Sydney data centre: there’s one repo in /srv/backup on that server, one on OVH’s storage service, one on my home server, and one on a removable drive I update periodically.

    All repos are encrypted except the air-gapped one. That one includes instructions intended for someone to use if I die or am incapacitated, so it has my master password for my password database, SSH keys, everything. We have a physical safe at home, so that’s where it lives.


  • I’ve never tried restic.

    I’m happy with borg and have no real reason to switch.

    Just wanted to add that borgmatic is like a configuration manager for borg backup. It’s still a CLI and a config file, just running borg commands on the back end, but it adds some nice features like notifications while really simplifying the configuration required.


  • I’m not really confident in this answer but, “not that I’m aware of”.

    I use mxroute as a paid / hosted IMAP & SMTP provider. They run SpamAssassin, but it’s obviously not trained on my own reports.

    I’ve grown fond of Thunderbird as an email client. Its spam management is clunky, but if you spend 15 minutes or so learning how it works, and then train it with both junk and not-junk, it works reasonably well.
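    That train-on-junk/not-junk loop is Bayesian filtering. Here’s a toy sketch of the idea - not Thunderbird’s actual implementation; the corpus and function names are invented:

```python
import math
from collections import Counter

def train(messages):
    """Count word occurrences per class ('junk' / 'ham')."""
    counts = {"junk": Counter(), "ham": Counter()}
    for label, text in messages:
        counts[label].update(text.lower().split())
    return counts

def is_junk(counts, text):
    """Classify by summed log-likelihood with add-one smoothing."""
    vocab = len(set(counts["junk"]) | set(counts["ham"]))
    score = {}
    for label, words in counts.items():
        total = sum(words.values())
        score[label] = sum(
            math.log((words[w] + 1) / (total + vocab))
            for w in text.lower().split()
        )
    return score["junk"] > score["ham"]

# Invented training corpus: the real filter learns from your Junk / Not Junk clicks.
model = train([
    ("junk", "win free money now"),
    ("junk", "free prize claim now"),
    ("ham", "meeting notes attached"),
    ("ham", "family photos from the weekend"),
])
print(is_junk(model, "claim your free money"))    # scored as junk
print(is_junk(model, "photos from the meeting"))  # scored as ham
```

    This also shows why the occasional false positive is inevitable: a legit mail whose wording happens to resemble the junk side of the training data will score as junk.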

    Sadly, it does occasionally throw a false positive, like maybe twice in the last year it identified a legit email as spam.

    So, while I’m running a SpamAssassin and Thunderbird combo, it’s really TB that’s doing the work; SA just filters the super low-hanging fruit.

    TB is doing a very respectable job, but needs to be trained.