There are so, so many options here.
I’ve found alternativeto.net to be a great way to investigate alternative software:
https://alternativeto.net/software/mediawiki/?license=opensource&platform=self-hosted


The screenshots on the original project page seem to imply a proper web UI:
https://www.libreoffice.org/download/libreoffice-online/
Hard to be sure, but the menu bar doesn't look like the native menu.


Either plan is fine.


I'm certainly not an expert on such things, but I just didn't think bridged networks in VirtualBox (or Docker) were intended to work that way.
The behaviour you're seeing is exactly what I would have expected.
In Docker I think the solution would be to use the "host" network mode on the container.
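For what it's worth, in Compose that's a one-line setting. A minimal sketch (the service and image names are just placeholders, not anything from this thread):

```yaml
# minimal sketch; service and image are placeholders
services:
  some-service:
    image: nginx
    network_mode: host   # container shares the host's network stack
```

With network_mode: host there's no port mapping at all; the service binds directly on the host's interfaces, which is often the easiest way to get the "bridged-like" behaviour people expect. Note this only works this way on Linux hosts.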


I just sync the titles I want to listen to over to my phone using Syncthing.
It just works reliably in all cases with no dicking around.
Audiobookshelf is fine, but if Wife Approval Factor is important this is what I’d do.
That said, my partner and I are both Team Android, so if you're on iOS then you're SOL with Syncthing, I think.
Some kind of weird WebGL error on their site I’ve never seen before.
Doesn’t load for me.
LibreWolf doesn’t seem to be offering to activate canvas.
Oh well.


Nah.
Piracy was just my gateway.
I don't have a media server anymore.


Piracy, basically.
Self-hosting wasn’t my intention, I just wanted a media server. Then a media server that downloaded all my stuff easily. Then a server that was more accessible. Then a server that had better Wife-Approval-Factor.
Hmm. An interesting point and a good consideration - maybe a reason not to make this recommendation to others. In my own case I’m not concerned.
I’m using 1tb SSDs. They’re pretty cheap now. I don’t think they suffer from any of the problems you’ve described?
I couldn’t find any information about longevity offline vs online. In daily use SSDs do seem to be more reliable than HDDs, particularly as they get older.
The other thing is that my strategy is something like 4-3-2, so the offline copy is an additional final hail mary. The chances that I would need it and that it had failed in the month or so since I last updated it are infinitesimal.
Finally there are practical considerations. My offline copy resides in a physical safe in our house, and is unencrypted. If I were to die suddenly, this would be the most accessible copy of important documents, family photos, et cetera.
It’s not a perfect system but it’s “pretty good” and I’m hoping I don’t die suddenly so there’s that LOL.
As others have said, sync isn’t backup - if you inadvertently delete something then it will get deleted everywhere.
I’ve been using borgmatic (config interface for borg) for many years.
A long while ago I switched to catch and release for media. Curating a large collection just took too much effort, and backing it up was too impractical. Say you have 200GB of movies, 20GB of photos, and 20MB of personal documents: these categories have different risk profiles. For me an offsite air-gapped backup of movies would be excessive, but for personal documents it absolutely isn't. It's just an important consideration when designing a backup system.
That said, 200GB isn't that much, and restic/borg will deduplicate your archives anyway. Just something to keep in mind.
A low powered PC in someone else’s apartment satisfies the second location requirement. Will DNS be a problem?
An alternative is to get 2x external drives. Keep one in your house and update it whenever, then take it to your sister’s whenever you visit and swap it with the one left there.
Should I be using Debian?
That’s unanswerable but …
I’ve used Debian exclusively for many years. There are several aspects that have served me well:
On that last point: before switching to Debian I (like everyone) enjoyed trying different DEs and distros because they look great and the constant change gives a feeling of progress. However, at some point I realised that I didn't want my OS to be a distraction from what I'm actually doing. I want to get my work done, and something not working quite right with the OS due to some bug or update is a huge distraction. Debian's release cycle mitigates that problem.
In the before times it used to be annoying that the software in Debian's repos lagged a long way behind the current releases, but that's not really a problem with the advent of Flatpak, Nix, and (my preference) AppImages.
Recently I was tempted to switch to NixOS, but I didn’t.


You’re welcome to your own definition.
Whether you’re configuring a docker container running on a server in your basement or on a VPS the issues you encounter are going to be much the same. The definition of self-hosted isn’t really relevant.
If you want to exclude people running services on rented hardware that just seems dumb.


IDK what’s happened to you or why your post got removed.
Obviously “self-hosting” as a term is broad and subjective.
IMO this community discusses hosting services in an environment where you’re responsible for installing, configuring, and maintaining your own stuff.
A purist might argue that self-hosting doesn’t include services residing on a VPS, but what’s the point of excluding those discussions from this community? In practical terms the nature of the activity is the same.
Deduplication based on content-defined chunking is used to reduce the number of bytes stored: each file is split into a number of variable length chunks and only chunks that have never been seen before are added to the repository.
A chunk is considered duplicate if its id_hash value is identical. A cryptographically strong hash or MAC function is used as id_hash, e.g. (hmac-)sha256.
To deduplicate, all the chunks in the same repository are considered, no matter whether they come from different machines, from previous backups, from the same backup or even from the same single file.
Compared to other deduplication approaches, this method does NOT depend on:
file/directory names staying the same: So you can move your stuff around without killing the deduplication, even between machines sharing a repo.
complete files or time stamps staying the same: If a big file changes a little, only a few new chunks need to be stored - this is great for VMs or raw disks.
The absolute position of a data chunk inside a file: Stuff may get shifted and will still be found by the deduplication algorithm.
This is what their docs say. Not sure what you mean about different file types, but this seems fairly agnostic?
I actually didn't realise that first point: you can move folders around and the chunks will still be deduplicated.
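Borg's actual chunker uses a buzhash rolling hash, but the idea from the docs quoted above can be sketched in a few lines of Python (toy hash function and chunk sizes, not borg's real parameters):

```python
import hashlib

def chunk(data: bytes, mask: int = 0x3F, min_size: int = 16) -> list[bytes]:
    """Split data at content-defined boundaries.

    A toy stand-in for borg's buzhash chunker: keep a running hash of the
    bytes since the last boundary and cut whenever its low bits are all
    zero, giving chunks of roughly mask+1 bytes on average.
    """
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) ^ b) & 0xFFFFFFFF
        if i - start + 1 >= min_size and (h & mask) == 0:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def store(repo: dict[str, bytes], data: bytes) -> list[str]:
    """Add data's chunks to the repo keyed by id_hash (sha256 here);
    return the ordered chunk ids.

    Only chunks never seen before actually get stored, no matter which
    file, machine, or backup they came from.
    """
    ids = []
    for c in chunk(data):
        cid = hashlib.sha256(c).hexdigest()
        repo.setdefault(cid, c)  # no-op if the chunk already exists
        ids.append(cid)
    return ids
```

Because the chunk id depends only on the chunk's content, storing the same data again (or the same file under a new path) produces identical ids and adds nothing to the repo, which is exactly the "names staying the same doesn't matter" property.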
Yes, but rsync isn't a "backup".
Suppose I'd inadvertently deleted a heap of stuff last month: rsync would happily reflect that change on the remote. Borg will store the change, but you can still restore from an earlier point in time.
A docker volume?
I only use bind mounts; in that case you can put them wherever you like and move them while they're not mounted by a running container.
Docker volume locations are managed by Docker, and I don't use those, so they're not part of the above plan.
My docker files, configs, and volumes are all kept in a structure like:
/srv
└─ docker
   ├─ syncthing
   │  ├─ compose.yml
   │  └─ sync-volume
   └─ traefik
      └─ compose.yml
[...]
I just back up /srv/docker, but I exclude some subfolders, e.g. for databases where regular dumps are created instead. Currently the compressed/deduplicated repos consume ~350GB.
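For illustration, that setup can be sketched in a borgmatic config roughly like this (assuming borgmatic's current flattened schema; the repo path, exclude pattern, and retention numbers are made up, not my actual values):

```yaml
# /etc/borgmatic/config.yaml — minimal sketch, paths illustrative
source_directories:
  - /srv/docker

exclude_patterns:
  - /srv/docker/*/db-data    # hypothetical: raw DB files, dumped separately instead

repositories:
  - path: /srv/backup/main.borg
    label: local

keep_daily: 7
keep_weekly: 4
keep_monthly: 6
```

Then a cron or systemd timer just runs `borgmatic` and everything (create, prune, compact) happens according to the config.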
I use borgmatic because you do one full backup and thereafter everything is incremental (borg only uploads chunks it hasn't seen before), so minimal bandwidth.
I keep one backup repo on the server itself in /srv/backup - yes this will be prone to failure of that server but it’s super handy to be able to restore from a local repo if you just mess up a configuration or version upgrade or something.
I keep two other backup repos in two other physical locations, and one repo air gapped.
For example I rent a server from OVH in a Sydney data centre, there’s one repo in /srv/backup on that server, one on OVH’s storage service, one kept on my home server, and one on a removable drive I update periodically.
All repos are encrypted except the air-gapped one. That one has instructions intended for someone to use if I die or am incapacitated, so it has the master password for my password database, SSH keys, everything. We have a physical safe at home, so that's where it lives.
I’ve never tried restic.
I’m happy with borg and no real reason to switch.
Just wanted to add that borgmatic is like a configuration manager for borg backup. Still CLI & config file, and just running borg commands on the back end, but adds some nice features like notifications while really simplifying the configuration required.


I’m not really confident in this answer but, “not that I’m aware of”.
I use mxroute as a paid / hosted IMAP & SMTP server. They run SpamAssassin, but it's obviously not trained on my own reports.
I've grown fond of Thunderbird as an email client. Its spam management is clunky, but if you spend 15 minutes or so learning how it works, and then train it with both junk and not-junk, it works reasonably well.
Sadly, it does occasionally throw a false positive, like maybe twice in the last year it identified a legit email as spam.
So, while I'm running a SpamAssassin and Thunderbird combo, it's really TB that's doing the work; SA is just filtering the super low-hanging fruit.
TB is doing a very respectable job, but needs to be trained.
I wouldn’t say it’s “hard”, but taking responsibility for all the photos your wife took of your darling children growing up is… a thing.