

such a system would need a strict time limit for restoration after the catastrophe. Otherwise leeching would be too easy.


better would be something that can just eat a zfs send stream, but I guess for an emergency it's fine. I would still want to encrypt everything somehow, though.


a firewall can be used to filter incoming traffic by its properties, like source address, destination port, or protocol. most consumer home routers don't expose the firewall settings, though.
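as a sketch of what such filtering can look like when you do have access, here is a hypothetical nftables inbound ruleset (the interface name and ports are made up, not from any real setup):

```
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept    # replies to traffic we started
    iifname "lo" accept                    # loopback
    tcp dport { 22, 443 } accept           # e.g. allow ssh and https
    # everything else is dropped by the chain policy
  }
}
```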


oh! I don't know how nix containers work, but I would look into creating a shared network between the containers, one that is separate from the normal network.


oh, I see what you mean!
they do that for the sake of providing an example that works instantly, but in the long term it's not a good idea. if you intend to keep using a service, you are better off connecting it to a postgres db that's shared across all services. once you get used to it, you'll do that even for services you are just quickly trying out.
how I do this: I have a separate docker compose that runs a postgres and a mariadb, and these are attached to a docker network that is created once with a command, rather than in a compose file. in every compose file where the databases are needed, this network is specified as an "external" network. this way containers across separate compose files can communicate.
my advice is to also make this network "internal", which is a weird name, but the gist is that this network in itself won't provide access to your LAN or the internet, while other networks attached to the container can still do that if you want.
basically the setup is a simple command like "docker network create something something", and then about 3 lines in each compose file. you would also need to transfer the data from the separate postgreses to the central one, but that's a one-time process.
let me know if you are interested, and I’ll help with commands and what you need. I don’t mind it either if you only get around to this months later, it’s fine! just reply or send a message
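a minimal sketch of what I mean (the network and service names are just examples): create the network once with `docker network create --internal dbnet`, then reference it in each compose file:

```yaml
# compose file of some service that needs the shared database network
services:
  someapp:              # hypothetical service name
    image: someapp:latest
    networks:
      - dbnet           # reaches postgres/mariadb over the shared network
      - default         # keeps normal network access via the default network

networks:
  dbnet:
    external: true      # created outside compose, not managed by this file
```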


just to be clear, are you saying that most beginners just copy paste the example docker compose from the project documentation, and leave it that way?
I guess that's understandable. we should have more starter resources that explain things like this. how would they know? not everyone goes in with the curiosity to look up how certain components are supposed to be run


almost every self hosted service needs a database. and what do you mean "another" database? are you keeping a separate postgres for each service that wants one? one of the most important features of postgres is that a single database server can hold multiple databases, with permissions and whatnot
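as an illustration with made-up names, giving each service its own database and user on the one shared server looks roughly like this:

```sql
-- run as the postgres superuser; "nextcloud" is just an example service name
CREATE USER nextcloud WITH PASSWORD 'changeme';
CREATE DATABASE nextcloud OWNER nextcloud;

-- only that service's user may connect to its database
REVOKE CONNECT ON DATABASE nextcloud FROM PUBLIC;
GRANT CONNECT ON DATABASE nextcloud TO nextcloud;
```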


I think it depends. when you run many things for yourself and most services are idle most of the time, you need more RAM, and cpu performance is not that important. a slower CPU might make the services respond slower, but RAM is a hard limit on what you can run at all. 8 GB is indeed a comfortable amount when you don't need to run a desktop environment and a browser besides the services, but with things like Jellyfin and maybe even Immich, which hoard memory for cache, it's not that comfortable anymore.


it's probably hoarding it as "cache" when it thinks no other program needs it. maybe it would release some when the system comes under memory pressure, but that's not great, because those release mechanisms react very slowly


I'm aware of what the arr stack is for generally, but not with overseerr and jellyseerr


ok, but why do I want to use this? what does it do? what is its purpose?


the healthcheck URL should point to some HTTP API that the container makes available, so it should point to the container.
in place of localhost should be the container’s name, and port should be the port the container exposes as the web server. some services, like Jellyfin, have a specific webpage path for this purpose: https://jellyfin.org/docs/general/post-install/networking/advanced/monitoring/
and others, like gitea, hide the fact that they have a health check endpoint quite well, because it's not mentioned in the documentation: https://github.com/go-gitea/gitea/pull/18465
but check whether docker's way of doing healthchecks produces a lot of spam in the system log, in which case you could choose to just disable health checking, because the spam would push out useful logs
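for reference, a compose healthcheck for jellyfin could look something like this (the /health path is from the linked jellyfin docs; the port and intervals are just examples, and it assumes curl exists in the image). note that docker runs this check inside the container, so localhost works here, while a separate monitoring container would use the container name instead:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    healthcheck:
      # runs inside the container, hence localhost; assumes curl is installed
      test: ["CMD", "curl", "-f", "http://localhost:8096/health"]
      interval: 1m
      timeout: 10s
      retries: 3
```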
I heard blue iris can be run with wine on linux


I guess it’s just google sans, they use this placeholder elsewhere too


oh, LXC containers! I see. I never used them because I find the LXC setup more complicated; I once tried to use a turnkey samba container but couldn't even figure out where to add the container image to LXC, or how to start it any other way.
but also, I like that this way my random containerized services use a different kernel, not the main proxmox kernel, for isolation.
Additionally, having them as CTs mean that I can run straight on the container itself instead of having to edit a Docker file which by design is meant to be ephemeral.
I don’t understand this point. on docker, it’s rare that you need to touch the Dockerfile (which contains the container image build instructions). did you mean the docker compose file? or a script file that contains a docker run command?
also, you can run commands or open a shell in any container with docker, except if the container image does not contain a shell binary (and even then, copying in a busybox binary through a volume of the container would help), but that's rare too.
you do it like this: docker exec -it containername command. a bit lengthy, but bash aliases help
Also for the over committing thing, be aware that your issue you’ve stated there will happen with a Docker setup as well. Docker doesn’t care about the amount of RAM the system is allotted. And when you over-allocate the system, RAM-wise, it will start killing containers potentially leaving them in the same state.
in docker I don't allocate memory, and it's not common to do so; all containers share the system memory. docker has a rudimentary resource limit thingy, but what's better is that you can assign containers to a cgroup and define resource limits or reservations that way. I manage cgroups with systemd ".slice" units, and it's easier than it sounds
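for illustration, such a slice unit could look like this (the unit name and numbers are made up, and it assumes docker is configured with the systemd cgroup driver):

```ini
# /etc/systemd/system/selfhosted.slice -- one cgroup for all hosted containers
[Unit]
Description=Resource limits for self-hosted containers

[Slice]
MemoryMax=6G        # hard cap for everything in this slice
MemoryLow=2G        # soft reservation: this much is reclaimed last under pressure
CPUWeight=80        # relative CPU share compared to other slices
```

containers are then attached with `docker run --cgroup-parent=selfhosted.slice ...`, or with `cgroup_parent: selfhosted.slice` in a compose file.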


just know that sometimes their buggy frontend loads the analytics code even if you have opted out. there's an ages old issue about this on their github repo, closed because they don't care.
It’s matomo analytics, so not as bad as some big tech, but still.


unless you have a zillion gigabytes of RAM, you really don't want to spin up a VM for each thing you host. the separate OSes have a huge memory overhead, with all the running services, cache memory, etc. the memory usage of most services can vary a lot, so if you could just assign 200 MB RAM to each VM, that would be moderate, but you can't: when a VM needs more RAM than that, things will crash, possibly leaving operations half done and leading to corruption. and assigning 2 GB RAM to every VM is a waste.
I use proxmox too, but I only have a few VMs, mostly based on how critical a service is.


Honestly, this is the kind of response that actually makes me want to stop self hosting. Community members that have little empathy.
why? they were not telling them to quit self hosting, and it was not condescending either, I think. it was about work.
but truth be told IT is a very wide field, and maybe that generalization is actually not good. still, 15 containers is not much, and as I see it containers help keep all your hosted software from making a total mess of your system.
working with the terminal sometimes feels like working with long tools in a narrow space, not being able to fully use my hands. but UX design is hard, so making useful GUIs is hard, and it also takes much more time than making a well organized CLI tool.
in my experience the most important here is to get used to common operations in a terminal text editor, and find an organized directory structure for your services that work for you. Also, using man pages and --help outputs. But when you can afford doing it, you could scp files or complete directories to your desktop for editing with a proper text editor.
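for example (hostnames and paths here are made up):

```shell
# pull a service's whole directory to the desktop for comfortable editing
scp -r homeserver:/opt/services/myapp ./myapp-edit
# ... edit locally with a proper text editor ...
# then push the changed file back
scp ./myapp-edit/docker-compose.yml homeserver:/opt/services/myapp/
```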
and also accounting for low bandwidth connections… what's more, some shitty providers even have monthly data caps
yeah, that would be almost a necessary feature. being able to hold on to the backup when you really can’t restore.