

And, importantly, run the database on PostgreSQL, not SQLite, and implement the regular database maintenance steps explained in the wiki. I’ve been running mine like that in a small VM for about 6 months; I join large communities and run the WhatsApp, Google Messages and Discord bridges, and my DB is 400 MB.
Before, when I was still testing and hadn’t implemented the regular maintenance, it ballooned to 10 GB in 4 months.


I purge media older than 2 weeks using these. Then I purge the oldest history events from the largest rooms using these. Then I compress the DB using this.
It looks like this:

export PGPASSWORD=$DB_PASS
export MYTOKEN="mytokengoeshere"
export TIMESTAMP=$(date --date='2 weeks ago' '+%s%N' | cut -b1-13)

echo "DB size:"
psql --host core -U synapse_user -d synapse -c "SELECT pg_size_pretty(pg_database_size('synapse'));"

echo "Purging remote media"
curl \
  -X POST \
  --header "Authorization: Bearer $MYTOKEN" \
  "http://localhost:8008/_synapse/admin/v1/purge_media_cache?before_ts=${TIMESTAMP}"
echo ''

echo 'Purging local media'
curl \
  -X POST \
  --header "Authorization: Bearer $MYTOKEN" \
  "http://localhost:8008/_synapse/admin/v1/media/delete?before_ts=${TIMESTAMP}"
echo ''

echo 'Purging room Arch Linux'
export ROOM='!usBJpHiVDuopesfvJo:archlinux.org'
curl \
  -X POST \
  --header "Authorization: Bearer $MYTOKEN" \
  --data-raw '{"purge_up_to_ts":'${TIMESTAMP}'}' \
  "http://localhost:8008/_synapse/admin/v1/purge_history/${ROOM}"
echo ''

echo 'Purging room Arch Offtopic'
export ROOM='!zGNeatjQRNTWLiTpMb:archlinux.org'
curl \
  -X POST \
  --header "Authorization: Bearer $MYTOKEN" \
  --data-raw '{"purge_up_to_ts":'${TIMESTAMP}'}' \
  "http://localhost:8008/_synapse/admin/v1/purge_history/${ROOM}"
echo ''

echo 'Compressing db'
/home/northernlights/scripts/synapse_auto_compressor -p postgresql://$DB_USER:$DB_PASS@$DB_HOST/$DB_NAME -c 500 -n 100

echo "DB size:"
psql --host core -U synapse_user -d synapse -c "SELECT pg_size_pretty(pg_database_size('synapse'));"

unset PGPASSWORD

And periodically I run VACUUM on the database.
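If the `date … | cut` line in the script looks cryptic: the `before_ts` and `purge_up_to_ts` parameters of Synapse's admin API are Unix timestamps in milliseconds, and GNU date has no millisecond format code. A small sketch of what that line does (assumes GNU date, as on most Linux distros):

```shell
#!/bin/sh
# %s%N prints epoch seconds followed by nanoseconds, e.g. 1700000000123456789.
# The first 13 characters of that string are the epoch time in milliseconds.
TIMESTAMP=$(date --date='2 weeks ago' '+%s%N' | cut -b1-13)
echo "$TIMESTAMP"

# Equivalent without %N: take whole epoch seconds and append three zeros
# (loses sub-second precision, which doesn't matter for purging).
TIMESTAMP_ALT="$(date --date='2 weeks ago' '+%s')000"
echo "$TIMESTAMP_ALT"
```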
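If you purge more than a couple of rooms, the repeated curl invocations can be factored into a loop. A minimal sketch, using the same endpoint and the room IDs from the script above; `purge_room_history` is a hypothetical helper name, and it only prints the request it would make — swap the echo for the commented curl line to actually send it:

```shell
#!/bin/sh
# Same placeholders as in the script above.
MYTOKEN="mytokengoeshere"
TIMESTAMP=$(date --date='2 weeks ago' '+%s%N' | cut -b1-13)

# Build the purge_history request for one room. Dry-run: prints the
# request instead of sending it.
purge_room_history() {
  room="$1"
  url="http://localhost:8008/_synapse/admin/v1/purge_history/$room"
  body='{"purge_up_to_ts":'"$TIMESTAMP"'}'
  echo "POST $url $body"
  # curl -X POST --header "Authorization: Bearer $MYTOKEN" --data-raw "$body" "$url"
}

for room in '!usBJpHiVDuopesfvJo:archlinux.org' '!zGNeatjQRNTWLiTpMb:archlinux.org'; do
  purge_room_history "$room"
done
```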