

This does not look like it was generated by an off-the-shelf LLM. It could be from a custom fine-tuned LLM (or even a few-shot prompt), but it's likely not written by vanilla ChatGPT, Gemini, etc…
It can be really difficult to detect LLM-written text, but the easiest heuristics are:
- Specific keywords
- The use of three examples, often bullet points (Hah!)
- “Final thoughts” or a summary
That said, there are many techniques to make an LLM sound more like a particular author, so you never really know…
Final thoughts
In conclusion: we can’t be sure, but at first glance, this looks like it was written by a human.
And when the government comes knocking - and they are knocking, right now, today - these companies will hand it over
EDIT:
I have seen many people convert the em-dash into a single dash, much like OP does, e.g.:
And when the government comes knocking - and they are knocking, right now, today - these companies will hand it over
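As a minimal sketch, that conversion is just a string replacement (this snippet is illustrative, not from the thread; the sample text is shortened from the quote above):

```python
# Replace em-dashes (U+2014) with a spaced single dash,
# mimicking the style OP uses.
text = "comes knocking\u2014and they are knocking\u2014these companies"
converted = text.replace("\u2014", " - ")
print(converted)
```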
I think QSV is the new “easiest” way if you have an Intel CPU. Here are some docker compose values that might help:
```yaml
group_add:
  - "110"  # render
  - "44"   # video
devices:
  - /dev/dri/renderD128:/dev/dri/renderD128
```
110 is `render` and 44 is `video`. You can `grep render /etc/group` to find your values. I found CPU-accelerated transcoding to be as effective as GPU acceleration for my small media server setup; Nvidia wasn't worth it for me.
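For context, here is a sketch of where those values sit in a full Compose service. The `jellyfin` image and service name are assumptions for illustration; any QSV-capable media server works the same way:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin   # assumed media server, not from the original comment
    devices:
      # Pass the Intel iGPU render node through for Quick Sync (QSV)
      - /dev/dri/renderD128:/dev/dri/renderD128
    group_add:
      - "110"  # render group GID; verify with: grep render /etc/group
      - "44"   # video group GID
```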