A study conducted by researchers at IMDEA Networks Institute has revealed that ChatGPT (OpenAI), Claude (Anthropic), Grok, and Perplexity AI use various third-party trackers from Meta, Google, TikTok, and other companies, potentially exposing data about users’ conversations and activity. In just a few years, these generative AI systems have become widely adopted, with many...
I use an APK called Off Grid and load AI models onto it (right now I’m using Gemma 4). It’s all done on my phone. Nothing in the cloud. No data sent anywhere. Completely local. No entities get shit from it. The only way I’ll use AI.
I highly value privacy, but the gap between local LLMs and top-of-the-line cloud LLMs (e.g., Claude and DeepSeek) is still too great for me to switch completely to the former.
I’ll use PWAs to sandbox LLMs from everything else (and each other) and try to create semantic distance between the user and the queries.
How about that leaked Claude source code? Is there a reliably clean version of that available anywhere yet?
I have my local LLM set up, and it runs just as well as Sonnet 4.6 from a quality standpoint; on performance it’s slightly slower, but still faster than I can respond.
This is on a Strix Halo APU with 128 GB of unified memory, running the latest Qwen3.6 models with llama.cpp.
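For anyone curious what a setup like that looks like in practice, here’s a minimal sketch using the llama-cpp-python bindings to llama.cpp. The model path and parameters are illustrative placeholders, not the commenter’s actual config; everything runs on the local machine, with no data sent anywhere.

```python
# Minimal local-inference sketch with llama-cpp-python
# (pip install llama-cpp-python). Model path and settings below
# are hypothetical placeholders, not the commenter's real setup.
from llama_cpp import Llama

llm = Llama(
    model_path="models/qwen3-q4_k_m.gguf",  # hypothetical local GGUF file
    n_ctx=8192,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU/APU if supported
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize llama.cpp in one sentence."}],
)
print(response["choices"][0]["message"]["content"])
```

On a unified-memory machine like a Strix Halo, offloading all layers keeps inference entirely on-device, which is the whole privacy point being made above.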
Gemma 4 seems pretty legit so far.