When are people going to realize that an LLM is not a calculator and doesn’t actually know anything?
That it is not a calculator and is terrible at determinism is not debatable. However, its (very biased) vast store of knowledge is its core feature.
Then how come it’s inaccurate about 40% of the time when I already know the answer? It’s a bullshit factory: a chatbot fundamentally designed to sound like a person and respond to any prompt. Truth isn’t any part of the fundamental architecture of an LLM.

