4.6 Opus was a huge jump from earlier models and the first that was actually useful for things like this from my experience (and 4.7 is significantly worse for some reason).
I have made many anti-LLM posts here and I remain pretty negative on them, but they have absolutely become useful. Part of the problem is the truth is really somewhere between the insane promises and the dismissals.
My problems with them are manifold, though: they're propped up by insane subsidies, they consume massive amounts of power, and, the thing I care most about, they take power from the masses. The more useful they get, the more power gets concentrated in those able to afford the data centers.
Computers used to be at least somewhat democratizing. Sure, there were some things, like weather modeling, that an ordinary person couldn't do, but a random person on their computer could put something together to change the world.
What happens when the breakthroughs are available only for the wealthiest? Regular folks can buy tokens at a reasonable price today, but running cutting edge models on consumer hardware isn’t really feasible. We’ve ceded too much control.
I prefer Gemma 4. It does what I need. Obviously there are quite a few problems, but the democratization of technology is starting to catch up.
What democratization? The AI companies you prefer are creating a worse oligarchy in an economic and warmonger sense. This should disturb the people who don’t have their heads buried deep in the sand, or in the orifices of Satya Nadella or Dario Amodei.
That narrative is promoted by AI haters. The copyright people want to privatize human knowledge and charge rent for it. The latest lawsuit against Meta even includes Elsevier, ffs.
Then there are all the busybodies who want everything surveilled “for the children”, because people might be chatting about self-harm, generating nudes, or making some other “harmful” content.
a random person on their computer could put something together to change the world.

Yes. For example, the random people who founded these AI start-ups.
Right now, the world of technology is uniquely malleable in a way it has not been since the dotcom crash. That's what motivates most of the hate: people who feel they will lose out, e.g. the news media, which already suffers from the rise of the internet.
I think the people who will “lose out” include all but about 50 people, and then those 50 later. Look, it is awesome that this thing can find bugs and help complete code, but the way it is being made is actively destroying society, and those making it are marketing that as a feature while they burn down the forests and evaporate all the fresh water.
We need a better rollout plan, or we'll have bug-free browsers as consolation while most of us die. There are too many of us now to have no solution or mitigation plans for runaway resource consumption and carbon release.
How so?
The owners of the companies behind the largest AI models go on TV as often as they can get a reporter to point a camera at them, and almost every time they claim that these tools will be the end of work for most humans, while offering no solutions for that dramatic change.
So, in other words, they claim the AI tools will rapidly destroy one of the bigger underpinnings of Western society, and offer nothing to put in its place other than some half-assed UBI suggestions. If you take millions of people's jobs away in a short time, that's called a depression, and if those jobs are never coming back, that's the end of that society.
If that is where we are destined to go, doing so without a plan for what to do about the masses of unemployed working-age people will lead to global suffering, death, riots, and warfare. Rather than gleefully floor it over a cliff, perhaps we can take the reins from the sociopathic tech bros and try to gracefully migrate to a post-work society without most of us having to die.
Note that the previous paragraph is for the sake of the debate; I do not actually believe that LLMs will meaningfully disrupt global economics over the long term, once the vast, should-be-illegal money-duplicating scam that the AI companies and Nvidia are engaged in is put to a halt.
General_Effort, it's pretty clear that you are an AI evangelist, so you should know this already. But people who are genuinely unfamiliar should look at the creepy words of Anthropic's ally, Palantir.
Palantir’s ‘manifesto’ has been described as an ‘AI-driven threat to humanity’s existence’ and ‘technofascism’.
Palantir CEO Says a Surveillance State Is Preferable to China Winning the AI Race
Leaked: Palantir’s Plan to Help ICE Deport People