I heard that there would be a new Great Depression.

I’ve also heard a lot of different theories, and one of them interested me: after the bubble bursts, AI won’t disappear — why else were so many data centers built? AI will be embedded everywhere, and people won’t even be able to test how it works (“it works somehow, great!”), because all the human workers will have been fired — not all at once, of course, but gradually, over roughly 2–7 years depending on the industry. And because of this, AI systems will begin to get out of control, causing incorrect diagnoses, failures in the banking system, drones arresting or killing innocent people, and so on.

The reason I’ve explained this so poorly is, first, that the topic of AI and the debate around it is terribly infuriating to me, and it’s obvious to me that the harm from AI will outweigh the little benefit it brings. Second, I’m using a translator, so my text may seem crooked, unnatural, or silly.

  • robinadams@lemmy.wtf
    21 hours ago

    LLMs as we know them now are absolutely going to disappear. There won’t be nice free text interfaces to models with trillions of parameters. That will seem obscenely extravagant.

    An LLM is fantastically expensive to run: OpenAI makes a loss of over 10 billion dollars per year, and loses money even on its highest-paying customers.

    There’s a reason why Anthropic and OpenAI were competing to sell their war-crimes-as-a-service to the Department of War. Private investment is starting to dry up and the Pentagon is one of the few places with the sort of money an LLM needs to keep alive.

    Using generative AI is an extremely inefficient way to do most of the things the world uses it for at the moment. It only appears efficient because the AI companies are giving it away for free and eating the loss, burning through capital as fast as they can.

    What will be left? Small local models that will go out of date quickly since they won’t be continuously retrained. Still ok for writing boilerplate code and such, but not the thing that seems like it can replace every job on the planet. Generative AI will be seen as a much more limited tool.

    Data centers that could be repurposed for lots of SaaS tasks that don’t involve running supercomputer GPUs at full pace 24 hours a day until they fail and need to be replaced, like vacuum tubes in a 1950s computer.

    A countermovement called something like “slimline productivity” where doing stuff without AI becomes trendy. No need to spend so much on AI, just get your employees to do it by hand! Cheaper, more reliable, more human! After a few years everything is advertising itself as “slimline”.

    An enormous amount of work for software engineers who know how to work without AI, debugging the terrible AI-generated code that we’ve flooded the world with… if anyone can pay for it.

    • Upgrayedd1776@sh.itjust.works
      10 hours ago

      Yeah, I’m building an app now. I could have saved myself some time and just put an LLM handler wherever the code got tricky, but I know the hammer is coming. So I keep the LLM in the background and spend extra time making sure its output continuously improves the underlying code strategy, making itself obsolete wherever I can, plus massive amounts of documentation — so that when it does get scarce, my atrophied vibe-coder brain can try to navigate it. Patrick Boyle has also been making the same point about AI runway: https://www.youtube.com/watch?v=NbL7yZCF-6Q