Source.

Transcript

Screenshot of a Mastodon post by Kevin Beaumont: “Generative AI government lobbying.”

Photo of AI/tech company CEOs, captioned:
We spent a trillion on NVDA GPUs and we don’t have any AI product you want.

Photo of a crying man, captioned:
Please like our AI bro. This is the last time bro. So many possibilities bro. It’s the future bro. Just need you to like it bro. We worked real hard bro. Our stockholders need this one bro.

  • SomeAmateur@sh.itjust.works · 17 hours ago

    AI is a disruptive technology… just not in a way most people can use positively and productively. As entertainment it’s a cool toy.

    But its best use right now is manipulating public opinion and creating convincing but non-legit material. As a propaganda tool, it is a dream come true.

    • SugarCatDestroyer@lemmy.world · 3 hours ago

      Well, who was funding this shit? The big corporations, right? AI is a tool of control and power, and that’s essentially why they were pouring crazy amounts of money into trying to find a way to control everyone.

    • breecher@sh.itjust.works · 4 hours ago

      But its best use right now is manipulating public opinion and creating convincing but non-legit material. As a propaganda tool, it is a dream come true.

      And that is why it is here to stay. Lots of these AI startups may crash and burn when/if the bubble bursts, especially “this is a cool toy” companies. But companies making disinformation and surveillance their main AI focus will become huge.

    • It’s like a monkey’s paw bit.

      WISH: I want a piece of software that can look at ANYTHING and then describe it in real-time detail for blind or hard of seeing people.

      REALITY: Blind people can navigate streets, but it’s also now possible to conduct fraud on a cosmic level.

    • ch00f@lemmy.world · 15 hours ago

      I think it’s cute when people say that some uses are good (a cool toy) while ignoring the fact that the current business model depends on it becoming much more than that.

      The “AI is here to stay” crowd will evaporate as soon as they have to start paying.

      • WoodScientist@lemmy.world · 7 hours ago

        The “AI is here to stay” crowd will evaporate as soon as they have to start paying.

        And it goes further than that. Imagine if the hyperscalers actually succeed at their goals. They create truly useful agent models. But, crucially for their profits, these models can only be built at scale. Their business model fails if someone learns to duplicate their work on sane levels of compute. So let’s say everything goes OpenAI’s way: they invent truly useful models, and the tech only works at the massive scale they’ve invested in, so they don’t have to worry about being cut off at the knees.

        Now imagine you’re a knowledge worker: a programmer, an engineer, an analyst, an editor, really any job done sitting at a keyboard, manipulating data in one form or another. You adopt these new models and become dependent on them. Less and less of your technical skill actually resides in your own mind, and the difference between your knowledge base and that of any rando off the street shrinks. At some point, you’re completely dependent on the models to do the most basic functions of your job. Why shouldn’t OpenAI charge you or your employer a license fee equal to half your salary? Why shouldn’t your boss respond by cutting your salary in half and paying for the LLM license that way?

        If the great fever dreams of the hyperscalers come true, basically every white-collar employee in the country sees their labor bargaining power collapse. It would be like the decline of weaving as a profession in England at the start of the Industrial Revolution, which spawned the historical Luddite movement. What was once a skilled profession requiring years of formal apprenticeship became mechanized, low-skilled labor that could be (and often was) done by literal children. In OpenAI’s ideal future, that is what will happen to anyone who currently makes a living working at a keyboard.

        It would devalue labor by making it less specialized. The only real skill left would be carefully stating prompts to the god machine, and if you’re good at stating prompts toward one end, you’re likely good at stating prompts across many domains. If we really had the kind of LLMs these companies dream of creating, prompt engineering would become a mandatory class in high school. You wouldn’t be able to graduate without learning how to use them. It would be as critical a skill as writing, but it would be a skill everyone possessed, and thus one you could hardly command a decent living from.

        • hitmyspot@aussie.zone · 7 hours ago

          And just as the end of the hand loom was ultimately a good thing, so would AI that powerful be. It would be hugely disruptive, of course. Self-driving, AI (LLMs), AGI, and robotics all have the potential to put huge numbers of people out of work relatively quickly. If we allow the companies to have the power, they will take the benefits for themselves. If we take it for society, we all win. This is why it’s so important that we have competition, and also why it’s important that we seriously talk about things like minimum wages, living wages, and UBI. When the jobs are already gone, it’s already too late.

          • TipsyMcGee@lemmy.dbzer0.com · 2 hours ago

            If you dedicate society to inventing a technology that makes people superfluous, it makes zero sense to keep them alive. It’s not a viable political project.

          • thedruid@lemmy.world · 3 hours ago

            Tech that harms us is never good.

            AI is harmful to everything humans touch.

            As is usual with anything humans touch.

          • SugarCatDestroyer@lemmy.world · 3 hours ago

            Yes, only this income will be digital and every purchase you make will be tracked. As a result, at any moment they can simply block your account for disobedience or suspicious activity, and eventually you will die of hunger unless you rob someone.

        • ch00f@lemmy.world · 12 hours ago

          Ads can barely cover video streaming services. Running an LLM costs orders of magnitude more. Even the paid tiers are starting to throttle usage.

            • ch00f@lemmy.world · 10 hours ago

              They can try, but it won’t be enough. The AI bubble is currently predicated on removing $50k-a-year employees. Nothing short of that is profitable. Everything we have now is being funded by that bet.

              • thedruid@lemmy.world · 3 hours ago

                You’re mistaken. That salary target is much higher.

                They want to replace anyone not in the C-suite.

        • valkyre09@lemmy.world · 14 hours ago

          Sure thing, I can help you put together a 5-point plan on how to take over the world, but first here’s a word from our sponsor, NordVPN.

          • JackbyDev@programming.dev · 10 hours ago

            More like:

            Let me help you make a five-point plan with Trello. Open an account and make a new task:

            1. Use NordVPN to protect your browsing online

    • Thekingoflorda@lemmy.world · 16 hours ago

      State actors used to need a bunch of people in an office to spread propaganda; now they can be so much more “effective” by running one server farm.

      • Lodespawn@aussie.zone · 8 hours ago

        Yeah, setting up a secure, reliable and maintained server farm with working software and a functional upgrade plan for both software and hardware is going to cost at least as much as an office full of skilled people. Especially given you still need skilled people to provide input and interpret output, but now you also have no work for newbies to train their skills.

        • Thekingoflorda@lemmy.world · 4 hours ago

          Train their skills in what? Spreading misinformation?

          I think one server farm with an LLM can output hundreds of times more crap than a full office can. Misinformation doesn’t have to be high quality as long as it is repeated so often that dumb people start taking it as fact.

    • bulwark@lemmy.world · 16 hours ago

      Yeah, I think the main thing it’s accomplished so far is making everyone doubt the authenticity of literally all digital media.

      • TipsyMcGee@lemmy.dbzer0.com · 2 hours ago

        Yeah, it really poisoned the well for any human interaction on the internet and any form of creativity. I wonder if having everyone question what is real is a side effect or the very point.

        And now, even if you write something people think is funny, good, poignant, whatever, they’re wondering whether there’s anything uniquely human about it that an AI couldn’t have generated. Human expression is basically dead already.

    • some_designer_dude@lemmy.world · 16 hours ago

      In this sense, it’s actually just highlighting how easily manipulated everything has always been. The printing press made it easy to duplicate the shit out of whatever words you wanted, and AI just made it way more attainable than ever to create even higher-fidelity falsehoods. The internet itself should take the “blame” for the proliferation of propaganda.

      But the root problem is the ruling class making it as hard as possible to get a good education unless you’re already one of them. An educated populace would be far more resistant to all this.

    • Delphia@lemmy.world · 12 hours ago

      It has so many potential good uses for society as a whole, but they led with the predatory shit.