• ✺roguetrick✺@lemmy.world

Preprint journalism fucking bugs me because the journalists themselves can't actually judge whether anything is worth discussing, so they just look for clickbait shit.

This methodology for discovering what interventions do in human environments seems particularly deranged to me, though:

> We address this question using a novel method – generative social simulation – that embeds Large Language Models within Agent-Based Models to create socially rich synthetic platforms.

LLM agents trained on social media dysfunction recreate it unfailingly. No shit. I understand they gave them personas to adopt as prompts, but prompts cannot and do not override training data, as we've seen time and again. LLMs fundamentally cannot maintain an identity from a prompt. They are context engines.
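To be concrete about what "embedding LLMs within Agent-Based Models" amounts to: the persona is just one more string in the context window, sitting on top of the same weights that were pretrained on actual social media. A minimal sketch of that setup (my reconstruction, not the paper's code; `llm_generate` is a placeholder for whatever chat-completion API you'd wire up):

```python
import random

PERSONAS = [
    "You are a combative partisan who dunks on outgroup posts.",
    "You are an earnest policy wonk who posts long threads.",
]

def llm_generate(messages):
    # Placeholder for a real chat-completion call (OpenAI, llama.cpp, etc.).
    # Note what it receives: the persona is just another string in the list;
    # the weights trained on real social media text are untouched.
    return "stub reply to: " + messages[-1]["content"][:40]

class Agent:
    def __init__(self, persona):
        self.persona = persona   # the entire "identity" lives here, as text
        self.history = []        # grows with every turn

    def act(self, feed):
        messages = (
            [{"role": "system", "content": self.persona}]
            + self.history
            + [{"role": "user", "content": "Your feed:\n" + "\n".join(feed)}]
        )
        post = llm_generate(messages)
        self.history.append({"role": "assistant", "content": post})
        return post

# One simulated platform tick: a random agent reads recent posts and replies.
agents = [Agent(p) for p in PERSONAS]
feed = ["seed post about a contested election"]
for _ in range(10):
    feed.append(random.choice(agents).act(feed[-5:]))
```

Nothing in that loop gives the persona any authority over the pretraining distribution; it's a suggestion the model is free to drift away from.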

Particularly concerning are the silo claims. LLMs riffing on a theme over extended interactions because the tokens keep coming up that way is expected behavior. LLMs are fundamentally incurious and even more prone than humans to locking into one line of text, since the lengthening conversation reinforces it.
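You can see why with back-of-the-envelope arithmetic: as a run goes on, the agent's own prior output swamps the persona in its context window. Toy numbers below (sizes assumed by me, and ignoring the shared feed for simplicity):

```python
# Rough context accounting: ~50-token persona, ~80-token posts (assumed).
PERSONA_TOKENS, POST_TOKENS = 50, 80

for turn in (1, 5, 10, 20):
    own = turn * POST_TOKENS                   # the agent's accumulated history
    total = PERSONA_TOKENS + own
    print(f"turn {turn:2d}: {own / total:.0%} of the context is its own riffing")
```

By turn 20 the persona is about 3% of what the model is conditioning on. Of course it locks into a line.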

Determining whether what the authors describe as a novel approach actually works might be more warranted than drawing conclusions from it.
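Even a crude check would beat nothing here. For example (my sketch, nothing from the paper): measure whether lexical overlap between an agent's successive posts climbs over a run. If it does, the "echo chamber" result is context lock-in wearing a costume:

```python
def jaccard(a: str, b: str) -> float:
    # Crude lexical overlap between two posts; 1.0 means identical vocabulary.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# Made-up example run; feed real simulation transcripts in here instead.
posts = [
    "the feed is rigged against us",
    "the rigged feed proves the point",
    "more proof the feed is rigged",
]
for earlier, later in zip(posts, posts[1:]):
    print(round(jaccard(earlier, later), 2))
```

Flat overlap across long runs would actually support the paper's reading; rising overlap would say the "silos" are an artifact of the method.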