• FauxLiving@lemmy.world
    1 day ago

    It is definitely overhyped in the fields of language models and image/video generation. The idea that language models are going to replace people is pure hype. Those tools have some uses, but they’re not remotely close to the things being promised by the AI companies.

    Hardly anyone pays attention to the massive improvements being made in robotics or in areas like protein folding.

    Sure, they’re expensive, but not prohibitively so, and they’ll only get cheaper and better as investments are made, like the ones South Korea is making.

    Compare the early Boston Dynamics videos of their BigDog robot, which used human-programmed feedback control systems, with this robot trained using reinforcement learning: https://www.youtube.com/watch?v=I44_zbEwz_w

    Programming a feedback control system is expensive and requires experts in multiple fields. Training models is a relatively simple process, so the cost for robotics startups will be much lower. Motors, accelerometers, image sensors, and a strong graphics card are all you need. This process will be sped up further by foundational World Models, which allow a control system to be trained without any physical components, since the training happens entirely in simulation.
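    To give a feel for why “train in simulation” is so much cheaper than hand-tuning a controller, here’s a toy sketch. This is not real robotics code: the “robot” is a crude simulated pendulum, and simple hill-climbing stands in for a real reinforcement learning algorithm. The point is that the trainer never needs hardware or control-theory expertise, just a simulator and a reward signal.

    ```python
    import math
    import random

    def simulate(policy, steps=500):
        """Roll out a linear policy on a crude inverted-pendulum simulation.
        Reward = number of time steps the pole stays roughly upright."""
        theta, omega = 0.1, 0.0  # small initial tilt (rad), angular velocity
        reward = 0.0
        for _ in range(steps):
            # Policy is just two gains applied to the observed state.
            torque = policy[0] * theta + policy[1] * omega
            omega += (math.sin(theta) - torque) * 0.02  # toy dynamics, dt=0.02
            theta += omega * 0.02
            if abs(theta) > math.pi / 2:
                break  # pole fell over
            reward += 1.0
        return reward

    def train(iterations=300, seed=0):
        """Hill-climbing stand-in for RL: perturb the policy, keep improvements."""
        rng = random.Random(seed)
        policy = [0.0, 0.0]
        best = simulate(policy)
        for _ in range(iterations):
            candidate = [w + rng.gauss(0, 0.5) for w in policy]
            score = simulate(candidate)
            if score >= best:
                policy, best = candidate, score
        return policy, best

    if __name__ == "__main__":
        policy, best = train()
        print("untrained reward:", simulate([0.0, 0.0]))
        print("trained reward:  ", best)
    ```

    With zero gains the simulated pole falls over quickly; after a few hundred cheap simulated rollouts the search finds stabilizing gains on its own. That’s the economic shift: a hand-built controller needs a control engineer to derive those gains, while the training loop just needs compute.
    
    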

    LLMs are way overhyped, certainly, but they’re only a tiny portion of what neural networks are being used for.