Note: this lemmy post was originally titled “MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline” and linked to this article, which I cross-posted from this post in !fuck_ai@lemmy.world.
Someone pointed out that the “Science, Public Health Policy and the Law” website, which published this click-bait summary of the MIT study, is not a reputable publication deserving of traffic. So, 16 hours after posting it, I am editing this post (as well as the two other cross-posts I made of it) to link to MIT’s page about the study instead.
The actual paper is here and was previously posted on !fuck_ai@lemmy.world and other lemmy communities here.
Note that the study under its original title got far fewer upvotes than the click-bait summary did 🤡
I don’t dispute the findings, but I would like to mention: without AI, I wasn’t going to be writing anything at all. I’d have let it go and dealt with the consequences. This way at least I’m doing something rather than nothing.
I’m not advocating for academic dishonesty, of course; I’m only saying it doesn’t look like they bothered to look at the issue from this angle:
“What if the subject was planning on doing nothing at all, and the AI enabled them to expend the bare minimum of effort they otherwise would have avoided?”
I would argue that if you used AI, you still haven’t done any writing.
I don’t think you can definitively say that you wouldn’t have done it anyway. That’s speculation based on a hypothetical situation.
It’s possible you might have been moved to write if AI had never existed; maybe not. But whatever you do write without AI is actually something you made, good or bad. LLM output isn’t.
You haven’t done anything, though. If you’re getting to the point where you are doing actual work instead of letting the AI do it for you, then congratulations, you’ve learned some writing skills. It would probably be more effective to use some non-AI methods to learn, though.
If you’re doing this solely to produce output, then sure, go ahead. But if you want good output, or output that actually reflects your perspective, or the skills to do it yourself, you’ve gotta do it the hard way.
Could you expand with an example? What you said is too vague to really extract a point from. I’d argue that if it gives you wrong information, doing something wrong is worse than doing nothing.
That’s a general statement, right? Try to forget about the context, then, and read it again 😅
I actually think the moments when AI goes wrong are the moments that stimulate you and make you better realize what you’re doing and what you want to achieve. And when you write subsequent prompts to fix the issue, you are essentially problem-solving: figuring out what to ask to make it do the exact thing you want. And it’s never going to be right every time, simply because most cases of it being wrong come down to you not providing enough detail about what you actually want. So step-by-step AI usage with clarifications and fixes is always going to be a brain-stimulating, problem-solving process.
Well, that’s why I was asking for an example of sorts. The problem is that if you’re just starting out, you don’t know what you don’t know, and more importantly, you won’t be able to tell if something is wrong. It doesn’t help that LLMs are notoriously good at being confidently incorrect and prone to hallucinations.
When I tried it for programming, more often than not it hallucinated functions and APIs that did not exist. And I know they don’t exist because I’ve been working at this for more than half of my life, so I have the intuition to detect bullshit when it appears. Learners, however, are unlikely to be able to tell the difference.
When you run it, test it, and it doesn’t work as expected (or doesn’t work at all), that most likely means something is wrong. Not all fields of work require programs to be 100% correct on the first try; pretty often you can run and test your code as many times as you like before shipping/deploying.
So vibe coding?
I’ve tried using LLMs for a couple of tasks before I gave up on the jargon outputs and nonsense loops they kept feeding me.
I’m no coder/programmer, but for the simple tasks/things I needed, I took inspo from others, understood how the scripts worked, and added comments to my own scripts showing my understanding and explaining what they’re doing.
I’ve honestly written so much, just throwing spaghetti at the wall and seeing what sticks (works). I have fleshed out a method for using base16 colour schemes to modify other GTK* themes so everything in my OS matches. I have declarative containers, IP addresses, secrets, and so much more. Thanks to the folks who created nix-colors; I should really contribute to that repo.
I still feel like a noob when it comes to Linux; however, the progress I’ve made in ~1 year is massive.
I managed to get a working Google Coral (on NixOS) after everyone else’s scripts (that I could find on GitHub) had quit working. I’ve since ditched that module, as the upkeep required isn’t worth a few ms in detection speed.
I don’t believe any of my configs would be where they are if I’d asked an LLM to slap them together for me. I’d have none of the understanding of how things work.
I’m happy for your successes and your enthusiasm! I’m in a different position: I’m kind of very lazy and have little enthusiasm for coding/devops stuff specifically, but I enjoy backseating Copilot. I also think you definitely learn more by doing everything yourself, but it’s not really true that you learn nothing by only backseating an LLM, because it doesn’t just produce a working solution from a single prompt; you have to reprompt and refine things again and again until you get what you want and it’s working as expected. I feel a bit overpowered this way, because it lets me get things done extraordinarily fast.

For example, at 00:00 I was only choosing a VPS to buy, and by 04:00 I already had a wireguard server with port forwarding up and running, and all my clientside stuff configured and updated accordingly. And I had some exotic issues during setup which I also troubleshot using the LLM, like my clientside wg.conf file getting the wrong SELinux context (unconfined_u:object_r:user_home_t:s0) and wg-quick refusing to work because of it.
I never knew such a thing even existed, and the LLM just casually explained it and provided a fix:
    # label /etc/wireguard (and everything under it) with the etc_t context
    sudo semanage fcontext -a -t etc_t "/etc/wireguard(/.*)?"
    # re-apply the labeling to the files that already exist
    sudo restorecon -Rv /etc/wireguard
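(If you’re curious whether the relabeling took, ls -Z shows the SELinux context on each file:)

    ls -Z /etc/wireguard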
LLMs are good as a guide to point you in the right direction. They’re about the same kind of tool as a search engine. They can help point you in the right direction and are more flexible in answering questions.
Much like search engines, you need to be aware of the risks and limitations of the tools. Google will give you links that are crawling with browser-exploiting malware, and LLMs will give you answers that are wrong or directions that are destructive to follow (like incorrect terminal commands).
We’re still a way off from models that can tackle large projects like coding complete applications, but they’re good at some tasks.
I think the issue is when people try to use them to replace having to learn, instead of as a tool to help them learn.
I believe they (Copilot and similar) are good for coding large projects if you use them in small steps and micromanage everything. I think in this mode of use they save a huge amount of time, and more importantly, they keep you from wasting your energy on the grindy/stupid/repetitive parts and let you save it for the actually interesting/challenging parts.
I’m in the same boat with many of the things I’m using AI for. I would never write natpmpc port-forwarding daemons, I would never create my own DIY VPN, etc., if I had to do it all by myself. Not because I can’t, but because I don’t enjoy spending my time diving into tons of manuals for various utilities, protocols, OS-level stuff, networking, etc. I would simply give up and use some premade solutions. But with AI, I was able to get it all done while also quickly picking up some surface-level knowledge about all of this stuff myself.
A diy VPN is exactly the disaster scenario of vibe coding.
I agree; I should have clarified that I actually meant setting up a wireguard server on a VPS, not developing an alternative to wireguard or openvpn.
a custom VPN without security-minded planning and knowledge? that sounds like a disaster.
surely you could do other things that have more impact for yourself, still with computers. use wireguard and spend the time setting up your services and network security.
and port forwarding… I don’t know where you are running that, but Linux iptables can do that too, in the kernel, with better performance.
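for reference, basic kernel-level forwarding with iptables looks roughly like this (the addresses and ports here are made-up examples):

    # let the kernel forward packets at all
    sudo sysctl -w net.ipv4.ip_forward=1
    # rewrite incoming TCP 8080 to an internal host
    sudo iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 192.168.1.10:80
    # masquerade so replies route back out through this box
    sudo iptables -t nat -A POSTROUTING -p tcp -d 192.168.1.10 --dport 80 -j MASQUERADE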
Oops, I meant self-hosting a wireguard server, not actually building an alternative to wireguard or openvpn themselves…
With my previous paid VPN I had to use natpmpc to ask their server to forward/bind ports for me, and I also had to do that every 45 seconds. It’s nice to have a bash script running as a systemd daemon that does that in a loop, and also parses the output and saves the remote ports the server gave us this time to a file, in case we need them (like for setting up a tor relay). I also got another script and daemon for the tor relay that monitors forwarded-port changes (from that file), updates torrc, and restarts the tor container. All this by Copilot, without knowing bash at all, and without having to write complex regexes to parse that output or to overwrite the tor config. It’s not a single prompt; it requires some troubleshooting and clarifications, and ultimately I got to know some of the low-level details myself. Which is also great.
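The core of the port-forwarding loop ended up looking roughly like this (a simplified sketch, not the exact script: the gateway is ProtonVPN’s in-tunnel address, the port file path is made up, and the parsing assumes natpmpc’s “Mapped public port” output line):

    #!/usr/bin/env bash
    # ask the VPN gateway for fresh port mappings every 45 seconds
    GATEWAY=10.2.0.1            # ProtonVPN's in-tunnel gateway
    PORTFILE=/tmp/mapped_port   # where the tor-relay script reads the current port

    while true; do
        # request UDP and TCP mappings; each one expires after 60 seconds
        out=$(natpmpc -g "$GATEWAY" -a 1 0 udp 60) || { echo "natpmpc failed" >&2; exit 1; }
        natpmpc -g "$GATEWAY" -a 1 0 tcp 60 > /dev/null || exit 1
        # pull the mapped port out of the output and save it when it changes
        port=$(awk '/Mapped public port/ {print $4; exit}' <<< "$out")
        [ -n "$port" ] && [ "$port" != "$(cat "$PORTFILE" 2>/dev/null)" ] && echo "$port" > "$PORTFILE"
        sleep 45
    done

The tor-relay daemon is then just another small loop that watches that file, rewrites the port in torrc, and restarts the container when it changes.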
oh, that’s fine then, recommended even.
oh, so this is management automation that requests an outside system to open ports, and updates services to use the ports you got. that’s interesting! what VPN service was that?
be sure to run shellcheck on your scripts though, it can point out issues. aim for it to have no output; that means all seems ok.
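usage is as simple as (the script name here is just an example):

    shellcheck portforward.sh    # no output means it found nothing to flag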
Proton VPN
It does some logging though, and I read what it logs via systemctl --user status. Anyway, those scripts/services so far are of a simple kind: if they don’t work, I notice immediately, because my torrents stop seeding or my tor/i2p proxy ports stop working in the browser. When an error can only be discovered conditionally somewhere during a long runtime, it needs more complicated and careful testing.

Sad that people knee-jerk downvote you, but I agree. I think there is definitely a productive use case for AI if it helps you get started learning new things.
It helped me a ton this summer learning gardening basics and picking out local plants which are now feeding local pollinators. That is something I never had the motivation to tackle from scratch, even though I knew I should.
Given the track record of some models, I’d question the accuracy of the information it gave you. I would have recommended consulting traditional sources.
jfc you people are so eager to shit on anything even remotely positive of AI.
Firstly, the entire point of this comment chain is that if “consulting traditional sources” were the only option, I wouldn’t have done anything. My back yard would still be a barren mulch pit. AI lowered the effort barrier to entry, which really helps me as someone with ADHD and a severe motivation deficit.
Secondly, what makes you think I didn’t? Just because I didn’t explicitly say so? Yes, I know not to take an LLM’s word as gospel. I verified everything and bought the plants from a local nursery that only sells native plants. There was one suggestion out of 8 or so that was not native (which I caught before even going shopping). Even with that overhead of verifying information, it still eliminated a lot of busywork searching and collating.
Saw you downvoted and wanted to say that I am glad you went on to learn some things you had been meaning to; that alone makes the experiment worthwhile, as discipline is a rare enough beast. To be clear, I myself have a Claude subscription that is about to lapse, and I find the article unfortunately spot-on. I feel fortunate to have moved away from LLMs naturally.