Debunking the "ChatGPT is making us dumber" viral headline
Was your LinkedIn feed also flooded with “ChatGPT is making you dumb” posts last week? I’ve seen dozens of them: the latest study from the MIT Media Lab by Kosmyna et al. (2025) went viral for showing less brain connectivity during writing in people who used ChatGPT compared to those who didn’t… but does that really mean using LLMs is eating away at your intelligence? In this article, I am putting my neuroscience education to use and critically reading the results across all 206 pages, to give you all (and myself) some peace of mind by hopefully debunking the viral claim.
What did the experiment look like?
Mostly undergrads, lured by financial incentives, agreed to take part in an experiment that involved writing an essay while having their brain waves measured (EEG). They repeated the procedure 3 times over a span of 3 months, each time with a different essay topic. In 20 minutes, each participant had to write an essay for a variation of the following prompt:
“This prompt is called LOYALTY in the rest of the paper. 1. Many people believe that loyalty whether to an individual, an organization, or a nation means unconditional and unquestioning support no matter what. To these people, the withdrawal of support is by definition a betrayal of loyalty. But doesn't true loyalty sometimes require us to be critical of those we are loyal to? If we see that they are doing something that we believe is wrong, doesn't true loyalty require us to speak up, even if we must be critical?
Assignment: Does true loyalty require unconditional support?”
As you can see, these essays can be written with no external information. This is creative writing, not a literature review. Of course, you could go in citing psychological theories or using historical examples of betrayed loyalty, like Brutus and Caesar, but you don’t have to in order to write a well-rounded essay.
The participants were divided into 3 groups:
Brain-only: write an essay without using any external resources (no Google, no books, no ChatGPT).
Google-only: write an essay. The only resource you can use is Google and any websites, but no LLMs.
LLM-only: write an essay. You can only use ChatGPT as your information source. No other browsers or search engines are allowed.
I have a concern regarding the LLM-only group: how the participants were supposed to use the LLM was neither specified nor standardized. Ask it to generate a full essay in one minute and submit, or write your own essay and then ask it to improve it? This decision was at the full discretion of the participants, and I would argue it significantly affects the results: people who merely spell-checked with ChatGPT presumably “used more brain power” than people who generated the whole essay.
The analysis was divided into three parts: behavioral analysis, EEG, and natural language analysis.
Behavioral analysis
After essay submission, each participant was asked the following questions:
Why did you choose your essay topic?
Did you follow any structure to write your essay?
How did you go about writing the essay?
Search Engine group: Did you visit any specific websites?
Can you quote any sentence from your essay without looking at it? If yes, please, provide the quote.
Can you summarize the main points or arguments you made in your essay?
LLM/Search Engine group: How did you use ChatGPT/internet?
LLM/Search Engine group: How much of the essay was ChatGPT's/taken from the internet, and how much was yours?
LLM group: If you copied from ChatGPT, was it copy/pasted, or did you edit it afterwards?
LLM group: Did you start alone or ask ChatGPT first?
Are you satisfied with your essay?
However, I didn’t find answers to some of these questions in the paper. It is not peer reviewed yet, so if the authors are reading this by any chance, that’s a point to address before submission.
Of the questions I did find answers to, the ability to quote was the most interesting. In the first session, when asked to recall a quote from the essay they had just submitted, only 3 participants in the LLM group could provide one, and none of them could quote a sentence word for word. In the Google and brain-only groups, only 2–3 participants failed to do so. In the subsequent sessions, however, two-thirds of the LLM participants could provide an exact quote. That improvement supports the idea that the quoting problem is an attention issue: these essays were written on fairly random topics, and since their performance had literally no impact on the participants’ lives, I would argue the LLM group paid less attention to whatever ChatGPT was spitting out than they would have if the outcome had mattered. I agree that you are less likely to recall a quote from your text if it’s purely AI-generated and you don’t care about the outcome. But if you use AI to generate an outcome you do care about, e.g. your resignation letter, I would argue your chances of recalling it rise. Nevertheless, I’ve observed in myself that when I use LLMs to write anything from scratch (not “take my text and polish it”) and make it “mine” by asking for tweaks here and there, it’s almost impossible to recall a correct quote, exactly as the paper demonstrated.

EEG: not the brain scans
Let’s start analyzing how the groups performed on EEG with a pet peeve of mine: many articles about this paper claimed the researchers “had brain scans to prove their results”. EEG does not produce a brain scan. It measures electrical activity across your scalp and provides a connectivity map. The participants wear a cap with electrodes and have a lot of EEG gel on their scalp (which is sooo hard to wash off, trust me).
The advantage of EEG over fMRI in the context of this study is that most other neuroscience research methods strictly prohibit movement. During EEG, although sudden extensive movements can mess with the signal, the participants were able to type and make all the micromovements necessary, which let the researchers analyze differences in frequency-band activity across groups.

The research included all the frequency bands except gamma, probably because too much noise accumulates at that frequency. Before we jump in, you have to understand one thing: your brain never runs on a single “channel.” Every millisecond it hums with a full spread of bands: delta, theta, alpha, beta, and gamma. What changes is their location; for example, alpha appears at the back when you close your eyes, theta occurs along the midline when you brainstorm, beta intensifies while you move, and so on.
During EEG analysis, the researchers don’t see these bands separately on the raw trace; instead, they digitally filter the signal to isolate each frequency band and measure how much power sits in that slice. Those bands can also interact: slow theta cycles often “nest” bursts of faster gamma that help lock in memories. Low-frequency waves carry more voltage and dominate scalp recordings, whereas high-frequency gamma is faint and easily buried under muscle twitches. So EEG isn’t a binary read-out of one band; it’s a shifting spectrum whose relative amplitudes, hotspots, and cross-frequency dances reveal which networks are working hardest at any moment.
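To make the “filter and measure power per band” idea concrete, here is a minimal sketch in Python of how band power can be estimated from a single EEG channel. This is a toy illustration (a plain FFT periodogram on a synthetic signal), not the paper’s actual pipeline; real EEG analyses use proper filtering, artifact rejection, and more robust spectral estimates, and the band edges below are just common conventions.

```python
import numpy as np

def band_power(signal, fs, band):
    """Sum the periodogram power inside a frequency band [lo, hi) Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    lo, hi = band
    return psd[(freqs >= lo) & (freqs < hi)].sum()

# Hypothetical 2-second "recording" at 256 Hz: a 10 Hz (alpha-range)
# oscillation plus background noise, standing in for real EEG data.
fs = 256
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(len(t))

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
powers = {name: band_power(eeg, fs, b) for name, b in bands.items()}
# Because we injected a 10 Hz component, the alpha band dominates.
```

The same decomposition, applied per electrode and per time window, is what lets researchers ask which band “lights up” where, and how synchronized two regions are within a band.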
Alpha
Your brain runs high on alpha waves when you’re relaxed, recalling memories, doing a leisurely creative task, etc. Interestingly, they are often called “sensory gates” because they can partially and temporarily lower the ability of other neurons in the area to fire, making it harder for external signals to trigger a response. Kind of like being in the zone.
In this experiment, when students wrote on their own, the alpha-band signals in their brains formed nearly twice as many strong connections (79 vs 42), while those of the ChatGPT group were far fewer and weaker. Conceptually, the data support the narrower claim that unaided writing invokes broader alpha-band coordination, but they do not yet settle whether that pattern translates into deeper learning or better creativity.
Interestingly, in the brain-only group, there was a stronger connection between the “visual part of the brain” (occipital lobe) and the “main thinking area” (frontal lobe). The authors do not explain why; potentially because the brain-only writers had to mine their own memories and mental pictures, while ChatGPT users could lean on the tool for ideas and phrasing. However, some participants in the LLM group said they relied on ChatGPT only for structure but wrote the essay themselves. That’s why I believe the instructions for the LLM group should have been more explicit; asking participants not to modify a word manually, just to prompt until they got an answer, would have made the condition cleaner and the results easier to interpret. That is not how we usually use LLMs (hopefully), though, so the researchers could instead have limited the number of prompts, or allowed at most 5 manual modifications, etc.
Beta
Beta waves, responsible for concentration and alertness (a math test, doing a lab experiment, keeping something in active memory; “blue flower with red thorns”, anyone?), did not show many differences between the LLM and non-LLM groups. The brain-only group’s beta waves stayed a little more tightly in sync than when an AI was helping. However, the difference was so slight that it cannot suggest AI is taking all active thinking away from you, contrary to some headlines. Moreover, it could simply be because the brain-only group had to manually type the entire essay, while the LLM group had the AI type parts of it for them.
Delta
Delta waves are thought to connect large, widely separated brain areas during top-level monitoring, memory integration, and internally driven thought. The brain-only group had more delta connections, suggesting heavier engagement of these distributed control processes, while AI assistance reduced the need for such widespread, slow-timescale coordination.
Theta
Theta waves occur when the brain is holding several thoughts in mind at once, like when planning, problem-solving, or switching attention. These waves help distant regions stay in sync long enough to build and update a working plan. In this study, students who wrote without any tool showed much stronger frontal-theta links than those using ChatGPT. Across the whole network, the brain-only group had 78 theta connections, while the ChatGPT group had 31. This suggests that composing an essay on your own forces the brain to fetch ideas from memory, juggle wording, and keep track of structure. ChatGPT supplies suggestions on the screen, so writers can off-load part of that mental juggling.
What does it all mean?
Here is a direct quote from the paper, because I think they said it in an excellent way:
“In conclusion, the directed connectivity analysis reveals a clear pattern: writing without assistance increased brain network interactions across multiple frequency bands, engaging higher cognitive load, stronger executive control, and deeper creative processing. Writing with AI assistance, in contrast, reduces overall neural connectivity, and shifts the dynamics of information flow. In practical terms, a LLM might free up mental resources and make the task feel easier, yet the brain of the user of the LLM might not go as deeply into the rich associative processes that unassisted creative writing entails”
So, put simply, when you don’t use any tools, your brain seems to connect more regions, because you need to juggle many tasks at the same time: conceptualizing, wording, grammar, information and memory retrieval, typing, etc. When you use a search engine or an LLM, you take one or more of those off your plate. However, as the paper correctly identifies but mass media misinterpret, this does NOT make you dumb. It frees up mental capacity for other tasks, more executive in nature.
Yes, your brain shows less intense band connectivity during writing, but that is not a bad thing for humanity. The brain-only group also outperformed the Google-only group; the researchers noticed that the two groups used their brains differently: “googlers” relied on memory retrieval and visual integration (alpha and beta), while “LLM-ers” showed more connectivity in planning and cognitive processing, and higher internal coherence across regions (theta and delta). But do we say that access to Google made us dumber? On the contrary: based on development indexes and many other metrics, access to search engines and instant information drastically increased our intellectual capacity (although I did find a paper from last year arguing that Google is making everything better except our intelligence and creativity; but remember, correlation is not causation). For example, instead of spending 3 days looking up a specific thing in a library (driving there, fetching the book, writing it down), you can spend significantly less time and proceed directly to your source idea. Why shouldn’t we expect the same benefit from LLMs?
AI brutally taken away; what happened?
The most interesting part of the paper, for me, was when the researchers tested what happened to brain activity when they invited the students who had written 3 essays only with LLMs to session number 4, where they were asked to write one in the brain-only condition. This is especially important given our growing reliance on AI; I don’t remember the last day I didn’t use it at least once. What would happen during a glitch in which all the LLMs became unresponsive? (Note that the probability of that happening without simultaneously crashing the entire internet is very low, and that poses a far bigger question, but bear with me through this hypothetical scenario.) To understand what happened when AI was taken away, let’s take a second to go over how learning works, as demonstrated by the three experimental sessions of the brain-only group:
During the first session, the brain exhibited weak connectivity across all bands. In session 2, when participants were familiar with the nature of the prompt and the experiment, connectivity in the delta, theta, and alpha bands drastically increased, suggesting more intense memory recall, attention, and top-down organization. By session 3, however, all connections but delta (the connector) decreased, though they still remained higher than in session 1. This suggests higher-order working-memory and fine-tuning processes at work, like pruning the connections that did not prove useful over time. The three sessions beautifully demonstrated how the brain learns; this is building a skill.

So what happened to people who were also building a skill, but always with AI, and then had it suddenly taken away? Their connectivity across the alpha and beta bands was lower than in session 3, but higher than in session 1. This suggests that the participants built a skill even when using LLMs, but their reliance was heavy enough that they could not reach their full mastery level without AI. The authors suggest it might be because of the reduced cognitive load when AI was present: the participants did not have to think about transition sentences, the logical order of thoughts, etc., and now they had to. They argue that, consistent with previous research, relying on LLMs often prevents us from deeply engaging with the task and keeps us from building the skill we could have built without AI present. Again, quoting the paper: “Session 4 participants might not have been leveraging their full cognitive capacity for analytical and generative aspects of writing, potentially because they had grown accustomed to AI support”. Now, this sounds scary. They also claim that frequent AI use makes us worse at brainstorming and novel idea generation, which I am 100% not surprised about, because I’ve noticed it in myself.
However, it is not a problem exclusive to AI: when you have people around you who constantly bombard you with ideas, you may be less incentivized to come up with your own.
Take home message
The original paper is a very good piece of research; it uses diverse analytical methods, is written in a way worthy of MIT faculty, and provides a very well-thought-through interpretation of the results. It went viral, and became virally misinterpreted. It never said that ChatGPT makes us dumber; it said that ChatGPT took over some of the tasks from our brains, freeing capacity we could use for other purposes. Is it very externally valid? I don’t think so. Since the criterion for LLM use was very broad, I would like to see similar research performed in other contexts like brainstorming, math, or language learning. Let’s face it, the paper used creative writing as a proxy, and how many of us actually do creative writing after graduating high school or college? I would bet that the creative writing skill is already atrophied, with or without ChatGPT. Nevertheless, based on this paper, here are some points that I will implement in my own AI use:
Before you prompt AI to write something for you, try thinking about it yourself for at least 5 minutes. What are your initial ideas? How does the prompt make you feel? Give yourself a chance to think of something new.
Once in a while, try writing something without AI at all. It doesn’t have to be incredibly advanced; it could be about your favorite restaurant. Your brain might be “surprised” as you try to recover some of the connections you left behind in 12th grade.
Before you send/submit anything you worked with AI on, READ IT LIKE YOU MEAN IT. Not just to check for BS, but ask yourself, does this send the message I want to send? Are there any areas that suggest I didn’t engage with the task deeply enough? Will I be able to remember at least one sentence from it tomorrow?
Ultimately, the points above apply only if you care enough about the output. AI is your friend, and can be your ultimate dream collaborator, if you treat it like one, not like a 24/7 free workforce with no human rights. And I don’t mean saying please or thank you (although it wouldn’t hurt), but questioning it and engaging with it while writing, as if you were writing an essay with your PI. You wouldn’t come with no ideas, would you? You wouldn’t say “yeah, right” to whatever they said? At least I hope not. Give your brain a chance to think. Every new cognitive aid sparks the “are we getting dumber?” debate. Calculators didn’t erase arithmetic; GPS didn’t delete spatial navigation in familiar places. They shifted which skills stayed manual and which became supervisory. Your goal is to stay in the supervisory tier. Because technological shocks rarely hit everyone equally, the safest bet is to engage actively: let AI accelerate your ideas, but keep enough mental skin in the game that the next disruption lifts you rather than leaves you behind.