
Whether technology makes people dumber is a question as old as technology itself. Nearly 2,400 years ago, Socrates faulted the invention of writing for weakening human memory.
I have been struggling to articulate my concerns about the increasing use of technology in general, and A.I. in particular, not least its cognitive impact on young minds. Clarification came in the form of an article in The Guardian last month. Two years ago, Nataliya Kosmyna, a research scientist at MIT, received e-mails from students reporting that their memories seemed to have declined since they started using language models such as ChatGPT. In response, she divided 52 students into three groups, asked them to write essays, and used electroencephalograms to monitor their brain activity as they did so. The first group had no digital assistance; the second had access to an internet search engine; and the third had help from ChatGPT (what Clare Densely of Buckfast Abbey calls ‘Chatty Pete’).
Immediately after they had completed their essays, the students were asked to recall what they had just written. The vast majority of ChatGPT users (83 percent) could not recall a single sentence. In contrast, the students using Google’s search engine could quote some parts, and many of the students who relied on no tech could quote almost the entirety of their essays verbatim.
In June of this year, even before the experiment had been peer reviewed, Nataliya posted it online, thinking other researchers might find it interesting. The response was immediate – 4,000 e-mails from across the world, primarily from teachers who worry that A.I. is creating a generation that can produce passable work without either usable knowledge or comprehension of the content. The study found that the more external help the participants had, the lower their level of brain activity, particularly in the neural networks associated with cognitive processing, attention and creativity.
I wonder what the difference is between using A.I. in such an instance and getting one’s elder brother to write the essay for you. In the old days we called it cheating. I wonder too about the implications for people using A.I. chatbots in fields where retention is essential, like a pilot studying for a license. The appropriate application of A.I. may be more selective than we currently realize. Clearly, research is needed into how we can use A.I. and still retain information.
Like monitoring a hive of honey bees, writing an essay requires the ability to analyze and synthesize information and to consider several alternatives before committing to a course of action. Is it possible that future beekeepers will not be able to complete a hive inspection without first entering data into their phone and then following a course of action prescribed by A.I.?
A report in The New York Times on Nov. 10, titled How A.I. and Social Media Contribute to Brain Rot, describes how, last spring, Shiri Melumad, a professor at the Wharton School of the University of Pennsylvania, gave a group of 250 people a simple writing assignment – share advice with a friend on how to lead a healthier lifestyle. Some were allowed to use a traditional Google search, while others could rely only on summaries of information generated automatically with A.I.
The advice from the A.I. summaries was generic, obvious and largely unhelpful: eat healthy foods, stay hydrated and get lots of sleep. Those who found information with a traditional Google web search shared more nuanced advice about focusing on the various pillars of wellness, including physical, mental and emotional health. Yet the tech industry continues to tell us that chatbots and new A.I. search tools will supercharge the way we learn, and that anyone who ignores the technology risks being left behind. By contrast, Dr. Melumad’s experiment found that people who rely heavily on chatbots and A.I. search tools for tasks like writing essays and research generally perform worse than people who don’t use them. It is hardly surprising that Oxford University Press named ‘brain rot’ its word of the year for 2024.
Yuval Harari, the Israeli medievalist, military historian, public intellectual, popular science writer and history professor at the Hebrew University of Jerusalem (his book Sapiens: A Brief History of Humankind is fascinating reading), argues that at no time in the past has an inferior species taken control of a superior one. Based on the theory that we learn more from what we witness than from what we are told, A.I. will observe closely how, despite what we say, much of human interaction is based on power, lies, greed and manipulation. He predicts that in some twenty years’ time these will be the norms driving A.I., which in turn will bitterly divide the global population, with no prospect of recovery – large language models will have an influence and power beyond the ability of any human to change.
In his interview with Kirk Webster, printed in the Dec. 2025 issue of Bee Culture, Ross Conrad writes: “We are creative beings and we need to experience the satisfaction that comes with living our lives through our own creativity and decision making process. If A.I. continues to evolve to the point where some say it is headed, it will be deciding where we live, what we do for work and how we spend most of the day. If this happens, the level of mental illness experienced by society will dwarf the current levels of depression, suicide, and mood, thinking and behavioral disorders associated with smart phones and social media.”
The issue is that we are primed by evolution to use shortcuts that make our lives easier. It started with hunter-gatherers discovering the use of stones as tools, followed by bronze and iron, bows and arrows, gunpowder, chemicals, aircraft and eventually nuclear power. Each step made it easier to kill – first more animals, for food and clothing, then more people, for power. But our brains need what Nataliya calls ‘friction’ to learn; they need a challenge. It calls to mind the man who, witnessing a butterfly struggling to emerge from her cocoon, cut away the outer skin to make it easier for her … and the butterfly collapsed to the ground, unable to fly. He did not realize that struggle, or friction, is an essential part of the birth process, in this case allowing the wings to harden. And when we do solve a meaningful challenge, the pleasure hormones released by the brain – dopamine, serotonin, endorphins and oxytocin – provide the stimulus necessary to tackle the next major undertaking.
The future is not predetermined; there are solutions to the current trend. For social media, for example, parents can enforce screen-free zones and prohibit phone use in areas like the bedroom and the dinner table, so that children can stay focused on their studies, on sleep and on verbal communication at mealtimes. Many schools are now banning cell phones; in a bold social experiment, Australia last month enacted a law preventing anyone under the age of 16 from having access to social media apps. Indonesia is poised to follow suit.
As for A.I. chatbots, there was an interesting wrinkle in the M.I.T. study that suggested how we can best use Chatty Pete to learn and write. Eventually, the groups in that study swapped roles: those who had relied only on their brains got to use ChatGPT, and vice versa. All the students wrote essays on the same topics they had chosen before. The students who had started with no assistance recorded the highest brain activity once they were allowed to use ChatGPT; those who had started with ChatGPT were never on a par with their colleagues when restricted to using their brains alone.
This suggests that, at least in the process of writing and learning, we need to start on our own before turning to A.I. tools for revisions. As Dr. Melumad explained, A.I. tools transform an active process in our brain – creative thinking – into a passive one by automating it.
So perhaps the key to using A.I. in a healthier way is to be more mindful about how we use it. Rather than asking a chatbot to do all the research on a broad topic, Dr. Melumad argues, use it as part of the research process to answer small questions; for deeper learning of a subject, consider reading a book. Or, in the case of a beekeeper: evaluate a colony directly; knowingly engage in and appreciate the ‘friction’ that Dr. Kosmyna described (what I called in a previous piece ‘the groan zone’); use reliable technology to clarify issues on which you are not clear (e.g. the symptoms of EFB); and for that vital big picture, read a book.

In terms of the very big picture, and if history does indeed repeat itself, we might be entering a neo-feudal age. In essence, feudalism was a contract by which the lower classes – vassals – worked the lands of their lord and gave him a portion of their crops and animals in return for protection from his knights, or within his castle, in times of danger. In other words, they outsourced their safety in return for their labor. They were as dependent on the protection of their seigneur as honey bees are on the pheromones in a hive.
The feudal age was ended by the introduction of gunpowder, which destroyed the invulnerability of a castle. Until recently, our ‘castle’ was the digital technology we used to outsource our memory and to store data. Now we can outsource our thinking itself, at the expense of our own cognition.

















