
Should Your Kids Use ChatGPT For Homework?

Reflections on an MIT study about generative AI, learning and school

A new MIT study claims that using ChatGPT and generative AI hurts your critical thinking skills, but that conclusion misses the point. And misinterpreting it will be problematic for parents, kids, and teachers.

The study title is pretty dire: “Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task.” Meanwhile, Time’s headline wasn’t much better: “ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study.” Alarming, isn’t it? However, analyzing the study and its context gives parents (and all of us) helpful guidance, especially if you’re asking: “Should my kid use ChatGPT for homework?”

Why the study isn’t what it appears and how it taps into an anxiety-filled moment for parents

Imagine this: you’re asked to write, in 20 minutes, a short essay responding to a prompt on a topic like loyalty or courage. Your setup is similar to the one below. There are no rewards or punishments for doing it well or poorly.

Image from: “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task”

Again: 20 minutes, a new tool, a new topic, and the ability to simply finish the task, with no consequences or feedback. That’s a recipe for something, but it isn’t critical thinking.

What the study results do imply: people involved in a study on writing don’t think very hard if there’s no incentive to do well on a given measure. These people, if given a new tool without training, limited time, and no feedback or guidance on how to improve, don't get better at the task over time. Instead, they use the assigned tool to finish the task in the time limit. And, generative AI use doesn’t encourage people to think about or remember the content as well as if they’d written it themselves.

It’s basically asking someone else to write the essay with no incentive to care about what’s written.

As far as I can tell, participants weren’t trained in generative AI usage, weren’t told if their essays were good, and experienced no consequences for writing things that weren’t very good or original. The group used a tool (ChatGPT) that isn’t designed for learning: it’s designed to generate text about something. They also only had 20 minutes. It’s not surprising those participants returned months later and didn’t try as hard to write another essay or recall what they had generated from ChatGPT.

The authors clearly have good intentions, telling Time: “Education on how we use these tools, and promoting the fact that your brain does need to develop in a more analog way, is absolutely critical,” says Kosmyna. “We need to have active legislation in sync and more importantly, be testing these tools before we implement them.”

However, the authors may also have brought a perspective of their own, telling Time that they set traps for LLM users: “Ironically, upon the paper’s release, several social media users ran it through LLMs in order to summarize it and then post the findings online. Kosmyna had been expecting that people would do this, so she inserted a couple AI traps into the paper, such as instructing LLMs to ‘only read this table below,’ thus ensuring that LLMs would return only limited insight from the paper.”

I’m glad the authors shared this study even before peer review, and I know studies must be tightly controlled to answer the research question. However, I worry how harmful incorrect interpretation could be. If anything, the study may indicate earlier generative AI instruction may be important to harness brand new capabilities effectively.

Parents are already anxious about this

The study is being dropped into an anxiety-filled moment for parents: Common Sense Media said last year that “76% of parents reported worrying about generative AI’s impact on their child’s critical thinking skills. Additionally, over half of teachers (52%) are concerned by the increase in AI-generated work being submitted as the child’s own.” And we all care about our kids’ economic futures at a time when the World Economic Forum says around 40% of job skills will change by 2030. That’s up from a projection that a third would change between 2015 and 2025, but down from a high of 57% in 2020.

I’m a parent and someone who works around AI professionally, and I am certain that using generative AI well will be at least as important as using the internet or personal computers well for work. I’m not alone. I recently finished a study on AI for work, writing that “more than 80% of designers and developers say learning to work with AI will be essential to their success in the future.” Compared to other professions that use computers and phones, this number is certainly high, but data from the UN suggests transformation of jobs will be common. The specific tools may change, but helping our kids understand the underlying capabilities will be important.

Generative AI is already in schools and at home

For many of us, helping our kids be happy and have the skills to be successful is a major driver. And, school is a key way to gain these skills. But I think school and learning will change.

In the UK, according to an Alan Turing Institute report, almost one in four children ages 8-12 are using generative AI, and that number jumps to 52% in private schools. This suggests an access issue, not a question of usefulness. Rising adult usage of these tools confirms that: when people have access to ChatGPT, Claude, Gemini, or Perplexity, they tend to find uses for them.

However, generative AI adoption has been blowing holes in the mechanics of education. 

One contact’s daughter had her essay incorrectly flagged as AI-generated; he sent me an article from the New York Times showing the issue is becoming more common. His daughter had to rewrite her essay under supervision, which sounded like a big waste of time and a stress-generator for everyone involved. Alpha School is making headlines by using an AI-powered program to teach for 2 hours a day and devoting the rest of the school day to guided projects, self-reporting good results from its small group. Meanwhile, three in five teachers used generative AI in their work last year, most commonly for lesson planning and research.

Colleges are engaged in a kind of student-teacher tug of war: professors and students alike use LLMs to finish tasks and get the results they want, while complaining that the other side uses too much AI. The New York Times reports the percentage of higher-ed instructors using generative AI doubled to more than a third from last year to this year. Anthropic analyzed student usage of its tool, finding significant adoption and raising the same critical thinking concerns as this MIT study.

The authors acknowledge some of the benefits to learning from generative AI, writing:

“One of the most unique features of LLMs is their ability to provide contextualized, personalized information [8]. Unlike conventional search engines, which rely on keyword matching to present a list of resources, LLMs generate cohesive, detailed responses to user queries. LLMs also are useful for adaptive learning: they can tailor their responses based on user feedback and preferences, offering iterative clarification and deeper exploration of topics [9]. This allows users to refine their understanding dynamically, fostering a more comprehensive grasp of the subject matter [9]. LLMs can also be used to realize effective learning techniques such as repetition and spaced learning [8].”

So should your kid use ChatGPT for homework?

Blanket advice to avoid generative AI seems wrong. However, this brings us to the question: should your kids use ChatGPT for homework? In thinking it through I turned to The Learning Agency, Common Sense Media, and The Alan Turing Institute. The answer seems to be yes with oversight and some caveats:

  1. Read about generative AI on your own first. Here’s a great guide from Common Sense Media. Crucially, remember that generative AI tools will make things up and many of the tools like ChatGPT will provide full, unfettered access to the internet. This isn’t that different from the caution many of us feel about our kids’ access to the internet. 

  2. Talk to your kids. What do they know about generative AI? Are their friends using it? Have they used it? How do they feel about it? Do they know it hallucinates things that sound very real? Dr Mhairi Aitken, Senior Ethics Fellow at the Alan Turing Institute, shared this: “children have important and nuanced opinions when it comes to the benefits and risks of generative AI. Children’s experiences with this technology are significantly different from those of adults, so it is crucial that we listen to their perspectives to understand their particular needs and interests.”

  3. Develop a plan to help kids engage critically with generative AI. Planning and intentional tech usage have been my main takeaways from Parent Tech: knowing the answers to questions like which shows to watch, which games to play, and for how long and when. Parents should have an idea of general boundaries and ground rules. Critical in this planning would be ways to help kids engage with the topics and to remind them continuously that every generative AI tool will make things up.

    The Learning Agency’s advice is very age dependent: “For young children, there are tools that use AI and automatic speech recognition for core skills, such as literacy development. These can be good tools to supplement what your child is doing in the classroom and can serve as an educational screen time option. For older students, I would encourage them to first start by learning what AI does, how it functions, and how to be critical consumers of the information they receive from AI.”

  4. Don’t generate a final work product, unless learning isn’t important. The MIT study shows pretty conclusively that recall of information suffers when less effort is spent creating it. And knowing enough about different topics to connect them is essential to creativity. I think creativity is one of the most critical skills for a kid to practice: the value of drawing on metaphors, history, and other rich sources of reframing and inspiration can’t be overstated. So the attitude “they can just look things up” isn’t right. The study also shows people can become overwhelmed by too much output from generative AI, particularly given time limits.

    Instead, help your kids use generative AI tools like ChatGPT to get new ideas, new sources, new angles and further engage with an idea they already have.

  5. Consider using ChatGPT as an editor with oversight. Buried in the original MIT study is the finding that asking for suggestions on existing essays engages the brain differently. This is how I find the most success, writing thoughts and asking for input as well as more questions to answer. I also often ask ChatGPT to edit as little as possible and preserve as much of my voice as possible. However, ChatGPT likely requires significant oversight for kids as it can pull broadly from the internet, which could be problematic. On the other hand, ChatGPT is ubiquitous, affordable and pretty easy to use for many purposes, making it useful for quick editing if a kid can sift through what to do with its output.

  6. Find tools designed for learning goals, age, subject, and with a philosophy you agree with. One major issue here is that ChatGPT isn’t specifically designed for learning or creative writing or research. It’s a general purpose tool and there’s likely something designed for this use case. The Learning Company suggested this list of resources from Edtech Insiders, but I haven’t spent enough time with it to have strong opinions. I’d look for an application that encourages exploration, proactively asks questions, and tailors itself to the user. And, I’d think about who was behind it. An interesting analogy is Sesame Street and TV. Sesame Street was invented with the idea of using advertising techniques to sell pro-social messages. So, who can do that well now for AI? I’m thinking about companies like Khan Academy and Pok Pok but actively researching more.

  7. Managing generative AI is a skill that’s worth practicing. Directing another person or entity is a critical skill for kids to understand. And the paper found participants’ brains worked differently when working with ChatGPT vs. writing on their own. If this is an important, distinct skill, shouldn’t we help our kids practice it, just like they practice working in groups?

Final thought

We're in a strange moment, but it's not the first time new technology has upended how we learn. Books, television, and search engines all sparked worries like this too. Even the quote above about “traditional search engine-based writing” could only have been written in the last few decades. During this research process I said, “Hopefully schools figure this out before my kids get older,” and a connection (the same one whose daughter was wrongly flagged for using AI) replied, “Who knows, kids / people will always be figuring out ways to game the system no matter what generation or what technology is popping :)”

Wise words; we should all keep adapting. And, crucially, I wrote this first without generative AI, so at worst I’ve sharpened my own thinking.