EdNC. Essential education news. Important stories. Your voice.

Perspective | Educators need to understand and embrace artificial intelligence writing tools

Editor’s Note: This article was originally published in EdSource.

ChatGPT and other artificial intelligence, or AI, writing tools can generate humanlike stories, essays, poems and other written forms. Writers can use these tools in many ways, including as a muse that inspires ideas; a co-author that helps craft text; a reviewer that provides constructive feedback; an editor that checks the details; or a ghostwriter that writes without credit.

Educators have many concerns about the impact of these powerful tools on teaching, learning and using writing in schools. Should some uses of AI be considered appropriate, while others are treated as a modern form of plagiarism? Should students master certain writing skills before being allowed to use AI tools? Can we monitor how students use them? Do AI tools fundamentally change what students need to learn and how they should be taught?

A common initial response is to ban the use of ChatGPT. However, bans will prove futile as AI writing capabilities become widely available and integrated into word-processing programs. We must accept that AI tools are changing how writing is done in every field, and embrace the reality that students need to learn to use them effectively.

Reforming writing in schools requires careful consideration because it will involve changes in curriculum standards, teaching practices, student assessments, teacher preparation and education policies. In some ways, this parallels past changes in mathematics education, in which calculators went from being banned to being required. These changes take time and cannot move as quickly as AI tools are advancing.

Limitations of AI writing tools

The impressive capabilities of AI writing tools come with important limitations for educators to consider, including the following:

AI systems do not replicate human knowledge, cognition or emotion. AI systems are trained by processing an enormous corpus of digital text. By contrast, much of human knowledge stems from goal-driven activities, social interactions, modeling of others and other interactions in the real world. These experiences lead to embodied understandings of causes and effects; emotional intelligence involving understanding others’ needs, motives and perspectives; a sense of family, community and culture; and, perhaps most importantly, a sense of self. AI will never match the richness of the human experience.

AI writing quality is limited. Since AI-generated text is based on patterns found in the training texts, it often has a dull, written-by-committee style that lacks engaging, creative writing. In addition, AI tools are limited in handling complex ideas, so their output is often overly simplistic and unconvincing.

AI systems are often outdated. AI systems are trained when created and are not continuously updated, so they can produce outdated information and fail to respond well to requests that require timely knowledge.

AI systems can produce harmful content. The internet materials used to train AI systems can include racist, sexist, homophobic and other forms of offensive content. As a result, AI can generate unintended (or intended) toxic outputs.

AI systems can lack veracity. AI tools can fabricate statistics, historical events, quotes, references and all sorts of other information, often producing authoritative-sounding text that is simply untrue.

Writing with AI tools

Given the limitations, AI tools do not produce quality text at the push of a button. Using them effectively requires that students learn to do the following:

  • Set directions for the goals, content, audience and style, which often involves writing parts of the text to guide the AI tool about what it is to produce.
  • Prompt the AI to produce the specific outputs needed, often providing separate prompts for each desired outcome, which can range from individual sentences to a complete report or story.
  • Assess the AI output to validate the information for relevance, accuracy, completeness, bias, timeliness and writing quality. Assessment can lead to revising the directions and prompts and having AI generate alternative versions of the text.
  • Curate the AI text to select what to use and organize it coherently, often working from multiple versions generated by AI along with human-written materials.
  • Edit the combined human and AI contributions to produce a well-written document.

These steps, which form the acronym SPACE, encompass new forms of human-computer interactions to accomplish writing tasks.

Educators must understand and embrace the changes driven by advances in AI, and it is time to begin the challenging work of reforming how we teach students to write with AI tools. Success will require collaborations of educators, researchers, AI experts, policymakers and others across the public and private sectors, focusing on what students need to learn to be successful in the AI-augmented world in which they will — and already do — live.

Glenn M. Kleiman

Glenn M. Kleiman is a senior advisor at the Stanford Graduate School of Education. Previously, he was a professor of education at NC State University, where he served as the executive director of the Friday Institute for Educational Innovation from 2007 to 2018. Dr. Kleiman led the Friday Institute teams that developed the North Carolina Digital Learning Plan for K-12 Schools and that collaborated with WestEd in developing the Sound Basic Education for All Action Plan for North Carolina.