Perspective | Teach middle school students to use AI chatbots for good

When fed a prompt, an artificial intelligence (AI) chatbot can write an essay and then grade it, simultaneously replacing the student’s and the teacher’s labor. It’s easy to use, free to sign up for, and widely known. Students are going to use it. Schools need to respond.

Given their ubiquity, AI chatbot use among K-12 students is inevitable. We must implement a curriculum that teaches students and educators how to use these tools appropriately while maintaining academic and intellectual integrity. Educators have two main concerns regarding student use of chatbots: cheating and false information.

AI chatbots like ChatGPT are machine learning models, specifically deep learning models, meaning that they use training data to recognize patterns and draw conclusions. They are combined with natural language processing so that they can parse and formulate sentences based on typical speech patterns. Essentially, ChatGPT is a predictive language model: based on the patterns it has identified in its training data, it responds to user input by producing the most likely combination of words. It cannot distinguish fact from fiction, and there is no guarantee that its claims are valid.
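To make "predictive language model" concrete, consider a deliberately tiny sketch in Python. This is a hypothetical illustration of the underlying idea, not how ChatGPT actually works (real chatbots use neural networks trained on enormous bodies of text). The sketch counts which word follows which in a scrap of training text, then "writes" by repeatedly choosing the most frequent next word:

    from collections import Counter, defaultdict

    # A scrap of "training data" for the toy model.
    training_text = (
        "the cat sat on the mat "
        "the cat ate the fish "
        "the dog sat on the rug"
    )

    # Count how often each word follows each other word.
    successors = defaultdict(Counter)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        successors[current][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in training."""
        if word not in successors:
            return None
        return successors[word].most_common(1)[0][0]

    # "Write" a sentence one most-likely word at a time.
    word = "the"
    output = [word]
    for _ in range(5):
        word = predict_next(word)
        if word is None:
            break
        output.append(word)

    print(" ".join(output))  # prints: the cat sat on the cat

Note that the result is fluent-sounding but meaningless: the program, like a chatbot at far greater scale, reproduces the most likely pattern it has seen, with no notion of whether the output is true.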

According to Impact Research polling, almost half of students aged 12-14 already report using ChatGPT for schoolwork, and over three-quarters of student users say it has had a positive impact. Teachers nevertheless have valid concerns about academic integrity. Students may use ChatGPT to conduct research (anything from gathering background knowledge to treating it as a main source), summarize long articles or books instead of reading them, generate ideas for a project, or even complete assignments outright. Over-reliance can hinder students’ learning and facilitate deception, and teachers currently lack effective ways to determine whether a student has plagiarized an assignment from a chatbot.

School responses vary widely: some have banned ChatGPT on campus, while others have incorporated it into their teaching. Not all teachers view ChatGPT negatively: half of all teachers report using it for their jobs, and 40% encourage their students to use it. Even among educators who embrace AI technologies like ChatGPT, though, the concern about falsehoods remains.

Treating ChatGPT as a reliable source leaves students vulnerable to misinformation. OpenAI, ChatGPT’s creator, warns that the program “can occasionally generate incorrect information” or “produce harmful instructions or biased content.” Even this admitted tendency to “make up facts” and “hallucinate outputs” understates the issue. Google’s Bard, an AI chatbot intended to serve a similar function, was found to generate misinformation 78% of the time.

Efforts to improve the factual accuracy of this technology have yet to succeed. OpenAI released GPT-4, heralding it as a safer version of the technology. Nevertheless, it appears to be even less effective than its predecessors at suppressing disinformation, and it offers fewer disclaimers alerting users to unverifiable content. As tech companies fail to remediate these problems, ethical use becomes the responsibility of consumers, and our educational system should reinforce the essential practice of critical thinking.

Setting appropriate standards early in students’ use of AI chatbots will enable them to benefit responsibly from the technology, maximizing their educational attainment and minimizing their susceptibility to harm. Such a curriculum can be integrated in depth into existing tech literacy programs. Standards for behavior should be addressed individually by teachers at the beginning of every course, regardless of the academic subject.

  • Teachers must define what behavior constitutes cheating and what does not. Educators should recognize that banning AI use entirely is ineffective and unlikely to be adhered to; setting reasonable standards encourages compliance. For example, submitting an essay written entirely by AI is inappropriate, but using AI to generate a variety of potential topics may not be.
  • Teachers should engage in open conversation about the benefits and drawbacks of AI. Students should be allowed to interrogate the technology and must be made aware of both its usefulness as a tool and its fallibility.
  • AI should not be used to verify information or serve as a citable source. Students must be required to find additional justification from a credible source for all information drawn from ChatGPT. Students should also be made aware that chatbots are often unreliable in STEM subjects, such as when performing calculations, and their use may be further discouraged in these settings.

If equipped with knowledge of when AI is helpful and when it is not, as well as clear and reasonable expectations for appropriate behavior, students will be primed to succeed in academic settings. Above all, critical thinking is essential: it encourages continued learning and guards students against the falsehoods chatbots tend to perpetuate. These are core principles of teaching; they simply need to adapt to our ever-changing technological landscape.

Sofia Bliss-Carrascosa

Sofia is a Duke University sophomore majoring in public policy with a certificate in journalism. She works in the Duke Reporters’ Lab analyzing fact-checking and misinformation, as well as with the Duke Tech Policy team researching the effects of social media algorithms.