
How can students engage with AI while protecting their emotional well-being?

The William and Ida Friday Institute for Educational Innovation at North Carolina State University recently hosted a webinar focused on how students can engage with artificial intelligence (AI) safely while protecting their social and emotional well-being. During the event, panelists shared considerations for educators and parents on how to handle the technology. 

Over-reliance, cybersecurity, and transparency

Parents are concerned that interactions with conversational AI chatbots will replace human interactions for their children, said Pilyoung Kim, director of the Family and Child Neuroscience Laboratory and co-director of the Brain, Artificial Intelligence, and Child Center.

However, Kim also said that children prefer to have conversations with AI because the chatbots are nonjudgmental.

“On the other hand, though, I think there’s been real concerns coming out related to some of the youth, (who are) especially very vulnerable for the possibility of this over-reliance,” Kim said. 

Kim said this can also be an issue for adults who are experiencing loneliness or who are on the autism spectrum. The professor also said they believe that younger people's capacity to understand other people's emotions, let alone an AI's lack of emotions, is limited.

Kristi Boyd, who works as a trustworthy AI specialist at SAS Institute, said people should ask questions about the functions of AI in different toys and tools in their everyday lives. 

“Your toothbrush probably doesn’t need AI,” Boyd said. 

Other questions that Boyd offered were whether an AI toy is collecting data, where that data is being sent, and what cybersecurity measures are in place.

“If it’s connected to the Wi-Fi, it means it can also be hacked in,” Boyd said.

Many AI tools are trained on data from the internet, which has an English-language and Western focus. As a result, the tools might not be as easy to use for students from other cultures, Boyd said.

Mathilde Cerioli, the final panelist and chief scientist at the nonprofit everyone.AI, said the definition of safety should evolve, which means thinking about how to watch for warning signs in children's digital spaces as well as in the physical world. Regarding concerns about how AI is developed, Cerioli said that language matters.

“All language shapes our understanding of this world and the concept behind them,” Cerioli said.

Transparency

To mitigate the risks of children using AI, what if the technology were transparent about its abilities? Kim said this is one route technology companies could take.

Kim said that research shows an AI bot's likeability does not change when the bot is explicit about not having emotions and only being able to provide information.

Additionally, the level of transparency can depend on the audience, panelists said.

“The parallel I like to draw is I’m not an electrician,” Boyd said. “What I do know is that if I plug my laptop into the outlet, it’s going to power it on. I also know that if I blow dry my hair in the shower — probably going to die, right? And so I have that general sense of literacy and awareness of what are the benefits that I can get from electricity. And also, how do I keep myself safe? How do I keep my family safe?”

Not everyone has to become a data scientist, but having a baseline understanding of AI and of what not to use it for is a step toward AI literacy, Boyd said.

Recommendations

Panelists concluded the session by talking about ethical concerns and recommendations for parents and educators.

Kim said children need a safe space where they can explore their ideas with the AI bots. 

“I think children’s technology use is a huge source of their stress because of the lack of guidance and the risks that are related to technology,” Kim said.

Cerioli also said that AI can help engage students by providing a “dopamine hit” when they complete tasks.

Boyd suggested that people look at how other countries are managing children’s access to AI and technology use in general. She also said that people should engage in “ethical inquiry.”

For instance, that means asking questions about the purpose of the AI tool being used and the goal behind it.

“I think the pandemic showcased to us, are there communities that may not have access to the tool?” Boyd said. “A lot of use of AI depends on having the infrastructure set up right, and in the U.S., 25% of families still don’t have reliable infrastructure and reliable access to the internet. You take that globally, the number is exponentially greater. So who’s being left out and who’s not getting the benefits?”

A recording of the panel discussion and other information from the Friday Institute’s “Real Issues, Real Data” series can be found here.