When ChatGPT came out, Cory Kohn was itching to bring it into the classroom. A biology laboratory coordinator in an integrated science department serving Claremont McKenna, Pitzer and Scripps Colleges, Kohn saw the tool as genuinely useful.
It promised to increase efficiency, he argued. But more than that, he felt it was important to teach his science students how to interact with the tool for the sake of their own careers, he first told EdSurge last April. In his view, it was like familiarizing his students with an early version of the calculator, and students who hadn’t encountered it would be at a disadvantage.
Kohn is hardly the only teacher confronted with generative AI. While he’s enthusiastic about its potential, others are less sure what to think about it.
For businesses, artificial intelligence has proven immensely profitable, by some accounts even lifting the overall amount of funding flowing into edtech last year. That has led to a frenetic rush to market educational tools as AI-powered. But the desire among some entrepreneurs to position these tools as replacements for teachers or personal tutors has provoked skepticism.
It’s also somewhat eclipsed conversations about the ethics of how these tools are implemented, according to one observer. Nevertheless, teachers are already deciding how — or even whether — to bring these tools into the classroom. And the decisions they make may be influenced by factors like how familiar they are with the technology or even what gender they are, according to a new study.
A Difference of Opinion
People are still figuring out the boundaries of this shiny new piece of technology in education, says Stephen Aguilar, an assistant professor at the University of Southern California Rossier School of Education. That can lead to missteps, such as, in his view, treating chatbots as replacements for instructors or paraprofessionals. Deploying the tools that way assumes that quick, iterative feedback drives critical thinking, when what students really need are deep conversations that pull them in unexpected directions, Aguilar says.
If the tools are going to deliver on their promise to improve education, Aguilar thinks it will take a deeper meditation on what generative AI can do, one that moves beyond the tools’ promise to catalyze efficiency.
A former sixth and seventh grade teacher in East Palo Alto, California, Aguilar is now the associate director of the Center for Generative AI and Society, which launched last year with $10 million in seed funding. The center is striving to chart how AI is reshaping education so that it can craft useful recommendations for educators, Aguilar says. The goal is to truly understand what’s happening on the front lines, he adds, because at this point no one knows exactly what the major implications will be.
As part of his role at the center, Aguilar conducted research into how teachers think about AI in classrooms. The study, “How Teachers Navigate the Ethical Landscape of AI in Their Classrooms,” gathered responses from 248 K-12 teachers, most of them white and working in public schools — a sample the report acknowledges as a limitation.
The main finding? That teachers’ confidence in, or anxiety about, the technology shaped their views of AI.
Perhaps more surprisingly, the study also found that teachers evaluate the ethical implications of these tools in different ways depending on their gender. When thinking about AI, women tended to be more rule-based in their reasoning, according to the report, considering what guidelines needed to be followed in using these tools in a beneficial way. They dwelled on the need to maintain privacy, or to prevent bias or confusion arising from the tools. Men, in contrast, tended to focus more on specific outcomes like the ability to boost creativity, the report says.
Artificial Tools, Human Judgments
When EdSurge first spoke to Kohn, the lab coordinator, he was using ChatGPT as a teacher’s assistant in biology courses. He cautioned that he couldn’t fully replace his human teaching assistants with a chatbot. Sometimes, he said, the chatbot was simply off the mark: when weighing experimental designs with students, for example, it would recommend control variables that just didn’t make sense. So its usefulness had to be weighed on a case-by-case basis.
Kohn also teaches a first-year writing course, AI Chatbots in Science, and he’s remained optimistic. He says his students use ChatGPT Plus, OpenAI’s paid version of ChatGPT, to brainstorm research questions, to help digest scientific articles and to simulate datasets. They also run an AI review of their writing, Kohn says.
That fits with what Aguilar has observed so far about how the chatbot craze might affect writing instruction. Ultimately, Aguilar argues, large language models could offer an engaging way for students to ponder their own writing, provided they approach the tools less like writing generators and more like readers: an extra pair of digital eyes that can probe a text. That still requires students to evaluate the feedback they receive, he adds.
These days, Kohn thinks of a chatbot as a sort of TA-plus. It can perform the duties of a human TA, he says, but also more varied jobs traditionally handled by a librarian or an editor, helping students sift through literature or refine their ideas.
Still, students have to use it judiciously, he adds: “It’s not a truth-telling panacea.”