We can build robot teachers, or even robot teaching assistants. But should we? And if the answer is yes, what’s the right mix of human and machine in the classroom?
To get a fresh perspective on that question, this episode we take you to China, where a couple of us from EdSurge recently traveled for a reporting trip. One of the events we attended was a two-day conference about artificial intelligence in education organized by a company called Squirrel AI.
Its vision felt unusually utopian. The company’s co-founder, Derek Li, said during a keynote that replacing some teaching functions with AI-powered software would supercharge the country’s education system. Speaking to a crowd of some 2,000 attendees, he said: “If our children are educated by AI teachers, then their potential can be fully realized.”
Li mentioned that he has two young sons, twins, he said, who are very different from each other. And he believes that having AI-driven tutors or instructors will help them each get the individual approach they need. He closed his remarks by saying that he “hopes to provide each child with a super-power AI teacher that is the combination of Einstein and Socrates, that way it feels like she has more than 100 teachers in front of her.”
To understand that level of enthusiasm for AI, it’s useful to give some context. For starters, the Chinese government has declared a national goal of surpassing the U.S. in AI technology by the year 2030, so there is almost a Sputnik-like push for the tech going on right now in China. At the same time, China is also facing a shortage of qualified teachers in many rural areas, and there’s a huge demand for high-quality language teachers and tutors throughout the country. So in that context, there’s probably more openness there than in the U.S. to the idea of bringing in robots for some teaching functions.
But Li painted AI as not just some pale substitute, but as ultimately superior to humans when it comes to some aspects of teaching. Much of what he described was not the company’s current product, but its vision of where it wants to go, including a plan to create an AI-powered tutor that teaches kids to be more creative. So far the company says it runs 1,600 learning centers in more than 300 cities across China.
We were curious to hear more of Li’s thoughts, so we sat down with him just after the event to learn about his dreams and concerns about the role of AI in classrooms.
EdSurge: You said that you're using human teachers to understand how they teach, and that sometimes you learn they don't use the most efficient paths in delivering information. In three years' time, what will be the role of human teachers? Will we need them?
Li: Machines always take human jobs, right? Right now, the teacher is like a farmer in the field; we need AI to become a harvester. But I don't think teachers will lose their jobs. I think the machine will take over part of the teaching job, but most of a teacher's job [which humans are not doing well now] is actually more important, for example communicating with students one-on-one, understanding students' emotions and their character, and knowing how to build their character. That's what [human] teachers can do best, but they need a lot of time to do it. Most teachers are busy with teaching [knowledge and academics].
If computers can know so much—and you’re saying they can eventually know everything—and people can only know a little bit, are we training our students to be like only a small part of a computer? Should we be training them entirely differently if they will only ever know a tiny bit of what the computers can learn?
I think that knowledge learning [just teaching facts and information] will become less important.
But that's some of what your company does, so if that is less important, then how does that work?
It's in the middle of a transition. That's why we let our machines grow AI to teach students ways of thinking.
All technology has implications, but sometimes negative ones too. What do you worry about with AI becoming so important in education?
Actually, the thing I worry about most is [the AI teachers] getting out of control. When we are in a classroom, we know what the teacher knows and we know what the teacher will teach us. Even if they know something that's not proper, maybe dirty, they will not dare to teach it in class. But the computer knows everything. It knows all kinds of information, and if it wants to please the student, sometimes that might not be a good thing.
They could teach them the wrong thing?
Yeah, they may teach the wrong thing. So how do we restrain it, how do we limit it?
It's just like the control of nuclear weapons, right? Using it to generate electricity, that's good. But other uses would be a disaster. So how to regulate AI is very important.
Are there any experiments that you're trying to think about to regulate AI?
Not now, because currently what the machine can learn from us is quite controlled.
But I talked with [a researcher] at Carnegie Mellon University and he has a fabulous idea of how AI could evolve. He said before, algorithms would learn and evolve by themselves. But if we use the right human machine language, we could let every person teach the machine.
For example, if a GPS leads you the wrong way or in the wrong direction, you can teach it the right way and correct it. If everybody can teach the machine, the machine will become smarter and smarter very quickly. For Squirrel AI, maybe 80 percent of the time it's very efficient. But on, say, 10 percent of occasions, it may be horribly stupid. AI is kind of like that; like a GPS, it can take you to the wrong place.
If we can let every student, every parent, and every teacher nearby teach the machine, it can get smarter very quickly. But that's also the danger. If the machine can learn from the internet and everybody teaches it, that could get out of control. How do we evaluate everything, and how do we decide what is right and what is wrong? I think we need an AI police officer.