What happens when we prefer artificial intelligence to real teachers?
I used to ask audiences two questions in rapid succession. First, “How many of you support the use of genetic engineering with humans?” Almost no hands would go up. In fact, I could hear gasps of disgust, as if I had asked, “How many of you support puppy killing just for the heck of it?” Then I would ask, “How many of you would support genetic engineering to save the life of your child?” Almost all hands would go up. Audience members were left to reconcile their responses.
By the way—I no longer ask these questions because we have come to expect gene therapy to help us live longer, healthier lives. In fact, we impatiently wait for its advances. What a difference twenty years can make.
When technology gets up close and personal
We are philosophical about adopting technology when dealing with it in the abstract. But our perspective changes dramatically when technology is up close and personal, especially when it comes to our children. After all, we’ll do anything for our kids. Otherwise, what kind of parents would we be?
So, let’s update my questions. “How many of you support the use of artificial intelligence to assist, and perhaps replace, human tutors in the education of your children?” My guess is that few reading this are raising their hands. In fact, I can hear whispers of disbelief emanating from the other side of the screen. Let me rephrase. “How many of you would support educational support systems that ensure the academic success of your children, even if it means using bots and other AI creations?” Pause. Still not sure? “The AI tutors are mobile, inexpensive, and available 24/7. They are personalized, and they continually adapt to your child’s learning style. They greatly improve the chance your child will succeed in school. And, above all, your kid loves using them.” Welcome to AI. When technology gets up close and personal, we say “bring it on.”
Here’s your quick refresher about the Turing Test. From Wikipedia: “The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses.” If the evaluator can’t tell the difference, then the machine passes the Turing Test. These days AI routinely passes the Turing Test. And the future is just getting started.
Consider the AI-powered teaching assistant called Jill Watson. “She” assisted students at the Georgia Institute of Technology for months before Professor Ashok Goel revealed that Jill was an AI, not a human being. Students were generally surprised they hadn’t noticed. Apparently Jill passed the Turing Test with flying colors. One student even reported seeing no personality in Jill’s responses, which was exactly what he expected from a human TA (teaching assistant). Perhaps human TAs need personality reprogramming?
In fact, we are entering a post-Turing Test era, in which we not only expect AI to fool us but welcome it to surpass us. We can easily imagine a situation in which a tutor becomes so adept in a content area that it no longer merely tutors, but takes the place of the teacher. The AI teacher will be a “value-added human being,” in that it will be able to cull information from an unlimited number of sources instantaneously, informing its lectures and conversations with the latest research in its field in real time. It will be able to read facial expressions to gauge students’ level of understanding, and provide examples that illuminate course content and are specifically attuned to each student’s learning style. AI teachers may even draw on a joke database to make lessons more entertaining. The jokes, of course, would be AI-tested to make sure that students actually found them funny.
But won’t we be crossing a dangerous line, losing the organic presence of a teacher that defines our humanity? Yes, we will. But will we be able to tell?
We may actually enter a phase in which we simply don’t want to know what’s AI and what’s human, because then we will have to deal with the discomfort of knowing. Up close and personal, all we really want is results.
Really smart people have said a number of unintelligent things to me over the years, like “the Internet will never catch on” and “personal computers are just a fad.” More recently, I have heard that AI will never replace teachers. We will resist AI because, to put it in scientific terms, it gives us the creeps. For now, it is our emotional response to AI that keeps it at bay. But that is true only for the AI we know about. Most of it is so embedded in our daily lives that we don’t realize it’s there. And even if we did, what would we do?
If we want a world in which AI doesn’t completely replace humans, then we need a new goal when it comes to progress. We need to decide that human imperfection is preferable to machine precision. And we need the wisdom to know the difference.
Dr. Jason Ohler is a professor emeritus and lifelong digital humanist who has been helping students at all levels, K through PhD, understand the ethics of living a digital lifestyle. His recent book, 4Four Big Ideas for the Future, reflects on his 35 years in the world of educational media. Visit JasonOhlerIdeas.com.