I have been able to get out and enjoy some movies with my kids over the last few weeks. Black Widow, Jungle Cruise, and, most recently, Free Guy, have given me the opportunity to get back in the theaters, something I did not realize I missed as much as I did.
The last of those, Free Guy, is one of the funniest movies I have seen in a long time, and, considering the trailers, I am not giving anything away when I say there is an element of artificial intelligence within the plot. And it got me thinking more about how AI is perceived versus what it can do, and perhaps how that perception is oddly self-limiting.
Erik Larson’s The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do explores this topic in much greater depth than I can here, but Larson’s views mirror my own: the myth isn’t that true AI is possible, but rather, the myth is that its arrival is inevitable based on our present trajectory. And the business of AI is interfering with the science of AI in some very big ways.
Interfering? How, you might ask, can monetizing artificial intelligence interfere with its own progress?
AI today is good at narrow applications involving inductive reasoning and data processing, like recognizing images or playing games. But these successes do not push AI towards a more general intelligence. These successes do, however, make AI a viable business offering.
Human intelligence is a blend of inductive reasoning and conjecture, i.e., guesses informed by our own experiences and the context of the situation, which the AI community calls abduction. We have no idea how to program this kind of contextual, experiential guessing into computers today. And our success in those narrow areas has pulled the focus away from understanding the complexities of abduction, stifling innovation within the field.
It is a scientific unknown whether we are capable of producing an artificial intelligence that combines inductive reasoning and conjecture. But to assume it will "just happen" by chipping away at one part of the problem is folly. Personally, I believe artificial intelligence is possible, but not without a shift in focus from productization to research and innovation. If we understand how people make decisions, we can not only try to mimic that behavior with AI, but also gain more insight into ourselves in the process.