Artificial intelligence is coming! It’s right around the corner and it’s going to change everything! We should maybe even fear for our future as a species! Ohmahgersh!!
Erm, well, that’s what they said in the eighties. And that’s what they said in the sixties. And, no surprise, it’s what many people are saying now. While I was at the Cannes Lions conference last week, I got to see Sir Tim Berners-Lee, the inventor of the World Wide Web, discuss his thoughts on AI in the context of a new report/book called “Sentience”. Sure enough, AI is coming according to Mike Cooper, who introduced Berners-Lee.
Optimism springs eternal, but with AI it has so far always been crushed by a resounding “nope, not yet”. Periods of intense interest and investment in AI have repeatedly been followed by so-called AI winters. Technologies billed as AI have failed to live up to their hype, and work once branded as AI has been rebranded, e.g. as “expert systems” or “machine learning”. Which is to say: lots of great advances have been made, but they haven’t resulted in intelligent machines so far.
Plenty of people write about AI, so I’m not going to go into a long discussion about what is or isn’t AI. But let me start by drawing a strong distinction, summarized in this tweet I sent out at the start of the talk.
I hate the conflation of AI and machine learning. They aren’t the same.
— Jason Baldridge (@jasonbaldridge) June 23, 2015
As James Iry rightly pointed out, I fulfilled the AI effect by saying that. But here’s what I mean and why I think it matters to distinguish ML and AI. Regardless of any nuanced technical definition of AI, I’m pretty sure that when the public hears “artificial intelligence”, they think of conscious non-biological entities that interact with humans much as we interact with each other. They don’t think of an expert system that can analyze a complex domain-specific problem and suggest interesting courses of action, or machine learning algorithms that find fascinating patterns in heaps of data. Despite this, the general public seems to find it all too easy to mentally close the gap between these two very different levels of technological and scientific accomplishment on the spectrum of AI-related work.
As a researcher who has been working on semi-supervised machine learning for natural language processing for the past fifteen years, I appreciate the magnitude of this gap first-hand. In my experience, most researchers in machine learning, natural language processing, computer vision and AI have a pretty healthy skepticism about claims that the conscious AI of the public’s imagination is imminent. We’ve made tremendous advances in analyzing speech, text, images and more with computers, but we are still crawling when it comes to general reasoning, perception and self-reflection in machines.
In this respect, despite my mild annoyance with the AI-is-coming introduction, I was actually quite pleased with many aspects of Berners-Lee’s talk. Two of the things he discussed were recurrent neural networks (RNNs) and game-playing algorithms. Both of these lines of work are based on what is now called deep learning, an area of machine learning that builds on the neural networks of thirty years ago and is making a big impact on the performance of machine-learned models for natural language processing, computer vision and more. RNNs construct internal representations of a problem that reduce much of the effort a machine learning practitioner must exert (which we generally refer to as feature engineering). Berners-Lee mentioned Andrej Karpathy’s excellent blog post “The Unreasonable Effectiveness of Recurrent Neural Networks” in this context. The game-playing algorithms he discussed are interesting because they learn to act within different computer games without being given the rules, receiving input only on whether their actions are paying off. Importantly, this demonstrates some generalization and reasoning abilities, though those abilities are still quite limited.
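To make that reward-only learning idea concrete, here’s a minimal sketch in Python: tabular Q-learning on a toy five-position game. This is not the deep reinforcement learning behind the systems Berners-Lee described (those replace the table with a neural network), and the game, states and parameters below are made up for illustration, but the core signal is the same: the agent is never told the rules, only whether its actions paid off.

```python
import random

# Toy game: positions 0..4 in a corridor; reaching position 4 pays off.
N_STATES = 5
ACTIONS = [-1, +1]                  # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][i] = learned estimate of the long-run payoff of ACTIONS[i]
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """The environment's rules, hidden from the agent except via rewards."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(qs):
    """Pick the highest-valued action, breaking ties at random."""
    best = max(qs)
    return random.choice([i for i, q in enumerate(qs) if q == best])

for episode in range(500):
    state, done = 0, False
    while not done:
        # Mostly exploit what has paid off so far; occasionally explore.
        a = random.randrange(2) if random.random() < EPSILON else greedy(Q[state])
        nxt, reward, done = step(state, ACTIONS[a])
        # Nudge the estimate toward reward plus discounted future value.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt

print(Q)  # the "go right" action should now dominate in every state
```

After a few hundred episodes, the “go right” action wins in every state, learned purely from the reward signal; swapping the table for a deep network is what lets the same idea scale to actual video games.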
Berners-Lee also brought up self-driving cars, which are indeed a great example of the power of current techniques and applied technology. He also discussed personal data, contrasting its value to governments and corporations with its value to the individual. Machine learning already plays a huge role both in the analysis of aggregated personal data for understanding populations and in individual-level recommendations (e.g. fitness tracking and Google Now).
Despite all this progress, and for better or for worse, these are still far from sentient machines. Deep learning is inspired by the functioning of human neurons, but as far as I’m aware, artificial neural networks still have nothing like the architecture of meat-based intelligence (yes, we’re made of meat, and that meat thinks!). This was really driven home for me several years ago when I saw a talk at UT Austin by Michael Mauk. His work centered on mapping out the neural architecture of the part of the rabbit brain that controls the eyelids. It was far more dense, complex, detailed and diverse than I had even imagined, and it was all in service of allowing the rabbit to detect a blink-inducing stimulus and then blink in response. For what it’s worth, I’d love to hear from researchers who work on deep learning about the validity of my perception of the gap between the complexity and architecture of ANNs and actual brains.
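For contrast, here is, more or less, everything a single unit in a standard artificial neural network computes: a weighted sum of its inputs squashed through a fixed nonlinearity (the inputs and weights below are hand-picked, illustrative numbers). Stacking and training millions of these units gets you deep learning, yet each unit is still nothing like the dense, diverse circuitry Mauk described for one small piece of rabbit brain.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One unit: weight the inputs, sum them, squash with a sigmoid."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Illustrative only: three inputs with made-up weights.
print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```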
So, call me a skeptic of actual thinking machines, or at least of the notion that they’ll be with us soon. Of course, expectations of less progress are equally prone to the quip that it is hard to make predictions, especially about the future. However, this isn’t what really matters: regardless of AI in the popular sense, powerful machine learning algorithms are being used right now, and these have real potential to both help and hinder the lives of people alive today. This may be less sexy and doesn’t lend itself to books titled “Sentience”, but more valuable thoughts and discussions are likely to come out of the here-and-now than out of some yet-to-come AI singularity. I think Andrew Ng did a great job of expressing this in his recent interview on the Talking Machines podcast.
This is exactly what left me a bit frustrated after Berners-Lee’s talk at Cannes. He was set up by the AI-is-coming introduction. He then apologized for not being an expert in artificial intelligence, but said they had asked him to give his thoughts, so, well, here they were. Berners-Lee went on to discuss a number of excellent and relevant here-and-now aspects of current artificial intelligence research. Unfortunately, his delivery was often rambling and semi-incoherent, and I don’t think he managed to course-correct the audience from the AI-in-their-heads to the AI-that-is-now. He’s clearly a brilliant man with much to say on the topic, but the audience would likely have benefitted from hearing a long-time machine learning researcher deliver the message. I’m thinking of people like my UT Austin colleagues Ray Mooney and Peter Stone. Ray has worked on AI, natural language processing, computer vision and machine learning for the past 30 years, and he wrote this essay on AI in 1979, when he was 17 years old. Peter has focused on robotics (especially soccer-playing robots) and reinforcement learning, and he has recently worked on computational simulations of the cerebellum with Michael Mauk.