This Google blog entry nicely summarizes what happened when a computer beat a world-champion Go player 4 games to 1 last month: “…while the match has been widely billed as ‘man vs. machine,’ AlphaGo is really a human achievement. [Korean champion] Lee Sedol and the AlphaGo team both pushed each other toward new ideas, opportunities and solutions…”. The outcome surprised most Go players and artificial intelligence (AI) researchers, arriving perhaps decades sooner than expected.
(For context: I play, though not strongly; my best-ever rank was maybe 6 or 7 kyu. Wikipedia has a good article about the game. In chess you kill the opposing king, but in Go you only need to carve out more market share than your opponent.)
AlphaGo remembers, reasons (applies logic) and learns. But does AlphaGo think? Does it exhibit intelligence? And what does its victory say about artificial intelligence? I answer yes, yes, and some but not much.
Go is a deterministic perfect-information game, highly complex yet still only a tiny world unto itself. By contrast, the real physical world is incomprehensibly large, probabilistic at bottom, and mostly hidden, as in dark matter. We humans hide our feelings. We deceive other living beings, as do other animals and even plants.
If intelligence equates to memory plus logic plus the ability to learn, then AlphaGo surely has some intelligence. And AlphaGo does indeed think … about Go. But AlphaGo depends entirely on human handlers to feed it new information and manage its learning. Of course, we humans as infants depend similarly on our parents. But within a few years of birth we start gathering information on our own, learning (or not) according to our abilities, training and maturity. Reasoning per se is a closed world of pure logic. Memory for a computer is now a relatively simple matter and essentially boundless. But the ability to experience and learn autonomously remains a huge barrier to widespread machine intelligence.
Which brings me to my main point: memory and logic suffice for intelligence, but effective learning requires constant exposure to the always-changing real world, and machines are far, far from that. As I’ve said before, I think the Turing Test for machine intelligence is bogus because it assumes the machine must have experienced the world exactly as a human has. (See the book The Most Human Human for an entertaining and compelling account of competing in the annual Turing Test.)
In sum, I believe AlphaGo’s victory means we’re advancing faster than many realized along the avenues of memory and logic. But I believe machines are not much closer than before to experiencing the ever-changing world and learning the way we mammals, shaped by evolution, do: independently and with self-awareness.