Category Archives: Software

AlphaGo Zero Masters Chess in a Few Hours

Bernard,

Having mastered the game of Go over the course of months, Google’s AlphaZero machine-learning AI was supplied with the rules of chess and no other information whatsoever about the game. It learned to be a superhuman chess player after four hours of playing games against another instance of itself. Later that same day it also mastered Shogi, the Japanese version of chess, which is generally considered more difficult than Western chess. More here.

Wayne

 


Filed under artificial intelligence, Software

Go-ing: Gone

Bernard,

In March 2016 Google’s AlphaGo program defeated one of the top Go players in the world, a breakthrough for so-called artificial intelligence. AlphaGo learned the game starting from records of thousands of Go games played by masters around the world – mostly in Japan, Korea and China – for the past few hundred years.

The latest version of AlphaGo, named AlphaGo Zero, started learning last year with only the rules of Go and no input whatsoever from humanity’s history of the game. Zero learned by playing millions of Go games against another instance of itself, remembering what worked well and what did not. Then it played one hundred games against last year’s AlphaGo. Score: Zero 100 wins, AlphaGo none.
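The learning loop can be sketched in miniature. Go itself is far beyond a toy, so this hypothetical example — entirely my own construction, not DeepMind’s algorithm — uses a trivial take-1-2-or-3 stones game instead: two copies of one policy play each other millions of times, and each move is judged by its empirical win rate, with nothing but the rules supplied up front.

```python
import random

random.seed(42)
wins, plays = {}, {}  # (pile, move) -> outcomes observed during self-play

def legal_moves(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def choose(pile, explore=True):
    """Pick the move with the best empirical win rate, sometimes exploring."""
    moves = legal_moves(pile)
    if explore and random.random() < 0.2:
        return random.choice(moves)
    return max(moves, key=lambda m: wins.get((pile, m), 0) / plays.get((pile, m), 1))

def self_play_game(start=10):
    """Two copies of the same policy play one game; taking the last stone wins."""
    pile, player, history = start, 0, {0: [], 1: []}
    while pile > 0:
        move = choose(pile)
        history[player].append((pile, move))
        pile -= move
        if pile == 0:
            winner = player
        player = 1 - player
    return winner, history

for _ in range(50_000):
    winner, history = self_play_game()
    for side in (0, 1):
        for key in history[side]:
            plays[key] = plays.get(key, 0) + 1
            if side == winner:
                wins[key] = wins.get(key, 0) + 1

# With no human examples at all, the win-rate table converges on the game's
# optimal strategy: always leave the opponent a multiple of 4 stones
# (e.g. take 1 from a pile of 5).
```

Even this crude bookkeeping rediscovers the optimal strategy from nothing but the rules, which is the essence of what Zero did at vastly greater scale.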

From an article in the MIT Technology Review: “The most striking thing is we don’t need any human data anymore … By not using human data or human expertise, we’ve actually removed the constraints of human knowledge…” [italics added].

Game over, for humans.

Wayne


Filed under artificial intelligence, Software, Uncategorized

Arctic Sea Ice Extent: Sprucing up a Chart

Bernard,

This important and useful chart was on the National Snow and Ice Data Center site on August 8, 2017:

But I found it unusually hard to read. Here is my spruced-up version:

No more hard-to-read vertical text. The chart is zero-based, so you can see at a glance how much lower the extent was in 2012 and is projected to be in 2017. The date of measurement is prominent in the title area. Labels sit close to their items, so the eye doesn’t have to travel back and forth to interpret them. And the left axis shows percentage of total ocean area instead of the original’s square-km values, which required knowing that the Arctic Ocean’s total area is more than 14 million square km in order to realize that the ice extent remains nearly 100% at the start of May.

To me, the original’s main errors were 1) not being zero-based, which forces you to imagine the full picture in order to grasp the real meaning, and 2) expressing measured ice area on the left axis instead of a percentage of total Arctic Ocean area, forcing you to look elsewhere to find out how full or empty of sea ice the Arctic Ocean actually was. A basic rule of user-interface design is, “Don’t Make Me Think!” unnecessarily. That’s the title of my favorite user-interface book, written lightly and gracefully by Steve Krug and well worth a read.
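The axis conversion itself is simple arithmetic. Here is a minimal sketch, assuming the commonly cited figure of roughly 14.06 million square km for the Arctic Ocean’s total area (the extent value in the closing comment is a made-up illustration, not a measurement):

```python
ARCTIC_OCEAN_KM2 = 14.06e6  # approximate total area of the Arctic Ocean

def extent_percent(extent_km2, total_km2=ARCTIC_OCEAN_KM2):
    """Convert a sea-ice extent in square km to a % of total ocean area."""
    return 100.0 * extent_km2 / total_km2

# e.g. a hypothetical early-May extent of 13.0 million square km reads as
# roughly 92% -- nearly full, which a raw square-km axis does not make obvious.
```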

Wayne


Filed under Climate Change, Environment, Software, Uncategorized

So-Called Artificial Intelligence: Google Translate Awakens

Bernard,

This weekend’s New York Times has a fine article on AI, The Great A.I. Awakening. I commented briefly on the article on the Times’s site but I want to say a lot more.

The article vividly documents Google Translate’s recent revolution in how it works. Until now, auto-translation engines have modeled languages explicitly via rules, dictionaries, and the like. The new Translate, and its Chinese competitor at Baidu, instead enable a multi-layer neural net – a simulated brain, basically – to learn language translation by being fed thousands or millions of existing examples of translations. Researchers fed Google Translate the complete English and French versions of the Canadian Parliament’s proceedings, for instance, presumably along with many translated classic books, newspapers, and so forth. The new engines learn like human toddlers do, by unconsciously copying behaviors they observe, over and over again, until they evolve to proficiency. And like humans, the new engines will continue learning their entire “lives” by observing and copying new examples with new words and new phrases in all languages. But note: the new engines will not be able to go out in the world themselves to find worthy new examples, not for a very long time yet. They’ll need human care and feeding for the foreseeable future.
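To make “learning from examples” concrete without pretending to be a neural net, here is a deliberately crude sketch of the underlying statistical idea: guessing word translations purely from co-occurrence counts in a tiny, made-up parallel corpus. Everything here is my own illustration; Google’s system is a deep neural network trained on millions of sentence pairs, not this.

```python
from collections import Counter, defaultdict

# A tiny hypothetical parallel corpus (real systems ingest millions of pairs).
pairs = [
    ("the cat", "le chat"),
    ("the dog", "le chien"),
    ("a cat sleeps", "un chat dort"),
    ("a dog", "un chien"),
]

# English word -> counts of French words appearing in the same sentence pair.
cooc = defaultdict(Counter)
for en, fr in pairs:
    for word in en.split():
        cooc[word].update(fr.split())

def guess_translation(word):
    """Guess the French word that most often co-occurs with the English one."""
    return cooc[word].most_common(1)[0][0]
```

Feed it more pairs and the guesses sharpen, with no rules or dictionaries ever written down — a pale shadow of the neural approach, but the same spirit.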

From near the end of the article:

A neural network built to translate could work through millions of pages of documents of legal discovery in the tiniest fraction of the time it would take the most expensively credentialed lawyer. The kinds of jobs taken by automatons will no longer be just repetitive tasks that were once — unfairly, it ought to be emphasized — associated with the supposed lower intelligence of the uneducated classes. We’re not only talking about three and a half million truck drivers who may soon lack careers. We’re talking about inventory managers, economists, financial advisers, real estate agents.

All true. But all decades out in the future, maybe several or many decades. Why? I see it like this. A toddler computer learns from a team of humans whose only job today is to feed and raise this child quickly to do one thing well. The toddler computer has no eyes, ears, nose, mouth, hands, legs or feet. It processes fed-in data 24×7 and learns its one thing quickly in its tiny simulated brain, much faster than a human child would. But a human child takes in far richer data, at rates many orders of magnitude higher than the toddler computer can handle: visual, auditory, tactile, olfactory, the physics of standing and walking and making sounds, the complexities of language, and so forth, all integrated and organized within its real brain. See for example this discussion.

The human child’s brain is the culmination of millions of generations of increasingly powerful prototypes equipped with extraordinarily capable sensors of several kinds. The human brain perceives and processes the world around it continuously at a huge data rate. The human body moves freely in space. With respect to attaining human-like intelligence and self-awareness, then, the computer toddler has a truly enormous gap yet to cross. The crossing cannot possibly be quick, meaning in just a few years or a decade. It’s been only a few years since the total computing power on the planet exceeded just one human brain’s computing power.

Not to diminish or underplay the Google Translate achievements in any way. They are stunning. But I view them as like Watt’s invention of rotary steam motion in the late 1700s: an enormous enabler of a revolution, but still just the very beginning. And I’m not one bit worried by the article’s conclusion that “once machines can learn from human speech, even the comfortable job of the programmer is threatened.” No, not for a very long time yet to come.

As the article says, “The goal posts for ‘artificial intelligence’ are thus constantly receding.” Each step seems major, and Google Translate’s awakening is indeed major, but it’s still tiny in the big picture of true intelligence and self awareness.

Wayne

 


Filed under artificial intelligence, Evolution, Life, Software

So-Called Artificial Intelligence: Hate Sites and Fake News


Bernard,

An online Guardian article on December 4 described how search-engine auto-completion of typed-in queries leads preferentially to certain hate-filled sites and fake news. Some very smart and industrious people figured out how to game search engines – Google in particular – into automatically bringing up suggestions that took searchers to propaganda-laden or fake sites. For example, if you typed “are jews” into Google, among the first auto-completed choices that Google would bring up were things like “are jews evil” or “are jews white”, questions that some people pose and discuss in order to sway your thoughts in their preferred direction.

In the week or more since the article appeared, Google has begun working with various companies to weed out auto-completions that directly promote lies and hate. But Google has a very long way to go yet. See for example this Guardian article from December 11. I urge you to read it.

In essence, the smart and industrious people only had to register lots of links with search engines: tens or hundreds of thousands of links to their own sites, maybe millions. The engines’ “artificial intelligence” algorithms then automatically promoted the “popular” linked-to content high up into auto-completed search suggestions. (I’m probably being overly simplistic here, but it’s close enough for this discussion.)
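The gaming mechanism is easy to see in a toy model. This sketch is my own simplification, using neutral placeholder queries: suggestions are ranked purely by how often a query has been seen, which is roughly the naivety the manipulators exploited by flooding the system with their own phrases.

```python
from collections import Counter

# A naive autocomplete engine: suggest past queries ranked purely by frequency.
query_log = Counter()

def record(query, times=1):
    query_log[query] += times

def suggest(prefix, k=3):
    matches = [(q, n) for q, n in query_log.items() if q.startswith(prefix)]
    return [q for q, n in sorted(matches, key=lambda item: -item[1])][:k]

# Organic traffic accumulates naturally...
record("are jeans stretchy", 500)
record("are jets faster than props", 300)
# ...then a coordinated campaign floods the log with one loaded phrase.
record("are jets obsolete", 10_000)

# The campaign's phrase now tops every suggestion list for the prefix "are je".
```

Real engines weigh far more signals than raw counts, but as the Guardian pieces show, popularity-style signals were still manipulable at scale.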



Filed under Life, Politics, Software

Troubles in virtual currencies = troubles producing error-free code

Wayne,

I read this alarming account, A Hacking of More Than $50 Million Dashes Hopes in the World of Virtual Currency, in the New York Times, published on June 17, 2016. Here are three excerpts from the longer article, which is well worth reading in full:

A hacker on Friday siphoned more than $50 million of digital money away from an experimental virtual currency project that had been billed as the most successful crowdfunding venture ever — taking with him not just a third of the venture’s money but also the hopes and dreams of thousands of participants who wanted to prove the safety and security of digital currency. …

But just before the project stopped raising money in late May, computer scientists pointed out several vulnerabilities in its underlying code — effectively warning that what happened to the experimental consortium would be possible or even likely. 

The specific mechanism the hackers used is known as a recursive call vulnerability — essentially a malicious transaction that moves money away from the D.A.O. into a side fund in an endlessly repeating loop.
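The recursive-call (reentrancy) pattern can be simulated in a few lines. This is a hypothetical Python sketch with invented names, not the D.A.O.’s actual Solidity code: the vault pays out before updating its books, and the attacker’s receive hook re-enters withdraw while its recorded balance is still intact (capped at five re-entries here just so the recursion terminates).

```python
class Vault:
    """A toy fund that, like the D.A.O., pays out BEFORE updating its books."""
    def __init__(self, other_deposits):
        self.funds = other_deposits   # money belonging to everyone else
        self.balances = {}

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.funds += amount

    def withdraw(self, who):
        amount = self.balances.get(who, 0)
        if amount and self.funds >= amount:
            who.receive(amount)        # external call first: the fatal ordering
            self.balances[who] = 0     # books updated only after the call returns
            self.funds -= amount

class Attacker:
    def __init__(self, vault):
        self.vault, self.stolen, self.reentries = vault, 0, 0

    def receive(self, amount):
        self.stolen += amount
        if self.reentries < 5:         # re-enter while our balance is still intact
            self.reentries += 1
            self.vault.withdraw(self)

vault = Vault(other_deposits=100)
attacker = Attacker(vault)
vault.deposit(attacker, 10)
vault.withdraw(attacker)
# attacker.stolen is now 60: a 10-unit deposit has drained 60 units.
```

The fix is equally simple to state: update the books before making the external call, exactly the ordering the D.A.O.’s code got wrong.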

I followed a link in the New York Times account, Flaws in Venture Fund Based On Virtual Money, which is also worth reading.



Filed under Economics, Provably correct code, Software

Way to go, Trudeau! Quantum computing and the observable universe

Wayne,

This Slate posting recounts Justin Trudeau’s off-the-cuff description of quantum computing during a press conference. Not just a handsome face, and not just a prime minister (of Canada)! His words:

“Normal computers work, either there’s power going through a wire or not, it’s one or a zero. They’re binary systems. What quantum systems allow for is much more complex information to be encoded into a single bit. A regular computer bit is either a one or zero, on or off; a quantum state could be much more complex than that because as we know things can be both particles and waves at the same time and the uncertainty around quantum states allows us to encode more information into a much smaller computer. That’s what’s exciting about quantum computing…”
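Trudeau’s sketch can be made a little more concrete with standard textbook arithmetic. A qubit’s state is a pair of complex amplitudes, and the probabilities of reading 0 or 1 are the squared magnitudes of those amplitudes. This toy code simulates that arithmetic in ordinary Python; it is, of course, not a quantum computer.

```python
import math

# A qubit is a pair of complex amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# |a|^2 is the probability of measuring 0, |b|^2 the probability of measuring 1.
zero = (1 + 0j, 0 + 0j)  # a classical-looking "0"

def hadamard(state):
    """Put a qubit into an equal superposition of 0 and 1."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

superposed = hadamard(zero)
# probabilities(superposed) gives (0.5, 0.5), up to floating-point rounding:
# the qubit is genuinely "both" until it is measured.
```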

Bernard


Following links from the splendid piece on Trudeau, I was very much struck by this article by Dennis Overbye. From it:

“So where is the center of the universe? Right here. Yes, you are the center of the universe.”



Filed under Physics, Science in the News, Software