An online Guardian article on December 4 described how search-engine auto-completion of typed-in queries leads preferentially to certain hate-filled sites and fake news. Some very smart and industrious people figured out how to game search engines – Google in particular – into automatically bringing up suggestions that took searchers to propaganda-laden or fake sites. For example, if you typed “are jews” into Google, among the first auto-completed choices that Google would bring up were things like “are jews evil” or “are jews white”, questions that some people pose and discuss in order to sway your thoughts in their preferred direction.
In the week or more since the article, Google has begun working with various companies to weed out auto-completions that directly promote lies and hate. But Google has a very long way to go yet. See, for example, this Guardian article from December 11. I urge you to read it.
In essence, the smart and industrious people only had to register enormous numbers of links with search engines, tens or hundreds of thousands of links pointing to their own sites, maybe millions. The engines' "artificial intelligence" algorithms then automatically promoted the "popular" linked-to content high up into the auto-completed search suggestions (I'm probably being overly simplistic here, but close enough for this discussion).
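The gaming described above can be sketched in miniature. This is a deliberately toy model of my simplistic description, not Google's actual ranking: it assumes suggestions are ordered purely by how often a query has been seen, so sheer volume wins. All names and the example queries are hypothetical.

```python
from collections import Counter

# Toy model: rank auto-completions purely by observed query frequency.
# Real engines use many more signals; this only illustrates why a
# coordinated flood of queries/links can dominate the suggestions.
class ToyAutocomplete:
    def __init__(self):
        self.counts = Counter()

    def record_query(self, query):
        """Count one observation of a full query string."""
        self.counts[query.lower()] += 1

    def suggest(self, prefix, k=3):
        """Return the k most frequent queries starting with prefix."""
        prefix = prefix.lower()
        matches = [(q, n) for q, n in self.counts.items()
                   if q.startswith(prefix)]
        matches.sort(key=lambda qn: -qn[1])
        return [q for q, _ in matches[:k]]

ac = ToyAutocomplete()
# Ordinary organic traffic:
for _ in range(5):
    ac.record_query("are apples healthy")
# A coordinated campaign floods the counter with its preferred phrasing:
for _ in range(100):
    ac.record_query("are apples poisonous")

print(ac.suggest("are apples"))  # the flooded query ranks first
```

With nothing but volume as a signal, the campaign's query tops the list, which is roughly the weakness the propagandists exploited.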
Humans at Google began to solve the problem by scrutinizing lots of possible auto-completions for offensive or false implications. Quite likely some humans wrote software to help analyze hundreds of thousands or even hundreds of millions of searches for offensiveness or fakery. Out of the effort came hundreds, maybe thousands, of new rules about what searches NOT to suggest during auto-completion despite their apparent popularity.
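The human-written rules I'm describing amount to a veto layer on top of the ranking: no matter how "popular" a suggestion is, a matching rule removes it. A minimal sketch of that idea, with an entirely hypothetical rule list:

```python
# Hypothetical blocklist: human-curated substrings that veto a
# suggestion regardless of its popularity score. Real rule systems
# are far more elaborate (patterns, languages, context), but the
# layering is the point: humans override the frequency ranking.
BLOCKED_PATTERNS = ["poisonous"]  # stand-in for thousands of real rules

def filter_suggestions(suggestions, blocked=BLOCKED_PATTERNS):
    """Drop any suggestion containing a blocked pattern."""
    return [s for s in suggestions
            if not any(p in s for p in blocked)]

raw = ["are apples poisonous", "are apples healthy", "are apples fruit"]
print(filter_suggestions(raw))  # ['are apples healthy', 'are apples fruit']
```

Note that every rule here was written by a person; the "intelligence" doing the judging is human, which is exactly the conclusion below.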
Conclusion: So-called artificial intelligence remains far, far away from being an automatic and reliable part of human interactions with the world and the universe, especially in important matters like finding out facts and truth. We humans can use and build software, for example search engines and their extensive rules, to help us sort out what's real and true and what's not. Machines are nowhere near capable of doing this work on their own. I believe strongly, based on my decades of building and using complex software, that it'll take at least decades more until machines can even approach that capability.
See the excellent book The Most Human Human for one measure of the gap between humans and machines. And remember the “intelligent” Microsoft chat-bot, Tay, who in early 2016 evolved into a racist Nazi within hours of going live.