AI Machine Beats Champion Chess Program


AlphaZero, the game-playing AI created by Google sibling DeepMind, has beaten the world’s best chess-playing computer program, having taught itself how to play in under four hours. The repurposed AI, which as AlphaGo has repeatedly beaten the world’s best Go players, has been generalised so that it can now learn other games. Starting from nothing but the rules of chess, it took just four hours of training before it beat the world champion chess program, Stockfish 8, in a 100-game match-up. AlphaZero won or drew all 100 games, according to a non-peer-reviewed research paper published on arXiv, Cornell University Library’s preprint server.


“Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi [a similar Japanese board game] as well as Go, and convincingly defeated a world-champion program in each case,” said the paper’s authors, who include DeepMind founder Demis Hassabis, a child chess prodigy who reached master standard at the age of 13.

“It’s a remarkable achievement, even if we should have expected it after AlphaGo,” former world chess champion Garry Kasparov told Chess.com. “We have always assumed that chess required too much empirical knowledge for a machine to play so well from scratch, with no human knowledge added at all.”

Computer programs have been able to beat the best human chess players ever since IBM’s Deep Blue supercomputer defeated Kasparov on 12 May 1997. DeepMind said the difference between AlphaZero and its competitors is that its machine-learning approach is given no human input apart from the basic rules of chess; the rest it works out by playing itself over and over, reinforcing its own knowledge. The result, according to DeepMind, is that AlphaZero took an “arguably more human-like approach” to the search for moves, processing around 80,000 positions per second in chess compared with Stockfish 8’s 70 million.
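
To make the self-play idea concrete, here is a toy sketch in Python. It is not DeepMind’s method: AlphaZero pairs a deep neural network with tree search, whereas this example learns a tiny game (Nim) from nothing but its rules and the final results of games it plays against itself. Every name in it is illustrative.

```python
import random

# Toy self-play learner (illustrative only, not AlphaZero's method).
# Game: a pile of sticks, each player removes 1-3, whoever takes the last stick wins.
# The learner is given only these rules and the final result of its own games.

def self_play_nim(num_games=20000, pile=21, epsilon=0.1, lr=0.05):
    value = {0: 0.0}  # estimated winning chance for the player to move at each pile size
    for _ in range(num_games):
        sticks, visited = pile, []
        while sticks > 0:
            legal = [m for m in (1, 2, 3) if m <= sticks]
            if random.random() < epsilon:
                move = random.choice(legal)              # explore
            else:                                        # exploit: leave the opponent badly placed
                move = min(legal, key=lambda m: value.get(sticks - m, 0.5))
            visited.append(sticks)
            sticks -= move
        result = 1.0                                     # the player who moved last has won
        for position in reversed(visited):
            old = value.get(position, 0.5)
            value[position] = old + lr * (result - old)  # nudge toward the observed outcome
            result = 1.0 - result                        # the previous mover is the opponent
    return value

values = self_play_nim()
# With perfect play, pile sizes that are multiples of 4 are lost for the player to move.
print({n: round(values.get(n, 0.5), 2) for n in (4, 8, 12, 5, 9, 13)})
```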

Against Stockfish 8, AlphaZero won 25 games starting as white (with first-mover advantage), won a further three starting as black, and drew the remaining 72. AlphaZero then learned shogi in two hours before beating the leading program Elmo in a 100-game match-up, winning 90 games, losing eight and drawing two. The new, generalised AlphaZero was also able to beat the “superhuman” former version of itself, AlphaGo, at the Chinese game of Go after only eight hours of self-training, winning 60 games and losing 40.

While experts said the results are impressive, with potential across a wide range of applications to complement human knowledge, Professor Joanna Bryson, a computer scientist and AI researcher at the University of Bath, warned that it was “still a discrete task”.

Source: https://www.theguardian.com/

Robots Can Speak Like Real Humans

Generating speech from a piece of text is a common and important task undertaken by computers, but it’s pretty rare that the result could be mistaken for ordinary speech. A new technique from researchers at DeepMind, Google’s sibling under Alphabet, takes a completely different approach, producing speech and even music that sounds eerily like the real thing.


Early systems used a large library of the parts of speech (phonemes and morphemes) and a large ruleset that described all the ways letters combined to produce those sounds. The pieces were joined, or concatenated, creating functional speech synthesis that can handle most words, albeit with unconvincing cadence and tone. Later systems parameterized the generation of sound, making a library of speech fragments unnecessary. More compact — but often less effective.
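
As a rough sketch of the concatenative approach described above, and not of any particular production system, the snippet below joins stored unit waveforms end to end. The sine tones standing in for recorded phonemes and the `unit_library` lookup are purely illustrative.

```python
import numpy as np

# Rough illustration of concatenative synthesis (not any production system):
# a library of stored unit waveforms is looked up and the pieces are joined end to end.
# Sine tones stand in for recorded phonemes here, purely for illustration.

SAMPLE_RATE = 16000

def tone(freq_hz, duration_s=0.12):
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return 0.3 * np.sin(2 * np.pi * freq_hz * t)

# Hypothetical unit library: phoneme label -> stored waveform.
unit_library = {"HH": tone(220), "EH": tone(330), "L": tone(275), "OW": tone(440)}

def concatenate(phonemes):
    """Join stored units end to end: functional, but with flat cadence and tone."""
    return np.concatenate([unit_library[p] for p in phonemes])

waveform = concatenate(["HH", "EH", "L", "OW"])   # a crude "hello"
print(len(waveform), "samples,", len(waveform) / SAMPLE_RATE, "seconds")
```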

WaveNet, as the system is called, takes things deeper. It simulates the sound of speech at as low a level as possible: one sample at a time. That means building the waveform from scratch, at 16,000 samples per second.

Each dot is a separately calculated sample; the aggregate is the digital waveform.

You already know from the headline, but if you don’t, you probably would have guessed what makes this possible: neural networks. In this case, the researchers fed a ton of ordinary recorded speech to a convolutional neural network, which created a complex set of rules that determined which tones follow other tones in every common context of speech.

Each sample is determined not just by the sample before it, but by the thousands of samples that came before it. They all feed into the neural network’s algorithm; it knows that certain tones or samples will almost always follow each other, and certain others will almost never. People don’t speak in square waves, for instance.
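
A minimal sketch of that idea, not DeepMind’s WaveNet code: each output sample is computed only from samples that came before it, and stacking dilated causal filters lets that context reach roughly a thousand samples back. The filter weights here are arbitrary placeholders.

```python
import numpy as np

# Minimal causal, dilated 1-D filtering (illustrative only, not WaveNet itself):
# each output sample depends only on the current and earlier input samples.

def causal_dilated_conv(x, weights, dilation):
    """output[t] is computed from x[t], x[t - d], x[t - 2d], ... and never from the future."""
    y = np.zeros_like(x)
    for t in range(len(x)):
        acc = 0.0
        for k, w in enumerate(weights):
            idx = t - k * dilation
            if idx >= 0:                 # never look into the future
                acc += w * x[idx]
        y[t] = np.tanh(acc)              # simple nonlinearity between layers
    return y

rng = np.random.default_rng(0)
signal = rng.standard_normal(16000)      # one second of 16 kHz "audio"

# Doubling the dilation each layer grows the context exponentially: with a
# two-tap filter and dilations 1, 2, 4, ..., 512, each output sample sees a
# window of 1,024 samples.
out = signal
for dilation in [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]:
    out = causal_dilated_conv(out, weights=[0.6, 0.4], dilation=dilation)
print(out.shape)
```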

Source: https://techcrunch.com/tone

Artificial Intelligence: The Rise Of The Machines

In a milestone for artificial intelligence, a computer has beaten a human champion at a strategy game that requires “intuition” rather than brute processing power to prevail, its makers said Wednesday. Dubbed AlphaGo, the system honed its own skills through a process of trial and error, playing millions of games against itself until it was battle-ready, and surprised even its creators with its prowess.


“AlphaGo won five-nil, and it was stronger than perhaps we were expecting,” said Demis Hassabis, the chief executive of Google DeepMind, a British artificial intelligence (AI) company.

A computer defeating a professional human player at the 3,000-year-old Chinese board game known as Go was thought to be about a decade off. “The clean-sweep victory over three-time European Go champion Fan Hui signifies a major step forward in one of the great challenges in the development of artificial intelligence: that of game-playing,” the British Go Association said in a statement. The two-player game is described as perhaps the most complex ever designed, with more configurations possible than there are atoms in the Universe, Hassabis says. Players take turns placing stones on a board, trying to surround and capture the opponent’s stones, with the aim of controlling more than 50 percent of the board. There are hundreds of places where a player can place the first stone, black or white, with hundreds of ways in which the opponent can respond to each of these moves, and hundreds of possible responses to each of those in turn.
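
Some back-of-the-envelope arithmetic makes that scale concrete. The legal-position count used below is the roughly 2 × 10^170 figure published for the 19×19 board, set against the usual estimate of about 10^80 atoms in the observable universe.

```python
# Rough arithmetic illustrating the branching described above.
board_points = 19 * 19                       # 361 possible first moves on a 19x19 board
first_three_moves = board_points * (board_points - 1) * (board_points - 2)
print(f"distinct three-move openings: {first_three_moves:,}")   # ~46.7 million

legal_positions = 2e170                      # approximate published count of legal 19x19 positions
atoms_in_universe = 1e80                     # common rough estimate
print(f"positions per atom: {legal_positions / atoms_in_universe:.0e}")
```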

Source: http://phys.org/

Smart Glasses For Visually Impaired

It might look like a cartoon, but this could change the lives of those with sight problems. Images like these are seen by wearers of Oxford University‘s new smart-glasses. The smart-glasses are designed to help people with serious visual impairments see. Developed by Stephen Hicks and his research team, they use cameras to augment vision. Hicks says they even work for those registered blind, by improving their depth perception.

“When you go blind, you generally have some sight remaining, and using a combination of cameras and a see-through display, we’re able to enhance nearby objects to make them easier to see, for obstacle avoidance and also facial recognition,” says neuroscience researcher Stephen Hicks of Oxford University. The glasses use three-dimensional cameras that can detect the structure and position of nearby objects. Software then uses that information to block out the background and highlight only what is nearest to the user.

“We turn that into a high-contrast cartoon that we then present on the inside of a see-through pair of glasses, and then we can add the person’s normal vision to the enhanced view, and allow the person to use their remaining sight as they generally would have done, to see the world in a better way,” added Hicks.

More than 360,000 people in the UK are registered as blind, according to a British sight charity. Hicks says the glasses are different from other products, with depth perception a unique facet of the smart-glasses technology. Google is helping fund the research after it won an award. After testing the glasses outside a laboratory setting, the final challenge before production will be to make them smaller.
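
A simplified sketch of that processing chain, assuming a per-pixel depth map as input and standing in for, rather than reproducing, the Oxford team’s software: everything beyond a cut-off distance is blacked out, and what remains is rendered as a few flat brightness levels, nearer meaning brighter.

```python
import numpy as np

# Simplified sketch of depth-based highlighting (illustrative only):
# suppress everything beyond a cut-off and render nearby structure as a
# bright, high-contrast, cartoon-like overlay.

def highlight_nearby(depth_m, max_distance_m=2.0):
    """Turn a depth map (metres per pixel) into a high-contrast brightness image."""
    near = depth_m < max_distance_m                      # keep only nearby structure
    # Nearer objects get higher brightness; the background is blacked out.
    brightness = np.where(near, 1.0 - depth_m / max_distance_m, 0.0)
    # Quantise to a few levels for the flat, cartoon-like look described above.
    return np.round(brightness * 4) / 4

# Tiny synthetic depth map: a close object (0.8 m) in front of a far wall (5 m).
depth = np.full((6, 8), 5.0)
depth[2:5, 2:5] = 0.8
print(highlight_nearby(depth))
```
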
Source: http://www.reuters.com/

Anyone Can Buy Google Glass April 15

Starting at 9 a.m. ET on April 15, anyone in the US will be able to buy Google Glass, for one day only. This is the first time the device has been made available to the general public. So far, the face-mounted nanocomputers have been sold only to Google “Explorers,” the company’s name for early adopters. At first only developers could buy Glass, but Google slowly expanded the program to include regular people. Some were hand-picked; others applied to become Explorers through Google contests by sharing what cool projects they would do if they had Glass.

Google Glass is a wearable nanocomputer with an optical head-mounted display (OHMD). It was developed by Google with the mission of producing a mass-market ubiquitous nanocomputer. Google Glass displays information in a smartphone-like, hands-free format, and wearers communicate with the Internet via natural-language voice commands.

Source: http://www.google.com
AND
http://en.wikipedia.org/

Futuristic Vision Of Fashion

In the September issue of Vogue, the magazine for rich and beautiful people, 12 pages are dedicated to Google Glass, the high-tech spectacles. The article, titled “The Final Frontier,” offers “a futuristic vision of fashion,” and free advertising for Google.
Vogue September 2013
In the opening pages, model Raquel Zimmermann wears a pair of the charcoal-colored $1,500 glasses.

The features of the glasses are impressive, and all of them sit in the right earpiece attached to the frame: a nanocomputer with memory and a processor, a camera, a speaker and microphone, Bluetooth and Wi-Fi antennas, an accelerometer, gyroscope, compass and a battery. All inside the earpiece. Of course, the final objective is that Glass will eventually have a cellular radio with its own online capability; for now, it hooks up wirelessly with your phone for an online connection.

Google Glass is catching eyes in the fashion world. At last year’s New York Fashion Week in September, models at the Diane von Furstenberg event walked the catwalk with the glasses on.
There are many different companies around the world competing with Google to produce similar glasses, among them the US companies Microsoft and Apple. We label these devices with the generic appellation quantglass, as each one carries a very different trademark.

Source: http://www.quantglass.com
AND
http://en.wikipedia.org/wiki/QuantGlass
AND
http://vogue.com

Better Than GOOGLE!

Aurora Clark, an associate professor of chemistry at Washington State University, has adapted Google’s PageRank software to create moleculaRnetworks, which scientists can use to determine molecular shapes and chemical reactions without the expense, logistics and occasional danger of lab experiments. “What’s most cool about this work is we can take technology from a totally separate realm of science, computer science, and apply it to understanding our natural world,” says Clark.
What Aurora Clark probably did not know is that the algorithm used so successfully by Google’s founders is based largely on a free formula developed by an Italian professor of mathematics from the University of Padua. Now Professor Massimo Marchiori has opened a new search engine on the web, with specific features intended to surpass the accuracy of Google’s search engine. At the moment, the new search engine’s address is www.volunia.com.

Google’s PageRank software, developed by its founders at Stanford University, uses an algorithm—a set of mathematical formulas—to measure and prioritize the relevance of various Web pages to a user’s search.
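
A minimal sketch of the PageRank idea, not Google’s production code or Clark’s moleculaRnetworks: a page’s score is the long-run weight of a random surfer who mostly follows links and occasionally jumps to a random page, computed here by simple power iteration on a toy four-page web.

```python
import numpy as np

# Minimal PageRank by power iteration (illustrative only).

def pagerank(adjacency, damping=0.85, iterations=100):
    """adjacency[i, j] = 1 if page i links to page j."""
    n = adjacency.shape[0]
    # Column-stochastic transitions: each page splits its vote among its out-links.
    out_degree = adjacency.sum(axis=1, keepdims=True)
    out_degree[out_degree == 0] = 1                     # avoid division by zero for dead ends
    transition = (adjacency / out_degree).T
    rank = np.full(n, 1.0 / n)
    for _ in range(iterations):
        rank = (1 - damping) / n + damping * transition @ rank
    return rank / rank.sum()

# Four toy pages: 0 and 1 link to each other, 2 and 3 both point at 0.
links = np.array([[0, 1, 0, 0],
                  [1, 0, 0, 0],
                  [1, 0, 0, 0],
                  [1, 0, 0, 0]])
print(pagerank(links).round(3))   # page 0 collects the most rank
```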

Google Glass Project Announces The Nanocomputer Era

If you venture into a coffee shop in the coming months and see someone with a pair of futuristic glasses that look like a prop from Star Trek, don’t worry. It’s probably just a Google employee testing the company’s new augmented-reality glasses. Rather than a full pair of spectacles, Glass looks like only the headband of a pair of glasses, the part that hooks on your ears and lies along your eyebrow line, with a small, transparent block positioned above and to the right of your right eye. That, of course, is a screen, and Google Glass is actually a fairly full-blown computer.


Or maybe it is more like a smartphone that you never have to take out of your pocket. Inside the right earpiece, that is, the horizontal support that goes over your ear, Google has packed memory, a processor, a camera, a speaker and microphone, Bluetooth and Wi-Fi antennas, an accelerometer, gyroscope, compass and a battery: a step toward the nanocomputer, all inside the earpiece. Google has said that eventually Glass will have a cellular radio, so it can get online by itself; at this point, it hooks up wirelessly with your phone for an online connection.

The tiny screen is completely invisible when you’re talking or driving or reading; you just forget about it completely. There’s nothing at all between your eyes and whatever, or whomever, you’re looking at. And yet when you do focus on the screen, shifting your gaze up and to the right, that tiny half-inch display is surprisingly immersive. It’s as though you’re looking at a big laptop screen.
Have a look at competitors’ (Apple, Microsoft, DARPA) similar projects at www.quantglass.com.

Google’s Vision Of Augmented Reality

Google is the latest company to declare its interest in augmented reality. Apple, with its nanocomputer iLens a year ago, Microsoft more recently, and the American Army (DARPA), secretly, have all been working hard in this specific research field. In a post shared on Google Plus, employees in the Google laboratory known as Google X, including Babak Parviz, Steve Lee and Sebastian Thrun, asked people for input about the prototype of Project Glass.


“We’re sharing this information now because we want to start a conversation and learn from your valuable input,” the three employees wrote. “Please follow along as we share some of our ideas and stories. We’d love to hear yours, too. What would you like to see from Project Glass?”

See more on DARPA, Apple and Microsoft projects:
http://www.nanocomputer.com/?p=1703
http://www.nanocomputer.com/?p=1512
http://www.nanocomputer.com/?page_id=563