Posts belonging to Category Artificial Intelligence



Artificial Synapse For “Brain-on-a-Chip”

When it comes to processing power, the human brain just can’t be beat. Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses — the spaces between neurons, across which neurotransmitters are exchanged. There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, at lightning speeds.

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.
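In rough terms, the difference can be sketched in a few lines of code. This is a toy illustration, not the MIT design: a digital element relays a hard 0/1 decision, while an analog "synapse" passes a graded, weighted signal.

```python
import numpy as np

# Digital logic: each element relays a hard on/off (0/1) decision.
def digital_neuron(inputs):
    return 1 if sum(inputs) >= len(inputs) / 2 else 0

# Neuromorphic/analog style: each connection carries a graded "weight",
# and the output is a continuous activation level, loosely like ion
# flow across a biological synapse.
def analog_neuron(inputs, weights):
    return np.tanh(np.dot(inputs, weights))

signals = np.array([0.2, 0.9, 0.5])
weights = np.array([0.1, 0.8, -0.3])
print(digital_neuron(signals > 0.5))    # hard output: 0 or 1
print(analog_neuron(signals, weights))  # graded output, e.g. ~0.53
```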

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.
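MIT's simulation code is not public, but the idea can be loosely illustrated. The sketch below (all data and the 16-level assumption invented for illustration) trains an ordinary linear classifier on synthetic stand-in "handwriting" data, then snaps its weights to a small set of discrete conductance levels, the kind of constraint a hardware synapse imposes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for handwriting data: 100 samples, 64 "pixels", 10 classes.
X = rng.normal(size=(100, 64))
y = rng.integers(0, 10, size=100)

# Train an ordinary linear classifier (least squares vs. one-hot labels).
T = np.eye(10)[y]
W, *_ = np.linalg.lstsq(X, T, rcond=None)

# "Program" the weights onto hypothetical synapses that support only
# 16 evenly spaced conductance levels between the min and max weight.
levels = np.linspace(W.min(), W.max(), 16)
W_chip = levels[np.abs(W[..., None] - levels).argmin(axis=-1)]

acc = (np.argmax(X @ W_chip, axis=1) == y).mean()
print(f"accuracy with quantized synaptic weights: {acc:.2f}")
```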

The design, published today in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

Source: http://news.mit.edu/

AI Improves Heart Disease Diagnosis

Researchers from the University of Oxford are using artificial intelligence (AI) to improve diagnostic accuracy for heart disease. The team hope to roll out the system across the NHS later this year, helping to improve patient outcomes and saving millions of pounds by reducing misdiagnoses. The research, led by Prof Paul Leeson and RDM DPhil student Ross Upton (Cardiovascular Clinical Research Facility), took place in the Oxford University Hospitals Foundation Trust and is the basis of spin-out company Ultromics.


Thousands of people every year have an echocardiogram – a type of heart scan – after visiting hospital suffering chest pain. Clinicians currently assess these scans by eye, taking into account many features that could indicate whether someone has heart disease and whether they are likely to go on to have a heart attack. But even the best-trained cardiologists can misdiagnose patients. Currently, 1 in 5 scans is misdiagnosed – the equivalent of 12,000 patients a year, implying around 60,000 scans are assessed annually. This means that people are either not being treated to prevent a heart attack, or they are undergoing unnecessary operations to stave off a heart attack they would never have had.

The new system uses machine learning – a form of artificial intelligence – to tap into the rich information provided in an echocardiogram. Using the new system, AI can detect 80,000 subtle changes invisible to the naked eye, improving the accuracy of diagnosis to 90%. The machine learning system was trained using scans from previous patients, alongside data about whether they went on to have a heart attack. The team hope that the improved diagnostic accuracy will not only improve patient care and outcomes, but save the NHS £300 million a year in avoidable operations and treatment. So far the system has been trialled in six cardiology units in the UK. Further implementation of the technology is now being led by Ultromics – a spin-out company co-founded by Ross Upton and Paul Leeson (Cardiovascular Clinical Research Facility). The software will be made available for free throughout the NHS later this year.
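The Ultromics system itself is proprietary, but the general recipe the article describes – supervised learning on features extracted from past scans, labelled by whether the patient went on to have a heart attack – can be sketched as follows. The data is synthetic, the feature counts are illustrative, and scikit-learn stands in for whatever the team actually uses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Hypothetical stand-in for the image-derived echo features:
# 500 past patients x 200 numeric features each.
X = rng.normal(size=(500, 200))
# Label: did the patient go on to have a heart attack? (synthetic)
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train on historical scans plus known outcomes, as the article describes.
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Evaluate on held-out patients.
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```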

Source: https://www.rdm.ox.ac.uk/

Memristors Retain Data 10 Years Without Power

The internet of things (IoT) is coming, that much we know. But it won’t arrive until we have components and chips that can handle the explosion of data it brings. By 2020, there will already be 50 billion industrial internet sensors in place all around us. A single autonomous device – a smartwatch, a cleaning robot, or a driverless car – can produce gigabytes of data each day, while an Airbus may have over 10,000 sensors in one wing alone.

Two hurdles need to be overcome. First, current transistors in computer chips must be miniaturized to the size of only a few nanometres – the problem is that they stop working reliably at that scale. Second, analysing and storing unprecedented amounts of data will require equally huge amounts of energy. Sayani Majumdar, Academy Fellow at Aalto University (Finland), along with her colleagues, is designing technology to tackle both issues.

Together with her colleagues, Majumdar has designed and fabricated the basic building blocks of future components for what are called “neuromorphic” computers, inspired by the human brain. It is a field of research in which the world’s largest ICT companies, and the EU as well, are investing heavily. Still, no one has yet come up with a nano-scale hardware architecture that could be scaled up to industrial manufacture and use.

The probe-station device (the full instrument, left, and a closer view of the device connection, right), which measures the electrical responses of the basic components for computers mimicking the human brain. The tunnel junctions are on a thin film on the substrate plate.

“The technology and design of neuromorphic computing is advancing more rapidly than its rival revolution, quantum computing. There is already wide speculation both in academia and company R&D about ways to inscribe heavy computing capabilities in the hardware of smartphones, tablets and laptops. The key is to achieve the extreme energy-efficiency of a biological brain and mimic the way neural networks process information through electric impulses,” explains Majumdar.

In their recent article in Advanced Functional Materials, Majumdar and her team show how they have fabricated a new breed of “ferroelectric tunnel junctions”, that is, few-nanometre-thick ferroelectric thin films sandwiched between two electrodes. They have abilities beyond existing technologies and bode well for energy-efficient and stable neuromorphic computing.

The junctions work at low voltages of less than five volts and with a variety of electrode materials – including the silicon used in chips in most of our electronics. They can also retain data for more than 10 years without power and be manufactured under normal conditions.

Until now, tunnel junctions have mostly been made of metal oxides, which require temperatures of 700 degrees Celsius and high vacuums to manufacture. Conventional ferroelectric materials also contain lead, which makes them – and all our computers – a serious environmental hazard.

“Our junctions are made out of organic hydrocarbon materials and they would reduce the amount of toxic heavy-metal waste in electronics. We can also make thousands of junctions a day at room temperature without them suffering from the water or oxygen in the air,” explains Majumdar.

What makes ferroelectric thin-film components great for neuromorphic computers is their ability to switch not only between binary states – 0 and 1 – but between a large number of intermediate states as well. This allows them to ‘memorise’ information much like the brain does: to store it for a long time with minute amounts of energy and to retain the information they have once received – even after being switched off and on again.
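A toy model of that behaviour (illustrative only, not Aalto's device physics) would be a junction with many stable states that a voltage pulse nudges up or down, and whose stored state survives power-off:

```python
class FerroelectricJunction:
    """Toy model (not the Aalto device): a junction with many stable
    conductance states that persist without power."""

    def __init__(self, n_states=32):
        self.n_states = n_states
        self.state = 0  # retained even when "powered off"

    def pulse(self, polarity):
        # A voltage pulse nudges the polarization one state up or down;
        # intermediate states are stable, unlike a binary 0/1 cell.
        self.state = min(max(self.state + polarity, 0), self.n_states - 1)

    def conductance(self):
        # Read-out: conductance varies smoothly with the stored state.
        return self.state / (self.n_states - 1)

j = FerroelectricJunction()
for _ in range(10):
    j.pulse(+1)             # "potentiate" the synapse ten times
print(j.conductance())      # 10/31 ~ 0.32, retained with no power drawn
```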

“We are no longer talking of transistors, but ‘memristors’. They are ideal for computation similar to that in biological brains. Take for example the Mars 2020 Rover, about to go chart the composition of another planet. For the Rover to work and process data on its own, using only a single solar panel as an energy source, the unsupervised algorithms in it will need to use an artificial brain in the hardware.

“What we are striving for now is to integrate millions of our tunnel-junction memristors into a network on a one-square-centimetre area. We can expect to pack so many into such a small space because we have now achieved a record-high difference in current between the on and off states of the junctions, and that provides functional stability. The memristors could then perform complex tasks like image and pattern recognition and make decisions autonomously,” says Majumdar.
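Why pack junctions into a dense network? Because a crossbar of memristors computes a weighted sum – the core operation of pattern recognition – in a single physical step. A minimal numerical sketch of that idea:

```python
import numpy as np

# In a memristor crossbar, input voltages V applied to the rows and the
# programmed conductances G yield output currents I = V @ G on the
# columns (Ohm's law plus Kirchhoff's current law) in one step.
V = np.array([0.1, 0.0, 0.2])          # input voltages (one per row)
G = np.array([[0.9, 0.1],              # conductances, one per junction
              [0.2, 0.7],
              [0.4, 0.5]])
I = V @ G                              # column currents = weighted sums
print(I)                               # [0.17 0.11], computed "in place"
```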

Source: http://www.aalto.fi/

AI Machine Beats Champion Chess Program


AlphaZero, the game-playing AI created by Google sibling DeepMind, has beaten the world’s best chess-playing computer program, having taught itself how to play in under four hours. The repurposed AI, which as AlphaGo has repeatedly beaten the world’s best Go players, has been generalised so that it can now learn other games. Given only the rules of chess, it took just four hours of self-play to reach the level at which it beat the world-champion chess program, Stockfish 8, in a 100-game matchup. AlphaZero won or drew all 100 games, according to a non-peer-reviewed research paper published on Cornell University Library’s arXiv.


“Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi [a similar Japanese board game] as well as Go, and convincingly defeated a world-champion program in each case,” said the paper’s authors, who include DeepMind founder Demis Hassabis, a child chess prodigy who reached master standard at the age of 13.

“It’s a remarkable achievement, even if we should have expected it after AlphaGo,” former world chess champion Garry Kasparov told Chess.com. “We have always assumed that chess required too much empirical knowledge for a machine to play so well from scratch, with no human knowledge added at all.”

Computer programs have been able to beat the best human chess players ever since IBM’s Deep Blue supercomputer defeated Kasparov on 12 May 1997. DeepMind said the difference between AlphaZero and its competitors is that its machine-learning approach is given no human input apart from the basic rules of chess. The rest it works out by playing itself over and over with self-reinforced knowledge. The result, according to DeepMind, is that AlphaZero took an “arguably more human-like approach” to the search for moves, processing around 80,000 positions per second in chess compared with Stockfish 8’s 70 million.
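DeepMind describes the approach only at a high level in the paper; the sketch below is not their code, just a compact illustration of AlphaZero-style (PUCT) move selection, in which the policy network's prior probability steers the search toward promising moves so that far fewer positions need examining than in brute-force search.

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child maximizing Q + U, as in AlphaZero-style search.
    Each child is a dict with visit count N, total value W, and prior P
    (the policy network's probability for that move). The dict layout
    is hypothetical, for illustration only."""
    total_n = sum(ch["N"] for ch in children)
    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] else 0.0        # exploitation
        u = c_puct * ch["P"] * math.sqrt(total_n + 1) / (1 + ch["N"])
        return q + u                                     # + exploration
    return max(children, key=score)

# A toy position with three candidate moves. The prior P focuses the
# search on promising moves, which is why AlphaZero can examine ~80,000
# positions per second rather than Stockfish's ~70 million.
children = [
    {"move": "e4", "N": 10, "W": 6.0, "P": 0.60},
    {"move": "d4", "N": 5,  "W": 2.0, "P": 0.30},
    {"move": "h4", "N": 1,  "W": 0.1, "P": 0.10},
]
print(puct_select(children)["move"])  # -> e4
```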

After winning 25 games of chess against Stockfish 8 as white (with the first-mover advantage), winning a further three as black, and drawing the remaining 72, AlphaZero also learned shogi in two hours before beating the leading program Elmo in a 100-game matchup, winning 90 games, losing eight and drawing two. The new generalised AlphaZero was also able to beat the “superhuman” former version of itself, AlphaGo, at the Chinese game of Go after only eight hours of self-training, winning 60 games and losing 40.

While experts said the results are impressive, with potential across a wide range of applications to complement human knowledge, Professor Joanna Bryson, a computer scientist and AI researcher at the University of Bath, warned that it was “still a discrete task”.

Source: https://www.theguardian.com/

Budweiser Orders 40 Tesla Electric Trucks

The list of companies placing orders for Tesla Semi electric trucks keeps growing weeks after the unveiling event last month. Now Anheuser-Busch, the brewer behind Budweiser, announced that it ordered 40 Tesla Semi trucks. Last week, DHL confirmed an order of 10 trucks – bringing the tally to just over 200 Tesla Semi trucks. The brewer says that it will include the electric trucks in its distribution network as part of its commitment to reduce its operational carbon footprint by 30 percent by 2025. Considering the size of their distribution network, they say that it would be the equivalent of removing nearly 500,000 cars from the road globally each year.

“At Anheuser-Busch, we are constantly seeking new ways to make our supply chain more sustainable, efficient, and innovative. This investment in Tesla semi-trucks helps us achieve these goals while improving road safety and lowering our environmental impact,” commented James Sembrot, Senior Director of Logistics Strategy.

Tesla Semi is actually only one part of Anheuser-Busch’s effort to modernize its fleet. The brewer has also confirmed orders for Nikola Motors’ battery/fuel-cell hydrogen trucks and for Uber’s Otto autonomous trucks.

Last year, Uber’s Otto completed its first shipment by self-driving truck with an autonomous beer run with Budweiser.

Source: https://electrek.co/

Copycat Robot

Introducing T-HR3, Toyota’s third-generation humanoid robot, designed to explore how clever joints can deliver better balance and true remote control. Toyota says its 29 joints allow it to copy the most complex of moves – safely bringing friendly, helpful robots one step closer.



“Humanoid robots are very popular among Japanese people… creating one like this has always been our dream and that’s why we pursued it,” says Akifumi Tamaoki, manager of Toyota’s Partner Robot division.

The robot is controlled by a remote operator sitting in an exoskeleton, mirroring its master’s moves, with a headset giving the operator a real-time robot’s-eye view.
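Toyota has not published T-HR3’s control software; purely as a sketch of the master-slave idea, the snippet below copies hypothetical exoskeleton joint angles to the robot, clamped to assumed safe limits. All function names and limits are invented for illustration.

```python
# Minimal master-slave mirroring sketch (illustrative; not Toyota's
# T-HR3 stack). Hypothetical interfaces stand in for the exoskeleton
# and robot hardware.

JOINT_LIMITS = {"elbow": (0.0, 2.6), "shoulder": (-1.5, 1.5)}  # radians

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def mirror_step(read_master_angles, command_robot):
    # Copy each exoskeleton joint angle to the robot, clamped to the
    # robot's safe range so the operator cannot drive it past its limits.
    angles = read_master_angles()
    safe = {j: clamp(a, *JOINT_LIMITS[j]) for j, a in angles.items()}
    command_robot(safe)

mirror_step(lambda: {"elbow": 2.9, "shoulder": 0.4},
            lambda cmd: print("commanding:", cmd))
# commanding: {'elbow': 2.6, 'shoulder': 0.4}  (elbow clamped)
```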

“We’re primarily focused on making this robot a very family-oriented one, so that it can help people, including providing services such as care,” explains Tamaoki.
Toyota said T-HR3 could help around homes, medical facilities or construction sites in Japan – a humanoid helping hand designed for a population ageing faster than anywhere else on Earth.

Source: http://toyota.com/

Artificial Intelligence Chip Analyzes Molecular-level Data In Real Time

Nano Global, an Austin-based molecular data company, today announced that it is developing a chip using intellectual property (IP) from Arm, the world’s leading semiconductor IP company. The technology will help redefine how global health challenges – from superbugs and infectious diseases to cancer – are conquered.

The pioneering system-on-chip (SoC) will yield highly secure molecular data that can be used in the recognition and analysis of health threats caused by pathogens and other living organisms. Combined with the company’s scientific technology platform, the chip leverages advances in nanotechnology, optics, artificial intelligence (AI), blockchain authentication, and edge computing to access and analyze molecular-level data in real time.
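The announcement does not detail the blockchain-authentication scheme. As a minimal sketch of the general idea, the snippet below chains molecular-data records together by hash, so tampering with any earlier record invalidates everything after it. The record fields are invented for illustration.

```python
import hashlib
import json

def chain_records(records):
    """Link each molecular-data record to the previous one by hash.
    A minimal stand-in for the blockchain authentication mentioned in
    the announcement; the real scheme is not public."""
    chained, prev_hash = [], "0" * 64
    for rec in records:
        payload = json.dumps({"data": rec, "prev": prev_hash},
                             sort_keys=True)
        block_hash = hashlib.sha256(payload.encode()).hexdigest()
        chained.append({"data": rec, "prev": prev_hash, "hash": block_hash})
        prev_hash = block_hash
    return chained

readings = [{"pathogen": "E. coli", "count": 130},
            {"pathogen": "S. aureus", "count": 12}]
for block in chain_records(readings):
    print(block["hash"][:16], block["data"])
# Altering any earlier record changes every later hash, exposing tampering.
```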

In partnership with Arm, we’re tackling the vast frontier of molecular data to unlock the unlimited potential of this universe,” said Steve Papermaster, Chairman and CEO of Nano Global. “The data our technology can acquire and process will enable us to create a safer and healthier world.”

We believe the technology Nano Global is delivering will be an important step forward in the collective pursuit of care that improves lives through the application of technology,” explained Rene Haas, executive vice president and president of IPG, Arm. “By collaborating with Nano Global, Arm is taking an active role in developing and deploying the technologies that will move us one step closer to solving complex health challenges.”

Additionally, Nano Global will be partnering with several leading institutions, including Baylor College of Medicine and National University of Singapore, on broad research initiatives in clinical, laboratory, and population health environments to accelerate data collection, analysis, and product development.
The initial development of the chip is in progress, with first delivery expected by 2020. The company is already adding new partners to its platform.

Source: https://nanoglobal.com/ and www.prnewswire.com

AI Could Be The “Worst Event In The History Of Our Civilisation”, Says Stephen Hawking

Stephen Hawking has sent a stark warning out to the world, stating that the invention of artificial intelligence (AI) could be the “worst event in the history of our civilisation”. Speaking at the Web Summit technology conference in Lisbon, Portugal, the theoretical physicist reiterated his warning against the rise of powerful, conscious machines.
While Prof Hawking admitted that AI could be used for good, he also stated that humans need to find a way to control it so that it does not become more powerful than us, as “computers can, in theory, emulate human intelligence, and exceed it.” Looking at the positives, the 75-year-old said AI could help undo some of the damage that humans have inflicted on the natural world, help beat disease and “transform” every aspect of society. But there are negatives that come with it.

“Success in creating effective AI could be the biggest event in the history of our civilisation. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it. Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilisation. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy,” explains the University of Cambridge alumnus.

Prof Hawking added that to make sure AI is in line with our goals, creators need to “employ best practice and effective management.” But he still has hope: “I am an optimist and I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance.”

Just last week, Prof Hawking warned that AI will replace us as the dominant being on the planet.

Source: http://www.express.co.uk/

Sophia The Robot Says: ‘I have feelings too’

Until recently, the most famous thing that Sophia the robot had ever done was beat Jimmy Fallon a little too easily in a nationally televised game of rock-paper-scissors.


But now, the advanced artificial intelligence robot — which looks like Audrey Hepburn, mimics human expressions and may be the grandmother of robots that solve the world’s most complex problems — has a new feather in her cap:

Citizenship.

The Kingdom of Saudi Arabia officially granted citizenship to the humanoid robot last week during a program at the Future Investment Initiative, a summit that links deep-pocketed Saudis with inventors hoping to shape the future.

Sophia’s recognition made international headlines — and sparked an outcry against a country with a shoddy human rights record that has been accused of making women second-class citizens.

Source: https://www.washingtonpost.com/

AI-controlled Greenhouse Uses 90 Percent Less Water To Produce Salads

Californian startup Iron Ox runs an indoor farm complete with a few hundred plants – and two robot farmers. Rather than using technology to grow genetically modified food, a former Google engineer partnered with a friend holding a PhD in robotics to open a technology-based farm where they plant, seed, and grow heads of lettuce.


Iron Ox’s goal is to provide quality produce to everyone without a premium price. According to Natural Society, the average head of lettuce travels 2,055 miles from farm to market, which is why fresh lettuce is often so expensive. Currently, Iron Ox only supplies restaurants and grocery stores in California’s Bay Area, so after each daily harvest its produce is hours fresh rather than shipped in. The company aims to open greenhouses near other major cities, guaranteeing same-day delivery from its trucks at a fraction of the price of the current supply chain.

So why the robots? Lettuce has always been a testing ground for farming innovation, from early greenhouses to closed aquaponic ecosystems. According to Iron Ox, their AI-controlled greenhouse uses 90 percent less water than traditional farms, and because of the technology, each head of lettuce receives intimate individualized attention that is not realistic with human labor. Iron Ox also says that because they grow their products indoors with no pesticides, they don’t have to worry about typical farming issues like stray animals eating their product.
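Iron Ox has not published its control software, but the water savings of per-plant, closed-loop irrigation are easy to sketch: read each plant’s own moisture level and water only when that plant needs it, rather than flood-irrigating a whole field on a schedule. The sensor and pump functions below are hypothetical stubs.

```python
import random

MOISTURE_TARGET = 0.35  # fraction of saturation; illustrative threshold

def read_moisture(plant_id):
    # Hypothetical sensor read; stubbed with a random value here.
    return random.uniform(0.1, 0.6)

def dispense_water(plant_id, millilitres):
    # Hypothetical actuator; a real system would drive a pump or valve.
    print(f"plant {plant_id}: dispensing {millilitres} ml")

def irrigation_step(plant_ids):
    # Closed-loop control: water each plant only when *its own* soil is
    # dry, which is where the large water savings come from.
    for pid in plant_ids:
        if read_moisture(pid) < MOISTURE_TARGET:
            dispense_water(pid, millilitres=50)

irrigation_step(["A1", "A2", "A3"])
```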

Iron Ox has yet to launch a fully functioning automated greenhouse, but hopes to build its first by the end of 2017. However, Iron Ox is not the only company experimenting with robot farming: Spread, a sustainable farming organization, broke ground in May on its first techno-farm, which will be fully automated and operated by robots growing lettuce. Spread plans to expand to the Middle East next and then continue growing.

Does this mean the future of produce is automation? Not exactly. Agriculture is a complex business, and not all produce can be greenhouse-grown as efficiently and effectively as lettuce. But it’s one more reason for farmers to be aware of how the robots are coming for us all.

Source: https://www.saveur.com/

The Ultra Smart Community Of The Future

Japan’s largest electronics show, CEATEC, is showcasing its version of our future: a connected world with intelligent robots and cars that know when the driver is falling asleep. This is Omron’s “Onboard Driving Monitoring Sensor”, which checks that its driver isn’t distracted.


“We are developing sensors that help the car judge what state the driver is in, with regard to driving. For example, if the driver has his eyes open and set on things he should be looking at, if the driver is distracted or looking at smartphones, and these types of situations,” explains Masaki Suwa, Omron Corp. Chief Technologist.

After 18 years of consumer electronics, CEATEC is changing focus to the Internet of Things and what it calls ‘the ultra-smart community of the future’ – a future where machines take on more important roles. Machines like Panasonic’s CaloRieco: pop in your plate and it knows exactly what you are about to consume.

“By placing freshly cooked food inside the machine, you can measure total calories and the three main nutrients: protein, fat and carbohydrate. By using this machine, you can easily manage your diet,” says Panasonic staff engineer Ryota Sato.
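Panasonic’s algorithm is not public, but converting the three measured macronutrients into a calorie total is straightforward with the standard Atwater factors (roughly 4 kcal per gram of protein or carbohydrate, 9 kcal per gram of fat). A sketch, with a made-up plate:

```python
# Standard Atwater factors: kcal per gram of each macronutrient.
KCAL_PER_GRAM = {"protein": 4, "carbohydrate": 4, "fat": 9}

def total_calories(grams):
    """Convert measured macronutrient masses (in grams) into calories."""
    return sum(grams[n] * KCAL_PER_GRAM[n] for n in KCAL_PER_GRAM)

# A hypothetical plate: 30 g protein, 60 g carbohydrate, 20 g fat.
print(total_calories({"protein": 30, "carbohydrate": 60, "fat": 20}))
# 30*4 + 60*4 + 20*9 = 540 kcal
```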

Even playtime will see machines more involved – like Forpheus, the ping-pong-playing robot, here taking on an Olympic bronze medalist and learning with every stroke.
Rio Olympics table tennis bronze medalist Jun Mizutani reports: “It wasn’t any different from playing with a human being. The robot kept improving and getting better as we played, and to be honest, I wanted to play with it when it had reached its maximum level, to see how good it is.”

Computer Reads Body Language

Researchers at Carnegie Mellon University‘s Robotics Institute have enabled a computer to understand body poses and movements of multiple people from video in real time — including, for the first time, the pose of each individual’s hands and fingers. This new method was developed with the help of the Panoptic Studio — a two-story dome embedded with 500 video cameras — and the insights gained from experiments in that facility now make it possible to detect the pose of a group of people using a single camera and a laptop computer.

Yaser Sheikh, associate professor of robotics, said these methods for tracking 2-D human form and motion open up new ways for people and machines to interact with each other and for people to use machines to better understand the world around them. The ability to recognize hand poses, for instance, will make it possible for people to interact with computers in new and more natural ways, such as communicating with computers simply by pointing at things.

Detecting the nuances of nonverbal communication between individuals will allow robots to serve in social spaces, allowing robots to perceive what people around them are doing, what moods they are in and whether they can be interrupted. A self-driving car could get an early warning that a pedestrian is about to step into the street by monitoring body language. Enabling machines to understand human behavior also could enable new approaches to behavioral diagnosis and rehabilitation, for conditions such as autism, dyslexia and depression.
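As a sketch of that pedestrian early-warning idea (illustrative only, not CMU’s method), the snippet below compares hip-midpoint motion between two video frames against the direction of the road. The keypoint names are a hypothetical layout for whatever pose detector supplies them.

```python
import numpy as np

def stepping_toward_road(keypoints_t0, keypoints_t1, road_direction):
    """Crude early-warning heuristic: compare hip-midpoint motion
    between two frames with the unit vector pointing toward the road.
    Keypoints are dicts of 2-D pixel coordinates using a hypothetical
    naming scheme."""
    hips_t0 = (np.array(keypoints_t0["left_hip"]) +
               np.array(keypoints_t0["right_hip"])) / 2
    hips_t1 = (np.array(keypoints_t1["left_hip"]) +
               np.array(keypoints_t1["right_hip"])) / 2
    velocity = hips_t1 - hips_t0
    # Flag if motion toward the road exceeds a threshold (px/frame).
    return float(velocity @ np.asarray(road_direction)) > 2.0

frame0 = {"left_hip": (310, 420), "right_hip": (330, 421)}
frame1 = {"left_hip": (318, 420), "right_hip": (338, 421)}
print(stepping_toward_road(frame0, frame1, road_direction=(1.0, 0.0)))
# -> True: the pedestrian is moving toward the roadway
```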


“We communicate almost as much with the movement of our bodies as we do with our voice,” Sheikh said. “But computers are more or less blind to it.”

In sports analytics, real-time pose detection will make it possible for computers to track not only the position of each player on the field of play, as is now the case, but to know what players are doing with their arms, legs and heads at each point in time. The methods can be used for live events or applied to existing videos.

To encourage more research and applications, the researchers have released their computer code for both multi-person and hand pose estimation. It is being widely used by research groups, and more than 20 commercial groups, including automotive companies, have expressed interest in licensing the technology, Sheikh said.

Sheikh and his colleagues presented reports on their multi-person and hand-pose detection methods at CVPR 2017, the Computer Vision and Pattern Recognition conference, in Honolulu.

Source: https://www.cmu.edu/