Posts belonging to Category algorithm

Not just speed: 7 incredible things you can do with 5G

You can’t walk around Mobile World Congress without 5G slapping you in the face. If there’s a phenomenon that’s dominated this week’s trade show besides the return of a 17-year-old phone, it’s the reality that the next generation of cellular technology has arrived. Well, at least it’s real in the confines of the Fira Gran Via convention center in Barcelona.

Above the Qualcomm booth flashed the slogan: “5G: From the company that brought you 3G and 4G.” If you took a few more steps, you could hear an Intel representative shout about the benefits of 5G. If you hopped over to Ericsson, you’d find a “5G avenue” with multiple exhibits demonstrating the benefits of the technology. Samsung kicked off its press conference not with its new tablets, but with a chat about 5G.

Remote surgery via a special glove, virtual reality and 5G


The hype around 5G has been brewing for more than a year, but we’re finally starting to see the early research and development bear fruit. The technology promises to change our lives by connecting everything around us to a network that is 100 times faster than our cellular connection and 10 times faster than our speediest home broadband service.

But it’s not just about speed for speed’s sake. While the move from 3G to 4G LTE was about faster connections, the evolution to 5G is so much more. The combination of speed, responsiveness and reach could unlock the full capabilities of other hot trends in technology, offering a boost to self-driving cars, drones, virtual reality and the internet of things. “If you just think of speed, you don’t see the magic of all it can do,” said Jefferson Wang, who follows the mobile industry for IBB Consulting.

The bad news: 5G is still a while away for consumers, and the industry is still fighting over the nitty-gritty details of the technology itself. The good news: There’s a chance it’s coming sooner than we thought. It’s clear why the wireless carriers are eager to move to 5G. With the core phone business slowing down, companies are eager for new tech to spark excitement and connect more devices. “We are absolutely convinced that 5G is the next revolution,” Tim Baxter, president of Samsung’s US unit, said during a press conference.


Artificial Synapse For “Brain-on-a-Chip”

When it comes to processing power, the human brain just can’t be beat. Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses — the spaces between neurons, across which neurotransmitters are exchanged. There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, at lightning speeds.

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.
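The binary/analog distinction is easy to picture in code. This toy sketch (not a model of any actual chip) contrasts a connection that is simply on or off with a synapse that passes a graded weight:

```python
# Toy contrast between on/off connections and graded synaptic weights.
# Not a model of any real chip.

def binary_neuron(inputs, gates):
    """Digital-style: each connection is fully on (1) or off (0)."""
    return sum(x for x, on in zip(inputs, gates) if on)

def analog_neuron(inputs, weights):
    """Neuromorphic-style: each synapse passes a graded fraction of
    its input, like ions flowing across a biological synapse."""
    return sum(x * w for x, w in zip(inputs, weights))

signals = [1.0, 1.0, 1.0]
print(binary_neuron(signals, [1, 0, 1]))        # 2.0 -- all or nothing
print(analog_neuron(signals, [0.9, 0.1, 0.4]))  # graded contributions
```

The graded weights are what let a network of such synapses strengthen some connections and prune others by degrees rather than flipping switches.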

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.

The design, published today in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.


AI Improves Heart Disease Diagnosis

Researchers from the University of Oxford are using artificial intelligence (AI) to improve diagnostic accuracy for heart disease. The team hope to roll out the system across the NHS later this year, helping to improve patient outcomes and saving millions currently lost to misdiagnosis. The research, led by Prof Paul Leeson and RDM DPhil student Ross Upton (Cardiovascular Clinical Research Facility), took place in the Oxford University Hospitals Foundation Trust and is the basis of spin-out company Ultromics.


Thousands of people every year have an echocardiogram – a type of heart scan – after visiting hospital suffering from chest pain. Clinicians currently assess these scans by eye, taking into account many features that could indicate whether someone has heart disease and whether they are likely to go on to have a heart attack. But even the most well-trained cardiologists can misdiagnose patients. Currently, 1 in 5 scans is misdiagnosed each year – the equivalent of 12,000 patients. This means that people are either not being treated to prevent a heart attack, or they are undergoing unnecessary operations to stave off a heart attack they won’t have.

The new system uses machine learning – a form of artificial intelligence – to tap into the rich information provided in an echocardiogram. Using the new system, AI can detect 80,000 subtle changes invisible to the naked eye, improving the accuracy of diagnosis to 90%. The machine learning system was trained using scans from previous patients, alongside data about whether they went on to have a heart attack. The team hope that the improved diagnostic accuracy will not only improve patient care and outcomes, but save the NHS £300 million a year in avoidable operations and treatment. So far the system has been trialled in six cardiology units in the UK. Further implementation of the technology is now being led by Ultromics – a spin-out company co-founded by Ross Upton and Paul Leeson (Cardiovascular Clinical Research Facility). The software will be made available for free throughout the NHS later this year.
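The Ultromics pipeline itself is proprietary, but the core idea – train a classifier on features extracted from past scans, labelled by whether the patient later had a heart attack – can be sketched in a few lines. Everything below (the features, the data, the model) is illustrative and not the Oxford system:

```python
import math
import random

# Toy sketch of outcome-labelled training -- NOT the Ultromics system.
# Each past scan is reduced to numeric features and labelled 1 if the
# patient went on to have a heart attack, 0 otherwise.
random.seed(0)

def make_scan(had_event):
    # hypothetical scan features, separable by construction
    base = 2.0 if had_event else -2.0
    return [base + random.gauss(0, 1), random.gauss(0, 1)], had_event

data = [make_scan(i % 2 == 0) for i in range(200)]

def predict(w, b, x):
    """Probability that the patient goes on to have a heart attack."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# plain logistic regression fitted by stochastic gradient descent
w, b = [0.0, 0.0], 0.0
for _ in range(300):
    for x, y in data:
        err = predict(w, b, x) - (1.0 if y else 0.0)
        w = [wi - 0.1 * err * xi for wi, xi in zip(w, x)]
        b -= 0.1 * err

acc = sum((predict(w, b, x) > 0.5) == y for x, y in data) / len(data)
print(f"training accuracy: {acc:.0%}")
```

A real system would extract far richer features from the scan itself, but the training signal – past scans paired with eventual outcomes – is the same shape.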


Memristors Retain Data 10 Years Without Power

The internet of things (IoT) is coming, that much we know. But it won’t arrive until we have components and chips that can handle the explosion of data it brings. By 2020, there will already be 50 billion industrial internet sensors in place all around us. A single autonomous device – a smart watch, a cleaning robot, or a driverless car – can produce gigabytes of data each day, while an Airbus may have over 10,000 sensors in one wing alone.

Two hurdles need to be overcome. First, current transistors in computer chips must be miniaturized to a size of only a few nanometres – the problem is that they stop working reliably at that scale. Second, analysing and storing unprecedented amounts of data will require equally huge amounts of energy. Sayani Majumdar, Academy Fellow at Aalto University (Finland), along with her colleagues, is designing technology to tackle both issues.

Majumdar has, with her colleagues, designed and fabricated the basic building blocks of future components for what are called “neuromorphic” computers, inspired by the human brain. It’s a field of research in which the largest ICT companies in the world, and the EU, are investing heavily. Still, no one has yet come up with a nano-scale hardware architecture that could be scaled up to industrial manufacture and use.

The probe-station device (the full instrument, left, and a closer view of the device connection, right) which measures the electrical responses of the basic components for computers mimicking the human brain. The tunnel junctions are on a thin film on the substrate plate.

The technology and design of neuromorphic computing is advancing more rapidly than its rival revolution, quantum computing. There is already wide speculation both in academia and company R&D about ways to inscribe heavy computing capabilities in the hardware of smartphones, tablets and laptops. “The key is to achieve the extreme energy-efficiency of a biological brain and mimic the way neural networks process information through electric impulses,” explains Majumdar.

In their recent article in Advanced Functional Materials, Majumdar and her team show how they have fabricated a new breed of “ferroelectric tunnel junctions”, that is, few-nanometre-thick ferroelectric thin films sandwiched between two electrodes. They have abilities beyond existing technologies and bode well for energy-efficient and stable neuromorphic computing.

The junctions work at low voltages of less than five volts and with a variety of electrode materials – including the silicon used in chips in most of our electronics. They can also retain data for more than 10 years without power and be manufactured under normal conditions.

Tunnel junctions have up to this point mostly been made of metal oxides, which require temperatures of 700 degrees Celsius and high vacuums to manufacture. Ferroelectric materials also contain lead, which makes them – and all our computers – a serious environmental hazard.

“Our junctions are made out of organic hydrocarbon materials and they would reduce the amount of toxic heavy metal waste in electronics. We can also make thousands of junctions a day at room temperature without them suffering from the water or oxygen in the air,” explains Majumdar.

What makes ferroelectric thin film components great for neuromorphic computers is their ability to switch between not only binary states – 0 and 1 – but a large number of intermediate states as well. This allows them to ‘memorise’ information not unlike the brain: to store it for a long time with minute amounts of energy and to retain the information they have once received – even after being switched off and on again.

“We are no longer talking of transistors, but ‘memristors’. They are ideal for computation similar to that in biological brains. Take for example the Mars 2020 rover, about to go and chart the composition of another planet. For the rover to work and process data on its own, using only a single solar panel as an energy source, the unsupervised algorithms in it will need to use an artificial brain in the hardware.

“What we are striving for now is to integrate millions of our tunnel junction memristors into a network on a one-square-centimetre area. We can expect to pack so many into such a small space because we have now achieved a record-high difference in current between the on and off states of the junctions, which provides functional stability. The memristors could then perform complex tasks like image and pattern recognition and make decisions autonomously,” says Majumdar.
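The advantage of many intermediate states over two can be sketched numerically. In this toy example, a hypothetical junction that resolves 16 conductance levels stores a continuous synaptic ‘weight’ far more faithfully than a binary cell; the level count is purely illustrative:

```python
# Illustrative only: a cell that resolves N conductance levels in [0, 1].
def store(weight, levels):
    """Quantise a weight in [0, 1] to the nearest of `levels` states."""
    step = 1 / (levels - 1)
    return round(weight / step) * step

w = 0.62
print(store(w, 2))    # a binary cell collapses 0.62 to 1.0
print(store(w, 16))   # a 16-level cell keeps it near 0.6
```

The more intermediate states a junction can hold stably, the more brain-like weight it can memorise per cell, and the less energy is wasted shuttling bits between separate memory and logic.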


In 2025 Humanity Could Benefit From A Major New Source Of Clean Power

An international project to generate energy from nuclear fusion has reached a key milestone, with half of the infrastructure required now built. Bernard Bigot, the director-general of the International Thermonuclear Experimental Reactor (Iter), the main facility of which is based in southern France, said the completion of half of the project meant the effort was back on track, after a series of difficulties. This would mean that power could be produced from the experimental site from 2025.

Nuclear fusion occurs when two atoms combine to form a new atom and a neutron. The atoms are fired into a plasma, where extreme temperatures overcome their repulsion and force them together. The fusion releases about four times the energy produced when an atom is split in conventional nuclear fission.

The effort to bring nuclear fusion power closer to operation is backed by some of the world’s biggest developed and emerging economies, including the EU, the US, China, India, Japan, Korea and Russia. However, a review of the long-running project in 2013 found problems with its running and organisation. This led to the appointment of Bigot, and a reorganisation that subsequent reviews have broadly endorsed.

Fusion power is one of the most sought-after technological goals in the pursuit of clean energy. Nuclear fusion is the natural phenomenon that powers the sun, converting hydrogen into helium atoms through a process that occurs at extreme temperatures.

Replicating that process on earth at sufficient scale could unleash more energy than is likely to be needed by humanity, but the problem is creating the extreme conditions necessary for such reactions to occur, harnessing the resulting energy in a useful way, and controlling the reactions once they have been induced.

The Iter project aims to use hydrogen fusion, controlled by large superconducting magnets, to produce massive heat energy which would drive turbines – in a similar way to the coal-fired and gas-fired power stations of today – that would produce electricity. This would produce power free from carbon emissions, and potentially at low cost, if the technology can be made to work at a large scale.

For instance, according to Iter scientists, an amount of hydrogen the size of a pineapple could be used to produce as much energy as 10,000 tonnes of coal.
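That comparison roughly checks out on the back of an envelope. The figures below (deuterium-tritium fuel, a one-kilogram “pineapple”, typical coal) are assumptions made for the sake of the estimate, not Iter numbers:

```python
# Back-of-envelope check of the pineapple-vs-coal comparison.
# Assumes deuterium-tritium (D-T) fuel and ~1 kg of it ("a pineapple").
MEV_J = 1.602e-13                    # joules per MeV
AMU   = 1.661e-27                    # atomic mass unit, kg

e_per_reaction = 17.6 * MEV_J        # one D-T fusion releases ~17.6 MeV
fuel_per_reaction = 5 * AMU          # one deuteron (2u) + one triton (3u)

fusion_j_per_kg = e_per_reaction / fuel_per_reaction   # ~3.4e14 J/kg
coal_j_per_kg   = 29e6               # typical coal, ~29 MJ/kg

tonnes_of_coal = fusion_j_per_kg / coal_j_per_kg / 1000
print(f"1 kg of D-T fuel ~ {tonnes_of_coal:,.0f} tonnes of coal")
```

The estimate lands around ten thousand tonnes of coal per kilogram of fuel – the same order of magnitude as the Iter scientists’ claim.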


AI Machine Beats Champion Chess Program

AlphaZero, the game-playing AI created by Google sibling DeepMind, has beaten the world’s best chess-playing computer program, having taught itself how to play in under four hours. The repurposed AI, which has repeatedly beaten the world’s best Go players as AlphaGo, has been generalised so that it can now learn other games. It took just four hours to learn the rules of chess before beating the world champion chess program, Stockfish 8, in a 100-game matchup. AlphaZero won or drew all 100 games, according to a non-peer-reviewed research paper published on Cornell University Library’s arXiv.


“Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi [a similar Japanese board game] as well as Go, and convincingly defeated a world-champion program in each case,” said the paper’s authors, who include DeepMind founder Demis Hassabis, a child chess prodigy who reached master standard at the age of 13.

“It’s a remarkable achievement, even if we should have expected it after AlphaGo,” said former world chess champion Garry Kasparov. “We have always assumed that chess required too much empirical knowledge for a machine to play so well from scratch, with no human knowledge added at all.”

Computer programs have been able to beat the best human chess players ever since IBM’s Deep Blue supercomputer defeated Kasparov on 12 May 1997. DeepMind said the difference between AlphaZero and its competitors is that its machine-learning approach is given no human input apart from the basic rules of chess. The rest it works out by playing itself over and over with self-reinforced knowledge. The result, according to DeepMind, is that AlphaZero took an “arguably more human-like approach” to the search for moves, processing around 80,000 positions per second in chess compared with Stockfish 8’s 70 million.
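AlphaZero’s actual method pairs a learned neural network with Monte Carlo tree search, which is far beyond a short sketch. But the “nothing except the rules” idea can be illustrated with the toy game of Nim, where perfect play falls out of exploring the rules alone:

```python
from functools import lru_cache

# Toy illustration of "no domain knowledge except the rules": Nim with
# a single pile, where each turn you take 1-3 stones and taking the
# last stone wins. The search derives perfect play from the rules alone.
# (AlphaZero instead *learns* a network via self-play plus Monte Carlo
# tree search -- far beyond this sketch -- but the premise is the same:
# the program is given nothing but the rules.)

@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win."""
    return any(not wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """A move that leaves the opponent in a losing position, if any."""
    for take in (1, 2, 3):
        if take <= stones and not wins(stones - take):
            return take
    return 1  # no winning move exists against perfect play

print(best_move(5))  # takes 1, leaving 4 -- a lost position for the opponent
```

The program is never told that leaving the opponent a multiple of four stones wins; it discovers that strategy purely by exhausting the rules, which is the spirit, if not the scale, of self-taught play.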

After winning 25 games of chess against Stockfish 8 starting as white, with first-mover advantage, winning a further three starting with black and drawing the remaining 72 games, AlphaZero also learned shogi in two hours before beating the leading program Elmo in a 100-game matchup. AlphaZero won 90 games, lost eight and drew two. The new generalised AlphaZero was also able to beat the “superhuman” former version of itself, AlphaGo, at the Chinese game of Go after only eight hours of self-training, winning 60 games and losing 40.

While experts said the results are impressive, with potential across a wide range of applications to complement human knowledge, professor Joanna Bryson, a computer scientist and AI researcher at the University of Bath, warned that it was “still a discrete task”.


Copycat Robot

Introducing T-HR3, Toyota’s third-generation humanoid robot, designed to explore how clever joints can deliver better balance and true remote control. Toyota says its 29 joints allow it to copy the most complex of moves – safely bringing friendly, helpful robots one step closer.


“Humanoid robots are very popular among Japanese people… creating one like this has always been our dream and that’s why we pursued it,” says Akifumi Tamaoki, manager of the Partner Robot division at Toyota.

The robot is controlled by a remote operator sitting in an exoskeleton, mirroring its master’s moves, with a headset giving the operator a real-time robot’s point of view.

“We’re primarily focused on making this robot a very family-oriented one, so that it can help people, including in roles such as a carer,” explains Tamaoki.
Toyota said T-HR3 could help around homes, medical facilities or construction sites in Japan – a humanoid helping hand designed for a population ageing faster than anywhere else on Earth.


Artificial Intelligence Chip Analyzes Molecular-level Data In Real Time

Nano Global, an Austin-based molecular data company, today announced that it is developing a chip using intellectual property (IP) from Arm, the world’s leading semiconductor IP company. The technology will help redefine how global health challenges – from superbugs and infectious diseases to cancer – are conquered.

The pioneering system-on-chip (SoC) will yield highly-secure molecular data that can be used in the recognition and analysis of health threats caused by pathogens and other living organisms. Combined with the company’s scientific technology platform, the chip leverages advances in nanotechnology, optics, artificial intelligence (AI), blockchain authentication, and edge computing to access and analyze molecular-level data in real time.

“In partnership with Arm, we’re tackling the vast frontier of molecular data to unlock the unlimited potential of this universe,” said Steve Papermaster, Chairman and CEO of Nano Global. “The data our technology can acquire and process will enable us to create a safer and healthier world.”

“We believe the technology Nano Global is delivering will be an important step forward in the collective pursuit of care that improves lives through the application of technology,” explained Rene Haas, executive vice president and president of IPG, Arm. “By collaborating with Nano Global, Arm is taking an active role in developing and deploying the technologies that will move us one step closer to solving complex health challenges.”

Additionally, Nano Global will be partnering with several leading institutions, including Baylor College of Medicine and National University of Singapore, on broad research initiatives in clinical, laboratory, and population health environments to accelerate data collection, analysis, and product development.
The initial development of the chip is in process, with first delivery expected by 2020. The company is already adding new partners to its platform.


AI could be the “worst event in the history of our civilisation”, says Stephen Hawking

Stephen Hawking has sent a stark warning out to the world, stating that the invention of artificial intelligence (AI) could be the “worst event in the history of our civilisation”. Speaking at the Web Summit technology conference in Lisbon, Portugal, the theoretical physicist reiterated his warning against the rise of powerful, conscious machines.
While Prof Hawking admitted that AI could be used for good, he also stated that humans need to find a way to control it so that it does not become more powerful than us, as “computers can, in theory, emulate human intelligence, and exceed it”. Looking at the positives, the 75-year-old said AI could help undo some of the damage that humans have inflicted on the natural world, help beat disease and “transform” every aspect of society. But there are negatives that come with it.

“Success in creating effective AI could be the biggest event in the history of our civilisation. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it. Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilisation. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy,” explained the University of Cambridge alumnus.

Prof Hawking added that to make sure AI is in line with our goals, creators need to “employ best practice and effective management”. But he still has hope: “I am an optimist and I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance.”

Just last week, Prof Hawking warned that AI will replace us as the dominant being on the planet.


AI-controlled Greenhouse Uses 90 Percent Less Water To Produce Salads

Californian startup Iron Ox runs an indoor farm complete with a few hundred plants – and two robot farmers. Instead of using technology to grow genetically modified food, a former Google engineer partnered with a friend who has a PhD in robotics to open a technology-based farm where they plant, seed and grow heads of lettuce.


Iron Ox’s goal is to provide quality produce to everyone without a premium price. According to Natural Society, the average head of lettuce travels 2,055 miles from farm to market, which is why fresh lettuce is often so expensive. Currently, Iron Ox only provides produce to restaurants and grocery stores in the Bay Area of California, which is why, after a daily harvest, their produce is hours fresh rather than shipped in. The company aims to open greenhouses near other major cities, guaranteeing same-day delivery from their trucks at a fraction of the price of the current supply chain.

So why the robots? Lettuce has always been a testing ground for farming innovation, from early greenhouses to closed aquaponic ecosystems. According to Iron Ox, their AI-controlled greenhouse uses 90 percent less water than traditional farms, and because of the technology, each head of lettuce receives intimate individualized attention that is not realistic with human labor. Iron Ox also says that because they grow their products indoors with no pesticides, they don’t have to worry about typical farming issues like stray animals eating their product.

Iron Ox has yet to launch a fully functioning automated greenhouse, but hopes to build its first by the end of 2017. However, Iron Ox is not the only company experimenting with robot farming. Spread, a sustainable farming organization, broke ground in May on its first techno-farm, which will be fully automated and operated by robots growing lettuce. It plans to expand to the Middle East next and then continue growing.

Does this mean the future of produce is automation? Not exactly. Agriculture is a complex business, and not all produce can be greenhouse-grown as efficiently and effectively as lettuce. But it’s one more reason for farmers to be aware of how the robots are coming for us all.


Computer Reads Body Language

Researchers at Carnegie Mellon University‘s Robotics Institute have enabled a computer to understand body poses and movements of multiple people from video in real time — including, for the first time, the pose of each individual’s hands and fingers. This new method was developed with the help of the Panoptic Studio — a two-story dome embedded with 500 video cameras — and the insights gained from experiments in that facility now make it possible to detect the pose of a group of people using a single camera and a laptop computer.

Yaser Sheikh, associate professor of robotics, said these methods for tracking 2-D human form and motion open up new ways for people and machines to interact with each other and for people to use machines to better understand the world around them. The ability to recognize hand poses, for instance, will make it possible for people to interact with computers in new and more natural ways, such as communicating with computers simply by pointing at things.
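Once 2-D hand and arm keypoints are available, an interaction like pointing reduces to geometry. The coordinates below are made up for illustration; a real system would take them from a pose estimator such as the code the CMU researchers have released:

```python
# Illustrative geometry only: the keypoint coordinates are made up.
# A real system would read elbow/wrist positions from a pose estimator.

def pointing_target_x(elbow, wrist, screen_y=0.0):
    """Extend the elbow->wrist ray until it crosses a horizontal line
    (say, the top edge of a display) and return the crossing x."""
    (ex, ey), (wx, wy) = elbow, wrist
    dx, dy = wx - ex, wy - ey
    t = (screen_y - wy) / dy   # how far past the wrist the ray travels
    return wx + t * dx

# an arm pointing up and to the right (image y decreases upward)
print(pointing_target_x(elbow=(100, 300), wrist=(150, 200)))  # 250.0
```

The hard part – detecting the elbow and wrist reliably in the first place, for many people at once – is exactly what the Panoptic Studio work makes practical; the interaction logic on top can stay this simple.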

Detecting the nuances of nonverbal communication between individuals will allow robots to serve in social spaces, allowing robots to perceive what people around them are doing, what moods they are in and whether they can be interrupted. A self-driving car could get an early warning that a pedestrian is about to step into the street by monitoring body language. Enabling machines to understand human behavior also could enable new approaches to behavioral diagnosis and rehabilitation, for conditions such as autism, dyslexia and depression.


We communicate almost as much with the movement of our bodies as we do with our voice,” Sheikh said. “But computers are more or less blind to it.”

In sports analytics, real-time pose detection will make it possible for computers to track not only the position of each player on the field of play, as is now the case, but to know what players are doing with their arms, legs and heads at each point in time. The methods can be used for live events or applied to existing videos.

To encourage more research and applications, the researchers have released their computer code for both multi-person and hand pose estimation. It is being widely used by research groups, and more than 20 commercial groups, including automotive companies, have expressed interest in licensing the technology, Sheikh said.

Sheikh and his colleagues have presented reports on their multi-person and hand pose detection methods at CVPR 2017, the Computer Vision and Pattern Recognition Conference, in Honolulu.


Optical Computer

Researchers at the University of Sydney (Australia) have dramatically slowed digital information carried as light waves by transferring the data into sound waves in an integrated circuit, or microchip. Transferring information from the optical to the acoustic domain and back again inside a chip is critical for the development of photonic integrated circuits: microchips that use light instead of electrons to manage data.

These chips are being developed for use in telecommunications, optical fibre networks and cloud computing data centers where traditional electronic devices are susceptible to electromagnetic interference, produce too much heat or use too much energy.

“The information in our chip in acoustic form travels at a velocity five orders of magnitude slower than in the optical domain,” said Dr Birgit Stiller, research fellow at the University of Sydney and supervisor of the project.

“It is like the difference between thunder and lightning,” she said.

This delay allows the data to be briefly stored and managed inside the chip for processing, retrieval and further transmission as light waves. Light is an excellent carrier of information and is useful for taking data over long distances between continents through fibre-optic cables.

But this speed advantage can become a nuisance when information is being processed in computers and telecommunication systems.
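The scale of that slow-down is easy to put into numbers. The velocities below are representative assumptions for illustration, not figures from the Sydney paper:

```python
# Representative velocities (assumptions, not figures from the paper)
v_light = 2.0e8   # m/s, light guided on-chip
v_sound = 2.0e3   # m/s, a typical acoustic (phonon) velocity

ratio = v_light / v_sound          # ~five orders of magnitude
length = 0.01                      # 1 cm of on-chip path

delay_light = length / v_light     # how long light occupies the path
delay_sound = length / v_sound     # buffer time gained as sound

print(f"sound is {ratio:,.0f}x slower")
print(f"1 cm as light: {delay_light * 1e9:.2f} ns; as sound: {delay_sound * 1e6:.0f} us")
```

A centimetre of waveguide holds light for a few hundredths of a nanosecond, but the same centimetre holds the acoustic copy for microseconds – enough of a buffer to process and retrieve the data before converting it back to light.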