AI Improves Heart Disease Diagnosis

Researchers from the University of Oxford are using artificial intelligence (AI) to improve diagnostic accuracy for heart disease. The team hope to roll out the system across the NHS later this year, helping to improve patient outcomes and saving the millions currently lost to misdiagnosis. The research, led by Prof Paul Leeson and RDM DPhil student Ross Upton (Cardiovascular Clinical Research Facility), took place in the Oxford University Hospitals Foundation Trust and is the basis of spin-out company Ultromics.


Thousands of people every year have an echocardiogram – a type of heart scan – after visiting hospital with chest pain. Clinicians currently assess these scans by eye, taking into account many features that could indicate whether someone has heart disease and whether they are likely to go on to have a heart attack. But even the best-trained cardiologist can misdiagnose patients. Currently, 1 in 5 scans are misdiagnosed each year – the equivalent of 12,000 patients. This means that people are either not being treated to prevent a heart attack, or they are undergoing unnecessary operations to stave off a heart attack they won’t have.

The new system uses machine learning – a form of artificial intelligence – to tap into the rich information provided in an echocardiogram. Using the new system, AI can detect 80,000 subtle changes invisible to the naked eye, improving the accuracy of diagnosis to 90%. The machine learning system was trained using scans from previous patients, alongside data about whether they went on to have a heart attack. The team hope that the improved diagnostic accuracy will not only improve patient care and outcomes, but save the NHS £300 million a year in avoidable operations and treatment. So far the system has been trialled in six cardiology units in the UK. Further implementation of the technology is now being led by Ultromics – a spin-out company co-founded by Ross Upton and Paul Leeson (Cardiovascular Clinical Research Facility). The software will be made available for free throughout the NHS later this year.
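
The article does not describe Ultromics’ actual model, but the general recipe it outlines – extract numerical features from past scans, label each scan with whether the patient went on to have a heart attack, and train a classifier on those pairs – can be sketched in a few lines. The snippet below is a hypothetical illustration using scikit-learn and synthetic data, not the real system; every feature and number is made up.

# Minimal sketch of outcome-labelled training on echo-derived features.
# Hypothetical data and feature names; not Ultromics' actual pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Pretend each row holds numeric features extracted from one echocardiogram
# (e.g. wall-motion and timing measurements) and the label records whether
# the patient later had a cardiac event.
n_patients, n_features = 1000, 32
X = rng.normal(size=(n_patients, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]          # predicted risk per patient
print("AUC on held-out patients:", roc_auc_score(y_test, risk))

In practice the interesting work is in the feature extraction from the scan itself and in validating against real outcomes, which is what the Oxford trials are testing.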


AI Machine Beats Champion Chess Program

AlphaZero, the game-playing AI created by Google sibling DeepMind, has beaten the world’s best chess-playing computer program, having taught itself how to play in under four hours. The repurposed AI, which has repeatedly beaten the world’s best Go players as AlphaGo, has been generalised so that it can now learn other games. It took just four hours to learn the rules of chess before beating the world champion chess program, Stockfish 8, in a 100-game matchup. AlphaZero won or drew all 100 games, according to a non-peer-reviewed research paper published on Cornell University Library’s arXiv.


“Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi [a similar Japanese board game] as well as Go, and convincingly defeated a world-champion program in each case,” said the paper’s authors, who include DeepMind founder Demis Hassabis, a child chess prodigy who reached master standard at the age of 13.

“It’s a remarkable achievement, even if we should have expected it after AlphaGo,” said former world chess champion Garry Kasparov. “We have always assumed that chess required too much empirical knowledge for a machine to play so well from scratch, with no human knowledge added at all.”

Computer programs have been able to beat the best human chess players ever since IBM’s Deep Blue supercomputer defeated Kasparov on 12 May 1997. DeepMind said the difference between AlphaZero and its competitors is that its machine-learning approach is given no human input apart from the basic rules of chess. The rest it works out by playing itself over and over with self-reinforced knowledge. The result, according to DeepMind, is that AlphaZero took an “arguably more human-like approach” to the search for moves, processing around 80,000 positions per second in chess compared to Stockfish 8’s 70 million.
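
The paper itself is the authoritative description; purely as an illustration of the self-play idea – start with only the rules, play against yourself, and push the outcome of each game back into the evaluation used to pick moves – here is a minimal, runnable sketch using noughts-and-crosses and a tabular value function in place of chess and a deep network.

# Illustrative self-play sketch (not DeepMind's code): the agent knows only
# the rules, plays itself, and updates a simple value table from the results.
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, s in enumerate(board) if s == "."]

values = {}                      # board -> estimated value for the player to move
ALPHA, EPSILON = 0.2, 0.1        # learning rate and exploration rate

def value(board):
    return values.get(board, 0.0)

def self_play_game():
    board, player, history = "." * 9, "X", []
    while True:
        history.append(board)
        if winner(board) or not moves(board):
            break
        if random.random() < EPSILON:                    # explore
            m = random.choice(moves(board))
        else:                                            # exploit: pick the move
            m = max(moves(board), key=lambda i:          # that is worst for the opponent
                    -value(board[:i] + player + board[i + 1:]))
        board = board[:m] + player + board[m + 1:]
        player = "O" if player == "X" else "X"
    w = winner(board)
    for i, state in enumerate(history):                  # back up the final result
        to_move = "X" if i % 2 == 0 else "O"
        target = 0.0 if w is None else (1.0 if w == to_move else -1.0)
        values[state] = value(state) + ALPHA * (target - value(state))

for _ in range(20000):
    self_play_game()
print("distinct positions evaluated:", len(values))

AlphaZero replaces the lookup table with a deep network and the one-step lookahead with Monte Carlo tree search, which is why it can afford to examine only tens of thousands of positions per second rather than Stockfish 8’s tens of millions.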

After winning 25 games of chess against Stockfish 8 starting as white (with first-mover advantage), winning a further three starting as black, and drawing the remaining 72, AlphaZero also learned shogi in two hours before beating the leading program Elmo in a 100-game matchup. AlphaZero won 90 games, lost eight and drew two. The new generalised AlphaZero was also able to beat the “superhuman” former version of itself, AlphaGo, at the Chinese game of Go after only eight hours of self-training, winning 60 games and losing 40.

While experts said the results are impressive, and have potential across a wide range of applications to complement human knowledge, Professor Joanna Bryson, a computer scientist and AI researcher at the University of Bath, warned that it was “still a discrete task”.


Artificial Intelligence Chip Analyzes Molecular-level Data In Real Time

Nano Global, an Austin-based molecular data company, today announced that it is developing a chip using intellectual property (IP) from Arm, the world’s leading semiconductor IP company. The technology will help redefine how global health challenges – from superbugs and infectious diseases to cancer – are conquered.

The pioneering system-on-chip (SoC) will yield highly secure molecular data that can be used in the recognition and analysis of health threats caused by pathogens and other living organisms. Combined with the company’s scientific technology platform, the chip leverages advances in nanotechnology, optics, artificial intelligence (AI), blockchain authentication, and edge computing to access and analyze molecular-level data in real time.

“In partnership with Arm, we’re tackling the vast frontier of molecular data to unlock the unlimited potential of this universe,” said Steve Papermaster, Chairman and CEO of Nano Global. “The data our technology can acquire and process will enable us to create a safer and healthier world.”

“We believe the technology Nano Global is delivering will be an important step forward in the collective pursuit of care that improves lives through the application of technology,” explained Rene Haas, executive vice president and president of IPG, Arm. “By collaborating with Nano Global, Arm is taking an active role in developing and deploying the technologies that will move us one step closer to solving complex health challenges.”

Additionally, Nano Global will be partnering with several leading institutions, including Baylor College of Medicine and National University of Singapore, on broad research initiatives in clinical, laboratory, and population health environments to accelerate data collection, analysis, and product development.
The initial development of the chip is in progress, with first delivery expected by 2020. The company is already adding new partners to its platform.


AI Could Be “Worst Event In The History Of Our Civilisation”, Says Stephen Hawking

Stephen Hawking has sent a stark warning out to the world, stating that the invention of artificial intelligence (AI) could be the “worst event in the history of our civilisation”. Speaking at the Web Summit technology conference in Lisbon, Portugal, the theoretical physicist reiterated his warning against the rise of powerful, conscious machines.
While Prof Hawking admitted that AI could be used for good, he also stated that humans need to find a way to control it so that it does not become more powerful than us, as “computers can, in theory, emulate human intelligence, and exceed it.” Looking at the positives, the 75-year-old said AI could help undo some of the damage that humans have inflicted on the natural world, help beat disease and “transform” every aspect of society. But there are negatives that come with it.

“Success in creating effective AI could be the biggest event in the history of our civilisation. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it. Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilisation. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy,” explained the University of Cambridge alumnus.

Prof Hawking added that to make sure AI is in line with our goals, creators need to “employ best practice and effective management.” But he still has hope: “I am an optimist and I believe that we can create AI for the good of the world. That it can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance.”

Just last week, Prof Hawking warned that AI will replace us as the dominant being on the planet.


AI-controlled Greenhouse Uses 90 Percent Less Water To Produce Salads

Californian startup Iron Ox runs an indoor farm complete with a few hundred plants—and two robot farmers. Instead of using technology to grow genetically modified food, a former Google engineer partnered with a friend who has a PhD in robotics to open a technology-based farm where they plant, seed, and grow heads of lettuce.


Iron Ox’s goal is to provide quality produce to everyone without a premium price. According to Natural Society, the average head of lettuce travels 2,055 miles from farm to market, which is why fresh lettuce is often so expensive. Currently, Iron Ox only supplies restaurants and grocery stores in California’s Bay Area, which means that after each daily harvest its produce is only hours old rather than shipped in. The company aims to open greenhouses near other major cities, guaranteeing same-day delivery from its trucks at a fraction of the price of the current supply chain.

So why the robots? Lettuce has always been a testing ground for farming innovation, from early greenhouses to closed aquaponic ecosystems. According to Iron Ox, their AI-controlled greenhouse uses 90 percent less water than traditional farms, and because of the technology, each head of lettuce receives intimate individualized attention that is not realistic with human labor. Iron Ox also says that because they grow their products indoors with no pesticides, they don’t have to worry about typical farming issues like stray animals eating their product.
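
Iron Ox has not published its control logic, so the snippet below is only a hypothetical sketch of the kind of per-plant decision the article implies – dose each plant when its own sensor reading drops, rather than watering everything on a fixed schedule. All names, readings and thresholds are invented for illustration.

# Hypothetical per-plant irrigation sketch; not Iron Ox's actual system.
from dataclasses import dataclass

@dataclass
class Plant:
    plant_id: str
    moisture: float           # fraction of target moisture, from a (hypothetical) sensor

TARGET, DEADBAND = 1.0, 0.1   # water only when clearly below target

def irrigation_plan(plants, ml_per_step=50):
    """Return (plant_id, millilitres) doses for this control cycle."""
    plan = []
    for p in plants:
        if p.moisture < TARGET - DEADBAND:
            deficit = TARGET - p.moisture
            plan.append((p.plant_id, round(deficit * ml_per_step)))
    return plan

if __name__ == "__main__":
    bay = [Plant("A1", 0.95), Plant("A2", 0.62), Plant("A3", 0.80)]
    print(irrigation_plan(bay))   # only the plants that actually need water get a dose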

Iron Ox has yet to launch a fully functioning automated greenhouse, but hopes to build its first by the end of 2017. However, Iron Ox is not the only company to experiment with robot farming. Spread, a sustainable farming organization, broke ground in May on its first techno-farm, which will be fully automated and operated by robots growing lettuce. It plans to expand to the Middle East next and then continue growing.

Does this mean the future of produce is automation? Not exactly. Agriculture is a complex business, and not all produce can be greenhouse-grown as efficiently and effectively as lettuce. But it’s one more reason for farmers to be aware of how the robots are coming for us all.


Artificial Intelligence Checks Identity Using Any Smartphone

Checking your identity using simulated human cognition: aiThenticate says its system goes well beyond conventional facial recognition systems or the biometrics of passwords, fingerprints and eye scans.


“We need to have a much greater level of certainty about who somebody actually is. In order to answer that question, we appealed to deep science, deep learning, to develop an artificial intelligence method – in other words, to replicate, mimic or simulate the way that we as humans intuitively and instinctively go about recognising somebody’s head. That is very different to the conventional, traditional way of face recognition or fingerprint recognition, and for that reason it really represents the next generation of authentication technologies or methods,” says aiThenticate CEO André Immelman.

aiDX uses 16 distinct tests to recognise someone – including eye prints – using a standard, off-the-shelf smartphone to access encrypted data stored in the cloud. It can operate in active mode, asking the user to take a simple selfie, or work discreetly in the background.
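
How the 16 tests are combined is not described in the article; purely as a hypothetical illustration of fusing several independent match scores into a single confidence figure, here is a short sketch. All scores, weights and thresholds are invented.

# Hypothetical score-fusion sketch; not aiThenticate's actual method.
import math

def fuse_scores(scores, weights=None):
    """Weighted log-odds fusion of per-test match scores (each 0..1) into one probability."""
    weights = weights or [1.0] * len(scores)
    logit = 0.0
    for s, w in zip(scores, weights):
        s = min(max(s, 1e-6), 1 - 1e-6)          # keep the log-odds finite
        logit += w * math.log(s / (1 - s))
    return 1 / (1 + math.exp(-logit))

def authenticate(scores, threshold=0.99):
    confidence = fuse_scores(scores)
    return confidence >= threshold, confidence

# e.g. 16 per-test scores from one selfie capture (made-up numbers)
ok, conf = authenticate([0.97, 0.97, 0.92, 0.99] * 4)
print(ok, round(conf, 6))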

André Immelman explains: “It has applications in the security sense, it has applications in a customer services sense – you know the kind of thing: the bank calls you up and says, this is your bank calling, please, where do you live, what is your mother’s name, what’s your dog’s favourite hobby, whatever the case may be. It takes that kind of guesswork out of the equation completely, and it answers the ‘who’ question to much greater levels of confidence or certainty than what traditional or conventional biometrics have been able to do in the past.”

Billions of dollars a year are lost to identity theft globally. aiThenticate hope their new system can help stop at least some of that illegal trade.


Building Brain-Inspired AI Supercomputing System

IBM (NYSE: IBM) and the U.S. Air Force Research Laboratory (AFRL) today announced they are collaborating on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The system’s advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume the energy equivalent of a dim light bulb – a mere 10 watts of power.
IBM researchers believe the brain-inspired, neural network design of TrueNorth will be far more efficient for pattern recognition and integrated sensory processing than systems powered by conventional chips. AFRL is investigating applications of the system in embedded, mobile, autonomous settings where, today, size, weight and power (SWaP) are key limiting factors. The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this “right-brain” perception capability of the system with the “left-brain” symbol processing capabilities of conventional computer systems. The large scale of the system will enable both “data parallelism”, where multiple data sources can be run in parallel against the same neural network, and “model parallelism”, where independent neural networks form an ensemble that can be run in parallel on the same data.
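
Nothing of the TrueNorth software stack appears here; the toy sketch below only illustrates the distinction the announcement draws between data parallelism (one network, many data sources) and model parallelism (an ensemble of networks, one data source), using plain Python callables as stand-ins for networks.

# Illustrative-only sketch of the two parallelism modes described above.
from concurrent.futures import ThreadPoolExecutor

def run_network(network, sample):
    return network(sample)

def data_parallel(network, samples):
    """Same network, many data sources processed side by side."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda s: run_network(network, s), samples))

def model_parallel(networks, sample):
    """An ensemble of independent networks, all run on the same data."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda n: run_network(n, sample), networks))

# Toy stand-ins: each "network" is just a scoring function.
edge_detector = lambda x: x.count("edge")
texture_scorer = lambda x: x.count("texture")

print(data_parallel(edge_detector, ["edge edge", "texture", "edge"]))
print(model_parallel([edge_detector, texture_scorer], "edge texture edge"))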


“AFRL was the earliest adopter of TrueNorth for converting data into decisions,” said Daniel S. Goddard, director, information directorate, U.S. Air Force Research Lab. “The new neurosynaptic system will be used to enable new computing capabilities important to AFRL’s mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage.”

“The evolution of the IBM TrueNorth Neurosynaptic System is a solid proof point in our quest to lead the industry in AI hardware innovation,” said Dharmendra S. Modha, IBM Fellow, chief scientist, brain-inspired computing, IBM Research – Almaden. “Over the last six years, IBM has expanded the number of neurons per system from 256 to more than 64 million – an 800 percent annual increase over six years.”


Artificial Intelligence At The Hospital

Diagnosing cancer is a slow and laborious process. Researchers at University Hospital Zurich painstakingly prepare biopsy slides – up to 50 for each patient – for the pathologist to examine for signs of prostate cancer. A pathologist takes around an hour and a half per patient – a task IBM’s Watson supercomputer is now doing in fractions of a second.

“If the pathologist becomes faster by using such a system, I think it will pay off, because my time is also worth something. If I sit here for one and a half hours looking at slides, screening all these slides, instead of just signing out the two or three positive ones – and taking into account that there may be a 0.1 percent error rate – this will pay off, because at the end I can do five patients in one and a half hours,” says Dr. Peter Wild, University Hospital Zurich.

The hospital’s archive of biopsy images is being slowly fed into Watson – a process that will take years. But maybe one day pathologists won’t have to view slides through a microscope at all. Diagnosis is not the only area benefiting from AI. The technology is helping a University of Sheffield team design a new drug that could slow down the progress of motor neurone disease. A system built by British start-up BenevolentAI is identifying new areas for further exploration far faster than a person could ever hope to.

“Benevolent basically uses their artificial intelligence system to scan the whole medical and biomedical literature. It’s not really easy for us to stay on top of millions of publications that come out every year. So they can interrogate that information, using artificial intelligence, and come up with ideas for new drugs that might be used in a completely different disease, but may be applicable to motor neurone disease. So that’s the real benefit in their system, the kind of novel ideas that they come up with,” explains Dr. Richard Mead, SITraN, University of Sheffield. BenevolentAI has raised one hundred million dollars in investment to develop its AI system and help revolutionise the pharmaceutical industry.
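
BenevolentAI’s system is proprietary, but the basic idea of mining the literature for drug–disease links can be shown with a deliberately simple, hypothetical sketch: count how often known drugs co-occur with disease-related terms across a set of abstracts. Real systems use far richer language models and knowledge graphs; the drug list and abstracts below are illustrative only.

# Hypothetical literature-mining sketch; not BenevolentAI's system.
from collections import Counter

DISEASE_TERMS = {"motor neurone disease", "als", "neurodegeneration"}
KNOWN_DRUGS = {"riluzole", "baclofen", "metformin"}     # illustrative list only

def repurposing_candidates(abstracts):
    """Rank drugs by how often they co-occur with disease terms in abstracts."""
    hits = Counter()
    for text in abstracts:
        text = text.lower()
        if any(term in text for term in DISEASE_TERMS):
            for drug in KNOWN_DRUGS:
                if drug in text:
                    hits[drug] += 1
    return hits.most_common()

abstracts = [
    "Metformin shows unexpected effects on neurodegeneration in mouse models.",
    "Riluzole remains the standard treatment for motor neurone disease.",
    "Baclofen use in spasticity management.",
]
print(repurposing_candidates(abstracts))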


30 Billion Switches On The New IBM Nano-based Chip

IBM is clearly not buying into the idea that Moore’s Law is dead after it unveiled a tiny new transistor that could revolutionise the design, and size, of future devices. Along with Samsung and GlobalFoundries, the tech firm has created a ‘breakthrough’ semiconducting unit made using stacks of nanosheets. The companies say they intend to use the transistors on new five nanometer (nm) chips that feature 30 billion switches on an area the size of a fingernail. When fully developed, the new chip will help with artificial intelligence, the Internet of Things, and cloud computing.

“For business and society to meet the demands of cognitive and cloud computing in the coming years, advancement in semiconductor technology is essential,” said Arvind Krishna, senior vice president, Hybrid Cloud, and director, IBM Research.

IBM has been developing nanometer sheets for the past 10 years and combined stacks of these tiny sheets using a process called Extreme Ultraviolet (EUV) lithography to build the structure of the transistor.

“Using EUV lithography, the width of the nanosheets can be adjusted continuously, all within a single manufacturing process or chip design,” IBM and the other firms said. This allows the transistors to be adjusted for the specific circuits they are to be used in.


Startup Promises Immortality Through AI, Nanotechnology, and Cloning

One of the things humans have plotted for centuries is escaping death, with little to show for it, until now. One startup, Humai, has a plan to make immortality a reality. The CEO, Josh Bocanegra, says that when the time comes and all the necessary advancements are in place, we’ll be able to freeze your brain, create a new, artificial body, repair any damage to your brain, and transfer it into your new body. This process could then be repeated in perpetuity. Humai stands for Human Resurrection through Artificial Intelligence. The technology to accomplish this isn’t here now, but it is on the horizon. Bocanegra says they’ll reach this Promethean feat within 30 years; 2045 is currently their target date. So how do they plan to do it?

“We’re using artificial intelligence and nanotechnology to store data of conversational styles, behavioral patterns, thought processes and information about how your body functions from the inside-out. This data will be coded into multiple sensor technologies, which will be built into an artificial body with the brain of a deceased human,” explains the website.


Artificial Intelligence Tracks Everybody In The Crowd In Real Time

Artificial Intelligence that can pick you out in a crowd and then track your every move. Japanese firm Hitachi‘s new imaging system locks on to at least 100 different characteristics of an individual … including gender, age, hair style, clothes, and mannerisms. Hitachi says it provides real-time tracking and monitoring of crowded areas.


“Until now, we have needed a lot of security guards and people to review security camera footage. We developed this AI software in the hope it would help them do just that,” says Tomokazu Murakami, Hitachi researcher.

The system can help spot a suspicious individual or find a missing child, the makers say. So, an eyewitness could provide a limited description, with the AI software quickly scanning its database for a match.
“In Japan, the demand for such technology is increasing because of the Tokyo 2020 Olympics, but for us we’re developing it in a way so that it can be utilized in many different places such as train stations, stadiums, and even shopping malls,” comments Tomokazu Murakami.
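
Hitachi has not published how the matching works; purely as a hypothetical sketch of the eyewitness scenario described above, the snippet below ranks tracked individuals by how many attributes of a partial description they match. All identifiers and attributes are invented.

# Hypothetical description-matching sketch; not Hitachi's system.
people_in_view = [
    {"id": 17, "gender": "female", "age_band": "child", "hair": "short", "top": "red"},
    {"id": 42, "gender": "male",   "age_band": "adult", "hair": "long",  "top": "red"},
    {"id": 63, "gender": "female", "age_band": "child", "hair": "long",  "top": "blue"},
]

def match(description, candidates):
    """Rank tracked individuals by how many described attributes they match."""
    scored = []
    for person in candidates:
        score = sum(1 for k, v in description.items() if person.get(k) == v)
        scored.append((score, person["id"]))
    return sorted(scored, reverse=True)

# An eyewitness only remembers a few details about a missing child.
print(match({"gender": "female", "age_band": "child", "top": "red"}, people_in_view))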

High-speed tracking of individuals such as this will undoubtedly have its critics. But as Japan prepares to host the 2020 Olympics, Hitachi insists its system can contribute to public safety and security.


A Brain-computer Interface To Combat The Rise of AI

Elon Musk is attempting to combat the rise of artificial intelligence (AI) with the launch of his latest venture, brain-computer interface company Neuralink. Little is known about the startup, aside from what has been revealed in a Wall Street Journal report, which says sources have described it as “neural lace” technology that is being engineered by the company to allow humans to seamlessly communicate with technology without the need for an actual, physical interface. The company has also been registered in California as a medical research entity because Neuralink’s initial focus will be on using the described interface to help with the symptoms of chronic conditions, from epilepsy to depression. This is said to be similar to how deep brain stimulation controlled by an implant helps Matt Eagles, who has Parkinson’s, manage his symptoms effectively. This is far from the first time Musk has shown an interest in merging man and machine. At a Tesla launch in Dubai earlier this year, the billionaire spoke about the need for humans to become cyborgs if we are to survive the rise of artificial intelligence.


“Over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” CNBC reported him as saying at the time. “It’s mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output.” Transhumanism, the enhancement of humanity’s capabilities through science and technology, is already a living reality for many people, to varying degrees. Documentary-maker Rob Spence replaced one of his own eyes with a video camera in 2008; amputees are using prosthetics connected to their own nerves and controlled using electrical signals from the brain; implants are helping tetraplegics regain independence through the BrainGate project.

Former director of the United States Defense Advanced Research Projects Agency (DARPA), Arati Prabhakar, comments: “From my perspective, which embraces a wide swathe of research disciplines, it seems clear that we humans are on a path to a more symbiotic union with our machines.”