Posts belonging to Category Artificial Intelligence

Artificial Intelligence At The Hospital

Diagnosing cancer is a slow and laborious process. Researchers at University Hospital Zurich painstakingly prepare biopsy slides – up to 50 for each patient – for the pathologist to examine for signs of prostate cancer. A pathologist takes around an hour and a half per patient – a task IBM’s Watson supercomputer now performs in a fraction of a second.

“If the pathologist becomes faster by using such a system, I think it will pay off, because my time is also worth something. If I sit here one and a half hours screening all these slides, instead of just signing out the two or three positive ones – and taking into account that there may be a 0.1 percent error rate – this will pay off, because at the end I can do five patients in one and a half hours,” says Dr. Peter Wild of University Hospital Zürich.

The hospital’s archive of biopsy images is being slowly fed into Watson – a process that will take years. But maybe one day pathologists won’t have to view slides through a microscope at all. Diagnosis is not the only area benefiting from AI. The technology is helping this University of Sheffield team design a new drug that could slow down the progress of motor neurone disease. A system built by British start-up BenevolentAI is identifying new areas for further exploration far faster than a person could ever hope to.

“Benevolent basically uses their artificial intelligence system to scan the whole medical and biomedical literature. It’s not really easy for us to stay on top of the millions of publications that come out every year. So they can interrogate that information using artificial intelligence and come up with ideas for drugs that might be used in a completely different disease, but may be applicable to motor neurone disease. That’s the real benefit of their system – the kind of novel ideas that they come up with,” explains Dr. Richard Mead, SITraN, University of Sheffield. BenevolentAI has raised one hundred million dollars in investment to develop its AI system and help revolutionise the pharmaceutical industry.


30 Billion Switches Onto The New IBM Nano-based Chip

IBM is clearly not buying into the idea that Moore’s Law is dead after it unveiled a tiny new transistor that could revolutionise the design, and size, of future devices. Along with Samsung and Globalfoundries, the tech firm has created a ‘breakthrough’ semiconducting unit made using stacks of nanosheets. The companies say they intend to use the transistors on new five nanometer (nm) chips that feature 30 billion switches on an area the size of a fingernail. When fully developed, the new chip will help with artificial intelligence, the Internet of Things, and cloud computing.

“For business and society to meet the demands of cognitive and cloud computing in the coming years, advancement in semiconductor technology is essential,” said Arvind Krishna, senior vice president, Hybrid Cloud, and director, IBM Research.

IBM has been developing nanometer sheets for the past 10 years and combined stacks of these tiny sheets using a process called Extreme Ultraviolet (EUV) lithography to build the structure of the transistor.

“Using EUV lithography, the width of the nanosheets can be adjusted continuously, all within a single manufacturing process or chip design,” IBM and the other firms said. This allows the transistors to be adjusted for the specific circuits they are to be used in.


Startup Promises Immortality Through AI, Nanotechnology, and Cloning

One of the things humans have plotted for centuries is escaping death, with little to show for it – until now. One startup, Humai, has a plan to make immortality a reality. The CEO, Josh Bocanegra, says that when the time comes and all the necessary advancements are in place, we’ll be able to freeze your brain, create a new, artificial body, repair any damage to your brain, and transfer it into your new body. This process could then be repeated in perpetuity. HUMAI stands for: Human Resurrection through Artificial Intelligence. The technology to accomplish this doesn’t exist yet, but it is on the horizon. Bocanegra says they’ll achieve this Promethean feat within 30 years; 2045 is currently their target date. So how do they plan to do it?

“We’re using artificial intelligence and nanotechnology to store data of conversational styles, behavioral patterns, thought processes and information about how your body functions from the inside-out. This data will be coded into multiple sensor technologies, which will be built into an artificial body with the brain of a deceased human,” explains the website.


Legally Blind People Can See With A New Kind Of Glasses

A Canadian company based in Toronto has succeeded in building a kind of Google Glass that can give full sight back to legally blind people. The eSight is an augmented reality headset that houses a high-speed, high-definition camera that captures everything the user is looking at.


Algorithms enhance the video feed and display it on two OLED screens in front of the user’s eyes. Full-color video images are clearly seen by the eSight user with unprecedented visual clarity and virtually no lag. With eSight’s patented Bioptic Tilt capability, users can adjust the device to the precise position that, for them, presents the best view of the video while maximizing peripheral vision. This ensures a user’s balance and prevents nausea – common problems with other immersive technologies. A blind individual can use both of their hands while using eSight to see. It is lightweight, worn comfortably around the eyes and designed for various environments and for use throughout the day.

eSight is a comprehensive, customized medical device that can replace the many single-task assistive devices currently available that do not provide actual sight (e.g. white canes, magnifying devices, service animals, Braille machines, CCTV scanners, text-to-speech software). It allows a user to instantly auto-focus between short-range vision (reading a book or text on a smartphone), mid-range vision (seeing faces or watching TV) and long-range vision (looking down a hallway or outside a window). It is the only device for the legally blind that enables mobility without causing issues of imbalance or nausea (common with other immersive options). A legally blind individual can use eSight not just to see while sitting down but while being independently mobile (e.g. walking, exercising, commuting, travelling, etc.).

According to The Wall Street Journal, the company is taking advantage of recent improvements in technology from VR headsets and smartphones that have trickled down to improve the latest version of the eSight. So far, the company has sold roughly a thousand units, but at $10,000 apiece they’re not cheap (and most insurers apparently don’t cover the product), although eSight’s chief executive Brian Mech notes to the WSJ that getting devices to users is “a battle we are starting to wage.”


Super-material Bends, Shapes And Focuses Sound Waves

These tiny 3D-printed bricks could one day allow people to create their own acoustics. That’s the plan of scientists from the universities of Bristol and Sussex. They’ve invented a metamaterial which bends and manipulates sound in any way the user wants. It’s helped scientists create what they call a ‘sonic alphabet’.


“We have discovered that you just need 16 bricks to make any type of sound that you can imagine. You can shape the sound with just 16 of them, just like you create any word with just 26 letters,” says Dr. Gianluca Memoli, researcher at the Interact Lab at the University of Sussex.

DIY kits like this, full of batches of the 16 aural letters, could help users create a sound library, or even help people in the same car to hear separate things.
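One way to picture how a fixed set of 16 bricks can shape a sound field is to treat each brick as one of 16 quantized phase delays: choosing the right brick for each cell of a panel makes the wave arrive in phase at a chosen focal point. The sketch below is illustrative only – the grid size, frequency, and exact phase-delay interpretation are assumptions, not details from the article:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air
FREQ = 40_000.0          # Hz; an assumed ultrasound working frequency
WAVELENGTH = SPEED_OF_SOUND / FREQ
N_BRICKS = 16            # the 16 "aural letters" from the article

def brick_for(x, y, focal_point):
    """Choose which of the 16 bricks to place at panel cell (x, y) so the
    wave arrives at the focal point in phase (a simple acoustic lens)."""
    fx, fy, fz = focal_point
    path = math.sqrt((x - fx) ** 2 + (y - fy) ** 2 + fz ** 2)
    phase = (path / WAVELENGTH) % 1.0           # fractional wavelengths of delay
    return round(phase * N_BRICKS) % N_BRICKS   # quantized to 16 brick types

# Pick bricks for a 4x4 panel (1 cm cells) focusing 10 cm above its centre.
panel = [[brick_for(x * 0.01, y * 0.01, (0.015, 0.015, 0.10))
          for x in range(4)] for y in range(4)]
for row in panel:
    print(row)
```

Because the delays are quantized, any desired field is approximated by picking from the same small set of bricks – the sense in which 16 "letters" suffice.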

“With our device, what you can have is you can strap a static piece on top of existing speakers and they can direct sound in two different directions without any overlap. So the passengers can hear completely different information from the driver,” explains Professor Sri Subramanian, Interact Lab at the University of Sussex. This technology is more than five years away, but smaller versions could be used to direct medical ultrasound devices far sooner. “In a year we could have a sleeve that we can put on top of already existing products in the market and make them just a little bit better. For example, we can have a sleeve that goes on top of ultrasound devices that are used for therapeutic pain relief,” he adds.
Researchers say spatial sound modulators will one day allow us to perform audible tasks previously unheard of.


Stephen Hawking Warns: Only 100 Years Left For Humankind Before Extinction

It’s no secret that physicist Stephen Hawking thinks humans are running out of time on planet Earth.

In a new BBC documentary, Hawking will test his theory that humankind must colonize another planet or perish in the next 100 years. The documentary, Stephen Hawking: Expedition New Earth, will air this summer as part of the BBC’s Tomorrow’s World season and will showcase that Hawking’s aspiration “isn’t as fantastical as it sounds,” according to the BBC.

For years, Hawking has warned that humankind faces a slew of threats ranging from climate change to destruction from nuclear war and genetically engineered viruses.

While things look bleak, there is some hope, according to Hawking. Humans must set their sights on another planet or perish on Earth.

“We must also continue to go into space for the future of humanity,” Hawking said during a 2016 speech at Britain’s Oxford University Union. In the past, Hawking has suggested that humankind might not survive another 1,000 years without escaping beyond our fragile planet. The BBC documentary hints at an adjusted timeframe for colonization, which many may see in their lifetime.

Artificial Intelligence Tracks In Real Time Everybody In The Crowd

Artificial Intelligence that can pick you out in a crowd and then track your every move. Japanese firm Hitachi‘s new imaging system locks on to at least 100 different characteristics of an individual … including gender, age, hair style, clothes, and mannerisms. Hitachi says it provides real-time tracking and monitoring of crowded areas.


“Until now, we have needed a lot of security guards and people to review security camera footage. We developed this AI software in the hope it would help them do just that,” says Tomokazu Murakami, Hitachi researcher.

The system can help spot a suspicious individual or find a missing child, the makers say. So, an eyewitness could provide a limited description, with the AI software quickly scanning its database for a match.

“In Japan, the demand for such technology is increasing because of the Tokyo 2020 Olympics, but we’re developing it in a way that it can be utilized in many different places, such as train stations, stadiums, and even shopping malls,” comments Tomokazu Murakami.
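The eyewitness use case amounts to filtering tracked attribute profiles against a partial description. A toy sketch (the attribute names and records are invented here; Hitachi’s system extracts around 100 characteristics from live video):

```python
# Tracked individuals, each reduced to a profile of discrete attributes.
people = [
    {"id": 1, "gender": "male", "age_band": "30s", "hair": "short", "coat": "red"},
    {"id": 2, "gender": "female", "age_band": "20s", "hair": "long", "coat": "blue"},
    {"id": 3, "gender": "male", "age_band": "40s", "hair": "short", "coat": "red"},
]

def match(description):
    """Return the ids of every tracked person consistent with a
    partial eyewitness description (unknown attributes are ignored)."""
    return [p["id"] for p in people
            if all(p.get(key) == value for key, value in description.items())]

print(match({"hair": "short", "coat": "red"}))  # → [1, 3]
```

A limited description narrows the candidates rather than identifying one person outright, which is why more characteristics make the match more specific.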

High-speed tracking of individuals such as this will undoubtedly have its critics. But as Japan prepares to host the 2020 Olympics, Hitachi insists its system can contribute to public safety and security.


A Brain-computer Interface To Combat The Rise of AI

Elon Musk is attempting to combat the rise of artificial intelligence (AI) with the launch of his latest venture, brain-computer interface company Neuralink. Little is known about the startup, aside from what has been revealed in a Wall Street Journal report, whose sources describe a “neural lace” technology being engineered by the company to allow humans to communicate seamlessly with technology without the need for an actual, physical interface. The company has also been registered in California as a medical research entity because Neuralink’s initial focus will be on using the described interface to help with the symptoms of chronic conditions, from epilepsy to depression. This is said to be similar to how deep brain stimulation controlled by an implant helps Matt Eagles, who has Parkinson’s, manage his symptoms effectively. This is far from the first time Musk has shown an interest in merging man and machine. At a Tesla launch in Dubai earlier this year, the billionaire spoke about the need for humans to become cyborgs if we are to survive the rise of artificial intelligence.


“Over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” CNBC reported him as saying at the time. “It’s mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output.” Transhumanism, the enhancement of humanity’s capabilities through science and technology, is already a living reality for many people, to varying degrees. Documentary-maker Rob Spence replaced one of his own eyes with a video camera in 2008; amputees are using prosthetics connected to their own nerves and controlled using electrical signals from the brain; implants are helping tetraplegics regain independence through the BrainGate project.

Former director of the United States Defense Advanced Research Projects Agency (DARPA), Arati Prabhakar, comments: “From my perspective, which embraces a wide swathe of research disciplines, it seems clear that we humans are on a path to a more symbiotic union with our machines.”


Artificial Intelligence Writes Code By Looting

Artificial intelligence (AI) has taught itself to create its own encryption and produced its own universal ‘language’. Now it’s writing its own code using similar techniques to humans. A neural network called DeepCoder, developed by Microsoft and University of Cambridge computer scientists, has learnt how to write programs without prior knowledge of code. DeepCoder solved basic challenges of the kind set by programming competitions. This kind of approach could make it much easier for people to build simple programs without knowing how to write code.


“All of a sudden people could be so much more productive,” says Armando Solar-Lezama at the Massachusetts Institute of Technology, who was not involved in the work. “They could build systems that [it would be] impossible to build before.”

Ultimately, the approach could allow non-coders to simply describe an idea for a program and let the system build it, says Marc Brockschmidt, one of DeepCoder’s creators at Microsoft Research in Cambridge, UK.

DeepCoder uses a technique called program synthesis: creating new programs by piecing together lines of code taken from existing software – just as a programmer might. Given a list of inputs and outputs for each code fragment, DeepCoder learned which pieces of code were needed to achieve the desired result overall.
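The enumerate-and-check core of program synthesis can be sketched in a few lines. The toy DSL below is invented for illustration – DeepCoder works over its own list-processing DSL and, crucially, uses a trained neural network to predict which primitives to try first rather than searching blindly:

```python
from itertools import product

# A toy DSL of list-transforming primitives (names invented for illustration).
DSL = {
    "reverse": lambda xs: xs[::-1],
    "sort": lambda xs: sorted(xs),
    "double": lambda xs: [2 * x for x in xs],
    "evens": lambda xs: [x for x in xs if x % 2 == 0],
    "drop_first": lambda xs: xs[1:],
}

def synthesize(examples, max_depth=3):
    """Search compositions of DSL primitives, shortest first, until one
    maps every example input to its expected output."""
    for depth in range(1, max_depth + 1):
        for names in product(DSL, repeat=depth):
            def run(xs, names=names):
                for name in names:
                    xs = DSL[name](xs)
                return xs
            if all(run(inp) == out for inp, out in examples):
                return list(names)
    return None

# Find a program that keeps the even numbers and doubles them.
examples = [([1, 2, 3, 4], [4, 8]), ([5, 6], [12])]
print(synthesize(examples))  # → ['evens', 'double']
```

Brute-force search like this explodes combinatorially with program length, which is exactly the cost DeepCoder’s learned guidance is designed to cut down.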


How To Fine-Tune NanoFabrication

Daniel Packwood, Junior Associate Professor at Kyoto University’s Institute for Integrated Cell-Material Sciences (iCeMS), is improving methods for constructing tiny “nanomaterials” using a “bottom-up” approach called “molecular self-assembly”. Using this method, molecules are chosen according to their ability to spontaneously interact and combine to form shapes with specific functions. In the future, this method may be used to produce tiny wires with diameters 1/100,000th that of a human hair, or tiny electrical circuits that can fit on the tip of a needle.


Molecular self-assembly is a spontaneous process that cannot be controlled directly by laboratory equipment, so it must be controlled indirectly. This is done by carefully choosing the direction of the intermolecular interactions, known as “chemical control”, and carefully choosing the temperature at which these interactions happen, known as “entropic control”. Researchers know that when entropic control is very weak, for example, molecules are under chemical control and assemble in the direction of the free sites available for molecule-to-molecule interaction. On the other hand, self-assembly does not occur when entropic control is much stronger than the chemical control, and the molecules remain randomly dispersed.

Packwood teamed up with colleagues in Japan and the U.S. to develop a computational method that allows them to simulate molecular self-assembly on metal surfaces while separating the effects of chemical and entropic controls. This new computational method makes use of artificial intelligence to simulate how molecules behave when placed on a metal surface. Specifically, a “machine learning” technique is used to analyse a database of intermolecular interactions. This machine learning technique builds a model that encodes the information contained in the database, and in turn this model can predict the outcome of the molecular self-assembly process with high accuracy.
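In spirit, such a surrogate model maps interaction features to an assembly outcome. A minimal stand-in is a k-nearest-neighbour vote over a toy database – the numbers below are invented, and the real work uses computed intermolecular interaction data and a far more sophisticated model:

```python
import math

# Invented toy "database": (interaction_strength_eV, temperature_K) labelled
# by whether the molecules self-assembled in a prior simulation.
database = [
    ((0.9, 100), "assembled"),
    ((0.8, 150), "assembled"),
    ((0.7, 200), "assembled"),
    ((0.4, 350), "dispersed"),
    ((0.3, 400), "dispersed"),
    ((0.2, 450), "dispersed"),
]

def predict(strength, temperature, k=3):
    """k-nearest-neighbour vote: strong chemical control (interaction strength)
    favours assembly; strong entropic control (temperature) favours dispersal."""
    def distance(entry):
        (s, t), _ = entry
        # Scale temperature so both features contribute comparably.
        return math.hypot(s - strength, (t - temperature) / 500.0)
    votes = [label for _, label in sorted(database, key=distance)[:k]]
    return max(set(votes), key=votes.count)

print(predict(0.85, 120))  # → assembled (chemical control dominates)
```

The point of the learned model is the same as here: once trained on the database, it predicts the outcome of a proposed self-assembly process without running the full simulation.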


A “NaNose” Device Identifies 17 Types Of Diseases With A Single Sniff

The future of early diagnosis of disease could be this simple, according to a team of researchers in Israel. The ‘NaNose’, as they call it, can differentiate between 17 types of disease with a single sniff, identifying so-called smelly compounds in anything from cancers to Parkinson’s.


“Indeed, what we have found in our most recent research in this regard is that 17 types of disease have 13 common compounds that are found in all the different types of disease, but the mixture of the compounds and the composition of these compounds changes from one disease to another. And this is what is really unique and what we really expect to see and utilize in order to make the diagnosis from exhaled breath,” says Professor Hossam Haick from the Technion – Israel Institute of Technology.

The NaNose uses an “artificially intelligent nanoarray” of sensors to analyze the data obtained from receptors that “smell” the patient’s breath.

“So our main idea is to try and imitate what’s going on in nature. So just like we can take a canine – a dog – and train it to scent the smell of drugs, of explosives or a missing person, we are trying to do it artificially. And we can do that by using these nano-materials, and we build these nano-material-based sensors. And of course there are many advantages, and one of them is going all the way from sensors as big as this to really small devices like this, that have on them eight sensors and which can be incorporated into systems like this, or even smaller,” explains Doctor Yoav Broza from Technion.
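Since each disease is distinguished by the composition of a shared set of compounds, classification amounts to comparing a breath sample’s compound profile against known disease profiles. A toy sketch (the profiles and numbers are invented; the real system applies machine learning to nanoarray sensor signals):

```python
# Invented compound-composition "fingerprints": relative abundance of four
# of the common breath compounds for each condition.
profiles = {
    "healthy": [0.5, 0.2, 0.1, 0.2],
    "lung_cancer": [0.1, 0.6, 0.2, 0.1],
    "parkinsons": [0.2, 0.1, 0.6, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two compound-composition vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def classify(sample):
    """Label a breath sample with the most similar condition profile."""
    return max(profiles, key=lambda disease: cosine(sample, profiles[disease]))

print(classify([0.12, 0.55, 0.22, 0.11]))  # → lung_cancer
```

The same compounds appear in every sample; it is the shape of the mixture, not the presence of any single compound, that carries the diagnostic signal.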

Several companies are now trying to commercialize the technology – and encourage its use in healthcare systems… or see it incorporated into your smartphone.


First Driverless Electric Bus Line Opened In Paris

Shuttling their way to a greener city: Paris opened its first driverless buses to the public on Monday. Fully electric and fully autonomous, the ‘EZ 10’ transports up to 10 passengers across the Seine between two main stations. The buses use laser sensors to analyse their surroundings on the road, and for now they don’t have to share it with any other vehicles.


“Fewer people come on board, it’s slower, it’s electric, it doesn’t pollute and it can be stored away more easily, but it will never replace a traditional bus,” says Jose Gomes, who has been driving buses here for 26 years. He’ll oversee the smooth operation of the autonomous bus.

The shuttles come as Paris faces high pollution levels. City mayor Anne Hidalgo wants to reduce the number of cars, while authorities tighten traffic restrictions. It may be a short 130m stretch for the buses, but for Paris it’s a big step towards promoting cleaner transport.