Posts belonging to Category Artificial Intelligence



The Ultra Smart Community Of The Future

Japan’s largest electronics show, CEATEC, is showcasing its version of our future: a connected world with intelligent robots and cars that know when the driver is falling asleep. This is Omron‘s “Onboard Driving Monitoring Sensor,” which checks that the driver isn’t distracted.


“We are developing sensors that help the car judge what state the driver is in with regard to driving. For example, whether the driver has his eyes open and set on the things he should be looking at, or whether he is distracted or looking at a smartphone, these types of situations,” explains Masaki Suwa, Omron Corp. Chief Technologist.

After 18 years as a consumer electronics show, CEATEC is changing focus to the Internet of Things and what it calls ‘the ultra-smart community of the future’. A future where machines take on more important roles – machines like Panasonic‘s CaloRieco: pop in your plate and it knows exactly what you are about to consume.

“By placing freshly cooked food inside the machine, you can measure total calories and the three main nutrients: protein, fat and carbohydrate. By using this machine, you can easily manage your diet,” says Panasonic staff engineer Ryota Sato.

Even playtime will see machines more involved – like Forpheus, the ping-pong-playing robot, here taking on an Olympic bronze medalist and learning with every stroke.
Jun Mizutani, table tennis bronze medalist at the Rio Olympics, reports: “It wasn’t any different from playing with a human being. The robot kept improving and getting better as we played, and to be honest, I wanted to play with it when it had reached its maximum level, to see how good it is.”

Computer Reads Body Language

Researchers at Carnegie Mellon University‘s Robotics Institute have enabled a computer to understand body poses and movements of multiple people from video in real time — including, for the first time, the pose of each individual’s hands and fingers. This new method was developed with the help of the Panoptic Studio — a two-story dome embedded with 500 video cameras — and the insights gained from experiments in that facility now make it possible to detect the pose of a group of people using a single camera and a laptop computer.

Yaser Sheikh, associate professor of robotics, said these methods for tracking 2-D human form and motion open up new ways for people and machines to interact with each other and for people to use machines to better understand the world around them. The ability to recognize hand poses, for instance, will make it possible for people to interact with computers in new and more natural ways, such as communicating with computers simply by pointing at things.

Detecting the nuances of nonverbal communication between individuals will allow robots to serve in social spaces, allowing robots to perceive what people around them are doing, what moods they are in and whether they can be interrupted. A self-driving car could get an early warning that a pedestrian is about to step into the street by monitoring body language. Enabling machines to understand human behavior also could enable new approaches to behavioral diagnosis and rehabilitation, for conditions such as autism, dyslexia and depression.


“We communicate almost as much with the movement of our bodies as we do with our voice,” Sheikh said. “But computers are more or less blind to it.”

In sports analytics, real-time pose detection will make it possible for computers to track not only the position of each player on the field of play, as is now the case, but to know what players are doing with their arms, legs and heads at each point in time. The methods can be used for live events or applied to existing videos.

To encourage more research and applications, the researchers have released their computer code for both multi-person and hand pose estimation. It is being widely used by research groups, and more than 20 commercial groups, including automotive companies, have expressed interest in licensing the technology, Sheikh said.
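The released code is designed to run on ordinary video; as a rough illustration of how such a pipeline is typically driven from a single camera and a laptop, here is a minimal Python sketch. The `pose_estimator` module and its `load`/`detect` calls are hypothetical stand-ins, not the actual interface of the CMU code.

```python
# Minimal sketch: real-time multi-person pose estimation on a webcam feed.
# Assumes a hypothetical wrapper `pose_estimator` exposing load()/detect();
# the interface of the released CMU code may differ.
import cv2                      # OpenCV for camera capture and drawing
import pose_estimator           # hypothetical wrapper around the released model

estimator = pose_estimator.load(body=True, hands=True)

cap = cv2.VideoCapture(0)       # single off-the-shelf camera, as in the article
while True:
    ok, frame = cap.read()
    if not ok:
        break
    people = estimator.detect(frame)                 # per-person keypoint sets
    for person in people:
        for x, y, confidence in person.keypoints:    # body and finger joints
            if confidence > 0.3:                     # draw only confident joints
                cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
    cv2.imshow("poses", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```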

Sheikh and his colleagues presented reports on their multi-person and hand pose detection methods at CVPR 2017, the Conference on Computer Vision and Pattern Recognition, in Honolulu.

Source: https://www.cmu.edu/

Optical Computer

Researchers at the University of Sydney (Australia) have dramatically slowed digital information carried as light waves by transferring the data into sound waves in an integrated circuit, or microchip. Transferring information from the optical to the acoustic domain and back again inside a chip is critical for the development of photonic integrated circuits: microchips that use light instead of electrons to manage data.

These chips are being developed for use in telecommunications, optical fibre networks and cloud computing data centers where traditional electronic devices are susceptible to electromagnetic interference, produce too much heat or use too much energy.

“The information in our chip in acoustic form travels at a velocity five orders of magnitude slower than in the optical domain,” said Dr Birgit Stiller, research fellow at the University of Sydney and supervisor of the project.

“It is like the difference between thunder and lightning,” she said.

This delay allows the data to be briefly stored and managed inside the chip for processing, retrieval and further transmission as light waves. Light is an excellent carrier of information and is useful for taking data over long distances between continents through fibre-optic cables.

But this speed advantage can become a nuisance when information is being processed in computers and telecommunication systems.
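To put the “thunder and lightning” comparison in numbers, here is a back-of-the-envelope sketch. The velocities are illustrative assumptions (light in an on-chip waveguide at roughly 2×10⁸ m/s, sound at roughly 2×10³ m/s, i.e. five orders of magnitude slower), not figures from the paper.

```python
# Back-of-the-envelope delay comparison (illustrative values, not measured ones).
light_speed_in_chip = 2e8      # m/s, rough speed of light in a waveguide
sound_speed_in_chip = 2e3      # m/s, ~5 orders of magnitude slower
path_length = 0.01             # 1 cm of on-chip path

optical_delay = path_length / light_speed_in_chip    # ~50 picoseconds
acoustic_delay = path_length / sound_speed_in_chip   # ~5 microseconds

print(f"optical transit:  {optical_delay * 1e12:.1f} ps")
print(f"acoustic transit: {acoustic_delay * 1e6:.1f} µs")
print(f"slow-down factor: {acoustic_delay / optical_delay:.0f}x")
```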

Source: https://sydney.edu.au/

China, Global Leader In Nanoscience

Mobile phones, computers, cosmetics, bicycles: nanoscience is hiding in so many everyday items, wielding a huge influence on our lives at the microscale. Scientists and engineers from around the world exchanged new findings and insights on nanotechnology at the 7th International Conference on Nanoscience and Technology (ChinaNANO 2017), held in Beijing last week. China has become a nanotechnology powerhouse, according to a report released at the conference. China’s applied nanoscience research and the industrialization of nanotechnology have been developing steadily, with the number of nano-related patent applications ranking among the highest in the world.

According to Bai Chunli, president of the Chinese Academy of Sciences (CAS), China faces new opportunities for nanoscience research and development as it builds the National Center for Nanoscience and Technology (NCNST) and globally influential national science centers.

“We will strengthen the strategic landscape and top-down design for developing nanoscience, which will contribute greatly to the country’s economy and society,” said Bai.

Nanoscience can be defined as the study of the interaction, composition, properties and manufacturing methods of materials at a nanometer scale. At such tiny scales, the physical, chemical and biological properties of materials are different from those at larger scales — often profoundly so.

For example, alloys that are weak or brittle become strong and ductile; compounds that are chemically inert become powerful catalysts. It is estimated that there are more than 1,600 nanotechnology-based consumer products on the market, including lightweight but sturdy tennis rackets, bicycles, suitcases, automobile parts and rechargeable batteries. Nanomaterials are used in hairdryers or straighteners to make them lighter and more durable. The secret of how sunscreens protect skin from sunburn lies in the nanometer-scale titanium dioxide or zinc oxide they contain.

In 2016, the world’s first one-nanometer transistor was created. It was made from carbon nanotubes and molybdenum disulphide, rather than silicon.
Carbon nanotubes or silver nanowires enable touch screens on computers and televisions to be flexible, said Zhu Xing, chief scientist at the NCNST. Nanotechnology is also having an increasing impact on healthcare, with progress in drug delivery, biomaterials, imaging, diagnostics, active implants and other therapeutic applications. The biggest current concern is the health threat posed by nanoparticles, which can easily enter the body via the airways or skin. Construction workers exposed to nanopollutants face increased health risks.

The report was co-produced by Springer Nature, the National Center for Nanoscience and Technology (NCNST) and the National Science Library of the Chinese Academy of Sciences (CAS).

Source: http://www.shanghaidaily.com/

No More Visits To The Doctor’s Office

A visit to the doctor’s office can feel like the worst thing when you’re already sick. This small device is aimed at replacing physical face-to-face check-ups. It’s made by Israel’s Tytocare, a leading telemedicine company. Their Tyto device allows patients to conduct examinations of organs and be diagnosed by remote clinicians.


“We basically replicate a face-to-face interaction with a remote clinician while allowing him to do a full physical examination, analysis and diagnosis of a patient at home,” said Dedi Gilad, CEO of Tytocare.

The associated TytoApp guides users through complicated examinations. It can be used to check heart rate or temperature, as well as to conduct examinations of the ears, throat and lungs, and it allows a clinician to interact with patients online or offline. It also represents a significant cost saving – in the US a basic primary care visit costs around 170 dollars, three times the cost of a telemedicine appointment. The system was tested at Israel’s Schneider Children’s Hospital.

“What we found was really remarkable: there was almost no difference between the two types of examinations… But we must be careful about the use. There are certain diseases, certain complaints, that cannot be answered by this kind of device, and we should carefully judge case by case and be aware of the limitations of this device,” explains Prof. Yehezkel Waisman, director of the Emergency Medicine Department at Schneider Children’s Hospital.

Telemedicine does have its critics, who believe that real-time encounters with a doctor will always be superior. But those behind it say it could drastically cut the number of face-to-face doctors’ visits and save money for healthcare providers and insurers.

Source: http://www.tytocare.com/
AND
http://www.reuters.com/

Pilotless Cargo Flights By 2025

Pilotless planes would save airlines $35bn (£27bn) a year and could lead to substantial fare cuts – if passengers were able to stomach the idea of remote-controlled flying, according to new research. The savings for carriers could be huge, said investment bank UBS, even though it may take until the middle of the century for passengers to have enough confidence to board a pilotless plane. UBS estimated that pilots cost the industry $31bn a year, plus another $3bn in training, and that fully automated planes would fly more efficiently, saving another $1bn a year in fuel.

Passengers could benefit from a reduction in ticket prices of about a tenth, the report said. “The average percentage of total cost and average benefit that could be passed onto passengers in price reduction for the US airlines is 11%,” it said, although the savings in Europe would be less, at 4% on average but rising to 8% at Ryanair. Aircraft costs and fuel make up a much larger proportion of costs at airlines than pilot salaries, but UBS said profits at some major airlines could double if they switched to pilotless planes.

More than half of the 8,000 people UBS surveyed, however, said they would refuse to travel in a pilotless plane, even if fares were cut. “Some 54% of respondents said they were unlikely to take a pilotless flight, while only 17% said they would likely undertake a pilotless flight. Perhaps surprisingly, half of the respondents said that they would not buy the pilotless flight ticket even if it was cheaper,” the report said. It added, however, that younger and more educated respondents were more willing to fly on a pilotless plane. “This bodes well for the technology as the population ages,” it said.
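For reference, the UBS arithmetic quoted above can be laid out in a few lines (the figures are those reported in the article; the calculation itself is just a sketch).

```python
# Reproducing the UBS savings figures quoted in the article.
pilot_costs  = 31e9   # $ per year spent on pilots
training     = 3e9    # $ per year spent on pilot training
fuel_savings = 1e9    # $ per year from more efficient automated flying

total_savings = pilot_costs + training + fuel_savings
print(f"total annual savings: ${total_savings / 1e9:.0f}bn")   # $35bn

# Share of savings that could reach passengers as lower fares
fare_cut = {"US airlines": 0.11, "Europe (average)": 0.04, "Ryanair": 0.08}
for market, cut in fare_cut.items():
    print(f"{market}: ~{cut:.0%} cheaper tickets")
```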

Source: https://www.theguardian.com/

Use The Phone And See 3D Content Without 3D Glasses

RED, the company known for making some truly outstanding high-end cinema cameras, is set to release a smartphone in Q1 of 2018 called the HYDROGEN ONE. RED says that it is a standalone, unlocked and fully-featured smartphone “operating on Android OS that just happens to add a few additional features that shatter the mold of conventional thinking.” Yes, you read that right. This phone will blow your mind, or something – and it will even make phone calls.

In a press release riddled with buzzwords broken up by linking verbs, RED praises its yet-to-be-released smartphone with some serious adjectives. If this press release weren’t hosted on RED‘s actual server, we would swear it was satire. Here is a smattering of phrases found in the release.

Incredible retina-riveting display
Nanotechnology
Holographic multi-view content
RED Hydrogen 4-View content
Assault your senses
Proprietary H3O algorithm
Multi-dimensional audio

There are two models of the phone, which run at different prices. The Aluminum model will cost $1,195, but anyone worth their salt is going to go for the $1,595 Titanium version. Gotta shed that extra weight, you know?

Those are snippets from just the first three sections, of which there are nine. I get hyping a product, but this reads like a catalog seen in the background of a science-fiction comedy, meant to sound ridiculous – especially in the context of a fictitious universe.

Except that this is real life.

After spending a few minutes removing all the glitter words from this release, it looks like it will be a phone using a display similar to what you get with the Nintendo 3DS, or what The Verge suggests may be better than the flopped Amazon Fire Phone. Essentially, you should be able to use the phone and see 3D content without 3D glasses. Nintendo has already proven that this can work, though it can really tire out your eyes. As an owner of three different Nintendo 3DS consoles, I can say that I rarely use the 3D feature because of how it makes my eyes hurt. It’s an odd sensation. It is probably why Nintendo has released a new handheld that has the same power as the 3DS but drops the 3D feature altogether.

Anyway, back to the HYDROGEN ONE: RED says that it will work in tandem with their cameras as a user interface and monitor. It will also display what RED is calling “holographic content,” which isn’t well described by RED in this release. We can assume it is some sort of mixed-dimensional view that makes certain parts of a video or image stand out over the others.

Source: http://www.red.com/
AND
http://www.imaging-resource.com/

Nanoweapons Against North Korea

Unless you’re working in the field, you have probably never heard about U.S. nanoweapons. This is intentional. The United States, Russia and China are each spending billions of dollars per year developing nanoweapons, but all development is secret. Even after Pravda.ru’s June 6, 2016 headline, “US nano weapon killed Venezuela’s Hugo Chavez, scientists say,” the U.S. offered no response.

Earlier this year, on May 5, 2017, North Korea claimed the CIA had plotted to kill Kim Jong Un using a radioactive nano poison, similar to the nanoweapon Venezuelan scientists claim the U.S. used to assassinate former Venezuelan President Hugo Chavez. All major media covered North Korea’s claim. These accusations are substantial, but are they true? Let’s address this question.

Unfortunately, until earlier this year, nanoweapons received little media attention. However, in March 2017 that changed with the publication of the book Nanoweapons: A Growing Threat to Humanity (2017, Potomac Books), which inspired two articles. On March 9, 2017, American Security Today published “Nanoweapons: A Growing Threat to Humanity – Louis A. Del Monte,” and on March 17, 2017, CNBC published “Mini-nukes and mosquito-like robot weapons being primed for future warfare.” Suddenly, the genie was out of the bottle. The CNBC article became the most popular on their website for two days following its publication and garnered 6.5K shares. Still, compared to other classes of military weapons, nanoweapons remain obscure. In fact, most people have never even heard the term. If you find this surprising, recall that most people had never heard of stealth aircraft until their highly publicized use during the first Gulf War in 1990. Today, almost everyone who reads the news knows about stealth aircraft. This may become the case with nanoweapons, but for now, they remain obscure to the public.

Given their relative obscurity, we’ll start by defining nanoweapons. A nanoweapon is any military weapon that exploits the power of nanotechnology. This, of course, raises another question: What is nanotechnology? According to the United States National Nanotechnology Initiative’s website, nano.gov, “Nanotechnology is science, engineering, and technology conducted at the nanoscale, which is about 1 to 100 nanometers.” To put this in simple terms, the diameter of a typical human hair equals 100,000 nanometers. This means nanotechnology is invisible to the naked eye and even under an optical microscope.
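Putting the figures above side by side makes the scale concrete (a trivial calculation using only the numbers quoted in this paragraph):

```python
# Scale comparison from the figures in the paragraph above.
hair_diameter_nm = 100_000          # diameter of a typical human hair
nanoscale_nm = (1, 100)             # the range nano.gov defines as "nanoscale"

for size in nanoscale_nm:
    print(f"a {size} nm object is {hair_diameter_nm // size:,}x "
          f"thinner than a human hair")
# => 1 nm is 100,000x thinner; 100 nm is 1,000x thinner
```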

Source: http://www.huffingtonpost.com/

Artificial Intelligence Checks Identity Using Any Smartphone

Checking your identity using simulated human cognition: aiThenticate says its system goes way beyond conventional facial recognition systems, passwords, fingerprints and eye scans.


“We need to have a much greater level of certainty about who somebody actually is. In order to answer that question, we appealed to deep science, deep learning, to develop an AI method, an artificial intelligence method, in other words, to replicate or to mimic or to simulate the way that we as humans intuitively and instinctively go about recognizing somebody’s head. It is very different to the conventional, traditional way of face recognition or fingerprint recognition, and for that reason really represents the next generation of authentication technologies or methods,” says aiThenticate CEO André Immelman.

aiDX uses 16 distinct tests to recognise someone, including eye prints, using a standard off-the-shelf smartphone to access encrypted data stored in the cloud. It can operate in active mode, asking the user to take a simple selfie, or work discreetly in the background.
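As a purely generic illustration of how several biometric test scores might be fused into a single confidence value, consider the sketch below. This is not aiThenticate’s actual algorithm; the test names, weights and threshold are invented for the example.

```python
# Generic illustration of fusing several biometric test scores into one
# confidence value. NOT aiThenticate's algorithm; names and numbers are made up.
test_scores = {
    "eye_print": 0.97,
    "face_geometry": 0.93,
    "skin_texture": 0.88,
    # ... a real system of this kind might run 16 such tests
}

def identity_confidence(scores, weights=None):
    """Weighted average of individual test scores, in [0, 1]."""
    if weights is None:
        weights = {name: 1.0 for name in scores}
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

confidence = identity_confidence(test_scores)
print(f"identity confidence: {confidence:.2f}")
if confidence > 0.9:          # threshold chosen for the example only
    print("grant access to encrypted cloud data")
```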

André Immelman explains: “It has applications in the security sense, it has applications in a customer services sense. You know this kind of thing: the bank calls you up and says, this is your bank calling, please, where do you live, what is your mother’s name, what’s your dog’s favourite hobby, whatever the case may be. It takes that kind of guesswork out of the equation completely, and it answers the ‘who’ question to much greater levels of confidence or certainty than what traditional or conventional biometrics have been able to do in the past.”

Billions of dollars a year are lost to identity theft globally. aiThenticate hope their new system can help stop at least some of that illegal trade.

Source: http://www.eyethenticate.za.com/
AND
http://www.reuters.com/

Building Brain-Inspired AI Supercomputing System

IBM (NYSE: IBM) and the U.S. Air Force Research Laboratory (AFRL) today announced they are collaborating on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The system’s advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume the energy equivalent of a dim light bulb – a mere 10 watts.
IBM researchers believe the brain-inspired, neural network design of TrueNorth will be far more efficient for pattern recognition and integrated sensory processing than systems powered by conventional chips. AFRL is investigating applications of the system in embedded, mobile, autonomous settings where, today, size, weight and power (SWaP) are key limiting factors. The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this “right-brain” perception capability of the system with the “left-brain” symbol processing capabilities of conventional computer systems. The large scale of the system will enable both “data parallelism,” where multiple data sources can be run in parallel against the same neural network, and “model parallelism,” where independent neural networks form an ensemble that can be run in parallel on the same data.
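The difference between the two modes can be sketched in a few lines of Python. This is a conceptual illustration only, not the TrueNorth API; the `Network` class is a placeholder for a trained neural network.

```python
# Conceptual sketch of the two parallelism modes described above.
# This is NOT the TrueNorth API; `Network` is a stand-in for a trained model.
from concurrent.futures import ThreadPoolExecutor

class Network:
    def __init__(self, name):
        self.name = name
    def infer(self, data):
        return f"{self.name} -> symbols for {data}"

# "Data parallelism": many data sources run against the same network.
shared_net = Network("shared")
sensor_feeds = ["camera", "audio", "text"]
with ThreadPoolExecutor() as pool:
    data_parallel = list(pool.map(shared_net.infer, sensor_feeds))

# "Model parallelism": one data source run through an ensemble of networks.
ensemble = [Network(f"model_{i}") for i in range(3)]
single_feed = "camera"
with ThreadPoolExecutor() as pool:
    model_parallel = list(pool.map(lambda n: n.infer(single_feed), ensemble))

print(data_parallel)
print(model_parallel)
```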


“AFRL was the earliest adopter of TrueNorth for converting data into decisions,” said Daniel S. Goddard, director, information directorate, U.S. Air Force Research Lab. “The new neurosynaptic system will be used to enable new computing capabilities important to AFRL’s mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage.”

“The evolution of the IBM TrueNorth Neurosynaptic System is a solid proof point in our quest to lead the industry in AI hardware innovation,” said Dharmendra S. Modha, IBM Fellow, chief scientist, brain-inspired computing, IBM Research – Almaden. “Over the last six years, IBM has expanded the number of neurons per system from 256 to more than 64 million – an 800 percent annual increase over six years.”

Source: https://www-03.ibm.com/

Artificial Intelligence At The Hospital

Diagnosing cancer is a slow and laborious process. Here researchers at University Hospital Zurich painstakingly make up biopsy slides – up to 50 for each patient – for the pathologist to examine for signs of prostate cancer. A pathologist takes around an hour and a half per patient – a task IBM’s Watson supercomputer is now doing in fractions of a second.

“If the pathologist becomes faster by using such a system, I think it will pay off, because my time is also worth something. If I sit here for one and a half hours looking at slides, screening all these slides, instead of just signing out the two or three positive ones – and taking into account that there may be a 0.1 percent error rate – this will pay off, because then I can do five patients in one and a half hours,” says Dr. Peter Wild, University Hospital Zürich.

The hospital’s archive of biopsy images is being slowly fed into Watson – a process that will take years. But maybe one day pathologists won’t have to view slides through a microscope at all. Diagnosis is not the only area benefiting from AI. The technology is helping this University of Sheffield team design a new drug that could slow down the progress of motor neurone disease. A system built by British start-up BenevolentAI is identifying new areas for further exploration far faster than a person could ever hope to.

“Benevolent basically uses their artificial intelligence system to scan the whole medical and biomedical literature. It’s not really easy for us to stay on top of millions of publications that come out every year. So they can interrogate that information using artificial intelligence and come up with ideas for new drugs that might be used in a completely different disease but may be applicable to motor neurone disease. So that’s the real benefit in their system, the kind of novel ideas that they come up with,” explains Dr. Richard Mead, SITraN, University of Sheffield. BenevolentAI has raised one hundred million dollars in investment to develop its AI system and help revolutionise the pharmaceutical industry.

Source: http://www.reuters.com/

30 Billion Switches On The New IBM Nano-Based Chip

IBM is clearly not buying into the idea that Moore’s Law is dead after it unveiled a tiny new transistor that could revolutionise the design, and size, of future devices. Along with Samsung and Globalfoundries, the tech firm has created a ‘breakthrough’ semiconducting unit made using stacks of nanosheets. The companies say they intend to use the transistors on new five nanometer (nm) chips that feature 30 billion switches on an area the size of a fingernail. When fully developed, the new chip will help with artificial intelligence, the Internet of Things, and cloud computing.

“For business and society to meet the demands of cognitive and cloud computing in the coming years, advancement in semiconductor technology is essential,” said Arvind Krishna, senior vice president, Hybrid Cloud, and director, IBM Research.

IBM has been developing nanometer sheets for the past 10 years and has combined stacks of these tiny sheets using a process called Extreme Ultraviolet (EUV) lithography to build the structure of the transistor.

“Using EUV lithography, the width of the nanosheets can be adjusted continuously, all within a single manufacturing process or chip design,” IBM and the other firms said. This allows the transistors to be adjusted for the specific circuits they are to be used in.

Source: http://www.wired.co.uk/