Posts belonging to Category Artificial Intelligence



Pilotless Cargo Flights By 2025

Pilotless planes would save airlines $35bn (£27bn) a year and could lead to substantial fare cuts – if passengers were able to stomach the idea of remote-controlled flying, according to new research. The savings for carriers could be huge, said investment bank UBS, even though it may take until the middle of the century for passengers to have enough confidence to board a pilotless plane. UBS estimated that pilots cost the industry $31bn a year, plus another $3bn in training, and that fully automated planes would fly more efficiently, saving another $1bn a year in fuel.
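As a quick sanity check of UBS's arithmetic, the three component figures quoted above do add up to the $35bn headline number:

```python
# Rough breakdown of the UBS pilotless-aircraft savings estimate,
# using the component figures quoted above (all in $bn per year).
pilot_salaries = 31   # annual cost of pilots to the industry
training = 3          # annual pilot-training cost
fuel_savings = 1      # extra fuel efficiency from fully automated flight

total_savings = pilot_salaries + training + fuel_savings
print(total_savings)  # 35, matching the $35bn headline figure
```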

Passengers could benefit from a reduction in ticket prices of about a tenth, the report said. “The average percentage of total cost and average benefit that could be passed onto passengers in price reduction for the US airlines is 11%,” it said, although the savings in Europe would be less, at 4% on average but rising to 8% at Ryanair. Aircraft costs and fuel make up a much larger proportion of costs at airlines than pilot salaries, but UBS said profits at some major airlines could double if they switched to pilotless aircraft.

More than half of the 8,000 people UBS surveyed, however, said they would refuse to travel in a pilotless plane, even if fares were cut. “Some 54% of respondents said they were unlikely to take a pilotless flight, while only 17% said they would likely undertake a pilotless flight. Perhaps surprisingly, half of the respondents said that they would not buy the pilotless flight ticket even if it was cheaper,” the report said. It added, however, that younger and more educated respondents were more willing to fly on a pilotless plane. “This bodes well for the technology as the population ages,” it said.

Source: https://www.theguardian.com/

Use The Phone And See 3D Content Without 3D Glasses

RED, the company known for making some truly outstanding high-end cinema cameras, is set to release a smartphone in Q1 of 2018 called the HYDROGEN ONE. RED says that it is a standalone, unlocked and fully-featured smartphone “operating on Android OS that just happens to add a few additional features that shatter the mold of conventional thinking.” Yes, you read that right. This phone will blow your mind, or something – and it will even make phone calls.

In a press release riddled with buzzwords broken up by linking verbs, RED praises their yet-to-be-released smartphone with some serious adjectives. If this press release weren’t hosted on RED‘s own server, we would swear it was satire. Here is a smattering of phrases found in the release.

  • Incredible retina-riveting display
  • Nanotechnology
  • Holographic multi-view content
  • RED Hydrogen 4-View content
  • Assault your senses
  • Proprietary H3O algorithm
  • Multi-dimensional audio

There are two models of the phone, which run at different prices. The Aluminum model will cost $1,195, but anyone worth their salt is going to go for the $1,595 Titanium version. Gotta shed that extra weight, you know?

Those are snippets from just the first three sections, of which there are nine. I get hyping a product, but this reads like a catalog seen in the background of a science-fiction comedy, meant to sound ridiculous – especially in the context of a fictitious universe.

Except that this is real life.

After spending a few minutes removing all the glitter words from this release, it looks like it will be a phone using a display similar to what you get with the Nintendo 3DS, or, as The Verge points out, perhaps better than the flopped Amazon Fire Phone. Essentially, you should be able to use the phone and see 3D content without 3D glasses. Nintendo has already proven that can work, though it can really tire out your eyes. As an owner of three different Nintendo 3DS consoles, I can say that I rarely use the 3D feature because of how it makes my eyes hurt. It’s an odd sensation. It is probably why Nintendo has released a new handheld that has the same power as the 3DS but drops the 3D feature altogether.

Anyway, back to the HYDROGEN ONE, RED says that it will work in tandem with their cameras as a user interface and monitor. It will also display what RED is calling “holographic content,” which isn’t well-described by RED in this release. We can assume it is some sort of mixed-dimensional view that makes certain parts of a video or image stand out over the others.

Source: http://www.red.com/
AND
http://www.imaging-resource.com/

Nanoweapons Against North Korea

Unless you’re working in the field, you have probably never heard of U.S. nanoweapons. This is intentional. The United States, Russia and China are each spending billions of dollars per year developing nanoweapons, but all development is secret. Even after Pravda.ru’s June 6, 2016 headline, “US nano weapon killed Venezuela’s Hugo Chavez, scientists say,” the U.S. offered no response.

Earlier this year, May 5, 2017, North Korea claimed the CIA plotted to kill Kim Jong Un using a radioactive nano poison, similar to the nanoweapon Venezuelan scientists claim the U.S. used to assassinate former Venezuelan President Hugo Chavez. All major media covered North Korea’s claim. These accusations are substantial, but are they true? Let’s address this question.

Unfortunately, until earlier this year, nanoweapons received little media attention. However, in March 2017 that changed with the publication of the book Nanoweapons: A Growing Threat to Humanity (2017, Potomac Books), which inspired two articles. On March 9, 2017, American Security Today published “Nanoweapons: A Growing Threat to Humanity – Louis A. Del Monte,” and on March 17, 2017, CNBC published “Mini-nukes and mosquito-like robot weapons being primed for future warfare.” Suddenly, the genie was out of the bottle. The CNBC article became the most popular on their website for two days following its publication and garnered 6.5K shares. Still, compared to other classes of military weapons, nanoweapons remain obscure. In fact, most people have never even heard the term. If you find this surprising, recall that most people had never heard of stealth aircraft until their highly publicized use during the 1991 Gulf war. Today, almost everyone who reads the news knows about stealth aircraft. This may become the case with nanoweapons, but for now, they remain obscure to the public.

Given their relative obscurity, we’ll start by defining nanoweapons. A nanoweapon is any military weapon that exploits the power of nanotechnology. This, of course, raises another question: What is nanotechnology? According to the United States National Nanotechnology Initiative’s website, nano.gov, “Nanotechnology is science, engineering, and technology conducted at the nanoscale, which is about 1 to 100 nanometers.” To put this in simple terms, the diameter of a typical human hair equals 100,000 nanometers. This means nanoscale structures are invisible to the naked eye and even under an optical microscope.
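To make the scale comparison above concrete, a couple of lines of arithmetic (using only the figures quoted from nano.gov) show how far below a hair’s width the nanoscale sits:

```python
# Nanoscale in perspective: a typical human hair is ~100,000 nm across,
# while nanotechnology operates at roughly 1-100 nm (figures from nano.gov).
hair_diameter_nm = 100_000
nanoscale_max_nm = 100

ratio = hair_diameter_nm / nanoscale_max_nm
print(ratio)  # 1000.0 -- even the largest nanoscale feature is
              # a thousand times smaller than a hair's width
```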

Source: http://www.huffingtonpost.com/

Artificial Intelligence Checks Identity Using Any Smartphone

aiThenticate checks your identity using simulated human cognition. The company says its system goes way beyond conventional facial recognition systems or the biometrics of passwords, fingerprints and eye scans.


“We need to have a much greater level of certainty about who somebody actually is. In order to answer that question, we appealed to deep science, deep learning, to develop an AI method, an artificial intelligence method – in other words, to replicate or to mimic or to simulate the way that we as humans intuitively and instinctively go about recognizing somebody’s head. It is very different to the conventional, traditional way of face recognition or fingerprint recognition, and for that reason it really represents the next generation of authentication technologies or methods,” says aiThenticate CEO André Immelman.

aiDX uses 16 distinct tests to recognise someone – including eye prints – using a standard, off-the-shelf smartphone to access encrypted data stored in the cloud. It can operate in active mode, asking the user to take a simple selfie, or discreetly in the background.

André Immelman explains: “It has applications in a security sense, it has applications in a customer-services sense – you know the kind of thing where the bank calls you up and says: this is your bank calling, please tell us where you live, what is your mother’s name, what’s your dog’s favourite hobby, whatever the case may be. It takes that kind of guesswork out of the equation completely, and it answers the ‘who’ question to much greater levels of confidence or certainty than what traditional or conventional biometrics have been able to do in the past.”

Billions of dollars a year are lost to identity theft globally. aiThenticate hopes its new system can help stop at least some of that illegal trade.

Source: http://www.eyethenticate.za.com/
AND
http://www.reuters.com/

Building Brain-Inspired AI Supercomputing System

IBM (NYSE: IBM) and the U.S. Air Force Research Laboratory (AFRL) today announced they are collaborating on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The system’s advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume only the energy equivalent of a dim light bulb – a mere 10 watts.

IBM researchers believe the brain-inspired, neural network design of TrueNorth will be far more efficient for pattern recognition and integrated sensory processing than systems powered by conventional chips. AFRL is investigating applications of the system in embedded, mobile, autonomous settings where, today, size, weight and power (SWaP) are key limiting factors. The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this “right-brain” perception capability of the system with the “left-brain” symbol processing capabilities of conventional computer systems. The large scale of the system will enable both “data parallelism,” where multiple data sources can be run in parallel against the same neural network, and “model parallelism,” where independent neural networks form an ensemble that can be run in parallel on the same data.


“AFRL was the earliest adopter of TrueNorth for converting data into decisions,” said Daniel S. Goddard, director, information directorate, U.S. Air Force Research Lab. “The new neurosynaptic system will be used to enable new computing capabilities important to AFRL’s mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage.”

“The evolution of the IBM TrueNorth Neurosynaptic System is a solid proof point in our quest to lead the industry in AI hardware innovation,” said Dharmendra S. Modha, IBM Fellow, chief scientist, brain-inspired computing, IBM Research – Almaden. “Over the last six years, IBM has expanded the number of neurons per system from 256 to more than 64 million – an 800 percent annual increase over six years.”
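Out of curiosity, the compounding behind that 256-to-64-million jump can be checked in a few lines (the start and end figures are from the quote above; "annual increase" is read here as the year-over-year multiplication factor):

```python
# Compound annual growth of TrueNorth neurons per system,
# from 256 to 64 million over six years (figures from the quote above).
start, end, years = 256, 64_000_000, 6

growth_factor = (end / start) ** (1 / years)
print(round(growth_factor, 1))  # ~7.9x per year, i.e. roughly an
                                # eight-fold annual scale-up
```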

Source: https://www-03.ibm.com/

Artificial Intelligence At The Hospital

Diagnosing cancer is a slow and laborious process. Here, researchers at University Hospital Zurich painstakingly make up biopsy slides – up to 50 for each patient – for the pathologist to examine for signs of prostate cancer. A pathologist takes around an hour and a half per patient – a task IBM’s Watson supercomputer is now doing in fractions of a second.

“If the pathologist becomes faster by using such a system, I think it will pay off, because my time is also worth something. If I sit here one and a half hours looking at slides, screening all these slides, instead of just signing out the two or three positive ones – and taking into account that there may be a 0.1 percent error rate – this will pay off, because at the end I can do five patients in one and a half hours,” says Dr. Peter Wild, University Hospital Zürich.

The hospital’s archive of biopsy images is being slowly fed into Watson – a process that will take years. But maybe one day pathologists won’t have to view slides through a microscope at all. Diagnosis is not the only area benefiting from AI. The technology is helping this University of Sheffield team design a new drug that could slow down the progress of motor neurone disease. A system built by British start-up BenevolentAI is identifying new areas for further exploration far faster than a person could ever hope to.

“Benevolent basically uses their artificial intelligence system to scan the whole medical and biomedical literature. It’s not really easy for us to stay on top of the millions of publications that come out every year. So they can interrogate that information using artificial intelligence and come up with ideas for new drugs that might be used in a completely different disease, but may be applicable to motor neurone disease. So that’s the real benefit in their system, the kind of novel ideas that they come up with,” explains Dr. Richard Mead, SITraN, University of Sheffield. BenevolentAI has raised one hundred million dollars in investment to develop its AI system and help revolutionise the pharmaceutical industry.

Source: http://www.reuters.com/

30 Billion Switches Onto The New IBM Nano-based Chip

IBM is clearly not buying into the idea that Moore’s Law is dead after it unveiled a tiny new transistor that could revolutionise the design, and size, of future devices. Along with Samsung and Globalfoundries, the tech firm has created a ‘breakthrough’ semiconducting unit made using stacks of nanosheets. The companies say they intend to use the transistors on new five nanometer (nm) chips that feature 30 billion switches on an area the size of a fingernail. When fully developed, the new chip will help with artificial intelligence, the Internet of Things, and cloud computing.
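For a sense of what "30 billion switches on an area the size of a fingernail" means, here is a back-of-the-envelope density estimate; the 1 cm² fingernail area is an assumption for illustration, since the articles give no exact die size:

```python
# Back-of-the-envelope transistor density for the 5 nm chip:
# 30 billion switches on a fingernail-sized area (~1 cm^2 assumed here;
# the source does not state an exact die size).
transistors = 30_000_000_000
area_mm2 = 100  # 1 cm^2 = 100 mm^2, a rough fingernail-sized estimate

density_per_mm2 = transistors / area_mm2
print(f"{density_per_mm2:,.0f}")  # 300,000,000 transistors per mm^2
```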

“For business and society to meet the demands of cognitive and cloud computing in the coming years, advancement in semiconductor technology is essential,” said Arvind Krishna, senior vice president, Hybrid Cloud, and director, IBM Research.

IBM has been developing nanometer-scale sheets for the past 10 years, and combined stacks of these tiny sheets using a process called Extreme Ultraviolet (EUV) lithography to build the structure of the transistor.

“Using EUV lithography, the width of the nanosheets can be adjusted continuously, all within a single manufacturing process or chip design,” IBM and the other firms said. This allows the transistors to be adjusted for the specific circuits they are to be used in.

Source: http://www.wired.co.uk/

Startup Promises Immortality Through AI, Nanotechnology, and Cloning

One of the things humans have plotted for centuries is escaping death, with little to show for it – until now. One startup, Humai, has a plan to make immortality a reality. The CEO, Josh Bocanegra, says that when the time comes and all the necessary advancements are in place, we’ll be able to freeze your brain, create a new, artificial body, repair any damage to your brain, and transfer it into your new body. This process could then be repeated in perpetuity. HUMAI stands for Human Resurrection through Artificial Intelligence. The technology to accomplish this isn’t here now, but it is on the horizon. Bocanegra says they’ll reach this Promethean feat within 30 years; 2045 is currently their target date. So how do they plan to do it?

“We’re using artificial intelligence and nanotechnology to store data of conversational styles, behavioral patterns, thought processes and information about how your body functions from the inside-out. This data will be coded into multiple sensor technologies, which will be built into an artificial body with the brain of a deceased human,” explains the website.

Source: https://www.facebook.com/humaitech/
AND
http://bigthink.com/

Legally Blind People Can See With A New Kind Of Glasses

A Canadian company based in Toronto has succeeded in building a kind of Google Glass that is able to give back full sight to legally blind people. The eSight is an augmented reality headset that houses a high-speed, high-definition camera that captures everything the user is looking at.



Algorithms enhance the video feed and display it on two OLED screens in front of the user’s eyes. Full color video images are clearly seen by the eSight user with unprecedented visual clarity and virtually no lag. With eSight’s patented Bioptic Tilt capability, users can adjust the device to the precise position that, for them, presents the best view of the video while maximizing side peripheral vision. This ensures a user’s balance and prevents nausea – common problems with other immersive technologies. A blind individual can use both of their hands while they use eSight to see. It is lightweight, worn comfortably around the eyes and designed for various environments and for use throughout the day.

eSight is a comprehensive, customized medical device that can replace the many single-task assistive devices that are currently available but do not provide actual sight (e.g. white canes, magnifying devices, service animals, Braille machines, CCTV scanners, text-to-speech software). It allows a user to instantly auto-focus from short-range vision (reading a book or text on a smartphone) to mid-range vision (seeing faces or watching TV) to long-range vision (looking down a hallway or outside a window). It is the only device for the legally blind that enables mobility without causing issues of imbalance or nausea (common with other immersive options). A legally blind individual can use eSight not just to see while sitting down but while being independently mobile (e.g. walking, exercising, commuting, travelling).

According to The Wall Street Journal, the company is taking advantage of recent improvements in technology from VR headsets and smartphones that have trickled down to improve the latest version of the eSight. So far, the company has sold roughly a thousand units, but at $10,000 apiece, they’re not cheap (and most insurers apparently don’t cover the product), although eSight’s chief executive Brian Mech notes to the WSJ that getting devices to users is “a battle we are starting to wage.”

Source: https://www.esighteyewear.com/

Super-material Bends, Shapes And Focuses Sound Waves

These tiny 3D-printed bricks could one day allow people to create their own acoustics. That’s the plan of scientists from the universities of Bristol and Sussex. They’ve invented a metamaterial which bends and manipulates sound in any way the user wants. It’s helped scientists create what they call a ‘sonic alphabet‘.


“We have discovered that you just need 16 bricks to make any type of sound that you can imagine. You can shape the sound with just 16 of them, just like you create any word with just 26 letters,” says Dr. Gianluca Memoli, researcher at the Interact Lab at University of Sussex.

DIY kits like this, full of batches of the 16 aural letters, could help users create a sound library, or even help people in the same car to hear separate things.

“With our device, what you can have is you can strap a static piece on top of existing speakers and they can direct sound in two different directions without any overlap. So the passengers can hear completely different information from the driver,” explains Professor Sriram Subramanian of the Interact Lab at University of Sussex. This technology is more than five years away, but smaller versions could be used to direct medical ultrasound devices far sooner. “In a year we could have a sleeve that we can put on top of already existing products in the market and make them just a little bit better. For example, we can have a sleeve that goes on top of ultrasound pain-relieving devices that are used for therapeutic pain,” he adds.
Researchers say spatial sound modulators will one day allow us to perform audible tasks previously unheard of.

Source: http://www.sussex.ac.uk/

Stephen Hawking Warns: Only 100 Years Left For Humankind Before Extinction

It’s no secret that physicist Stephen Hawking thinks humans are running out of time on planet Earth.

In a new BBC documentary, Hawking will test his theory that humankind must colonize another planet or perish in the next 100 years. The documentary, Stephen Hawking: Expedition New Earth, will air this summer as part of BBC’s Tomorrow’s World season and will showcase that Hawking’s aspiration “isn’t as fantastical as it sounds,” according to BBC.

For years, Hawking has warned that humankind faces a slew of threats ranging from climate change to destruction from nuclear war and genetically engineered viruses.

While things look bleak, there is some hope, according to Hawking. Humans must set their sights on another planet or perish on Earth.

“We must also continue to go into space for the future of humanity,” Hawking said during a 2016 speech at Britain’s Oxford University Union. In the past, Hawking has suggested that humankind might not survive another 1,000 years “without escaping beyond our fragile planet.” The BBC documentary hints at an adjusted timeframe for colonization, which many may see in their lifetime.

Artificial Intelligence Tracks Everybody In The Crowd In Real Time

Artificial Intelligence that can pick you out in a crowd and then track your every move. Japanese firm Hitachi‘s new imaging system locks on to at least 100 different characteristics of an individual … including gender, age, hair style, clothes, and mannerisms. Hitachi says it provides real-time tracking and monitoring of crowded areas.


“Until now, we needed a lot of security guards and people to review security camera footage. We developed this AI software in the hopes it would help them do just that,” says Tomokazu Murakami, Hitachi researcher.

The system can help spot a suspicious individual or find a missing child, the makers say. So, an eyewitness could provide a limited description, with the AI software quickly scanning its database for a match.
“In Japan, the demand for such technology is increasing because of the Tokyo 2020 Olympics, but for us, we’re developing it in a way so that it can be utilized in many different places, such as train stations, stadiums, and even shopping malls,” comments Tomokazu Murakami.

High-speed tracking of individuals such as this will undoubtedly have its critics. But as Japan prepares to host the 2020 Olympics, Hitachi insists its system can contribute to public safety and security.
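Hitachi has not published how its matcher works; as a purely illustrative sketch, the eyewitness scenario described above – filtering stored attribute profiles against a partial description – could look like this (all attribute names and records here are invented for the example):

```python
# Illustrative only: match a partial eyewitness description against
# stored attribute profiles. The attribute set and records are invented;
# Hitachi's actual system and its ~100 characteristics are not public.
people = [
    {"id": 1, "gender": "male", "age_band": "20s", "hair": "short", "top": "red"},
    {"id": 2, "gender": "female", "age_band": "30s", "hair": "long", "top": "blue"},
    {"id": 3, "gender": "male", "age_band": "20s", "hair": "short", "top": "blue"},
]

def match(description, records):
    """Return records consistent with every attribute the witness gave."""
    return [r for r in records
            if all(r.get(k) == v for k, v in description.items())]

witness = {"gender": "male", "hair": "short"}  # limited description
print([r["id"] for r in match(witness, people)])  # [1, 3]
```

A real system would score similarity over visual features rather than require exact matches, but the narrowing-down logic is the same.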

Source: http://uk.reuters.com/