Use The Phone And See 3D Content Without 3D Glasses

RED, the company known for making some truly outstanding high-end cinema cameras, is set to release a smartphone in Q1 of 2018 called the HYDROGEN ONE. RED says that it is a standalone, unlocked and fully-featured smartphone “operating on Android OS that just happens to add a few additional features that shatter the mold of conventional thinking.” Yes, you read that right. This phone will blow your mind, or something – and it will even make phone calls.

In a press release riddled with buzzwords broken up by linking verbs, RED praises its yet-to-be-released smartphone with some serious adjectives. If we had seen this press release anywhere other than on RED’s actual server, we would swear it was satire. Here is a smattering of phrases found in the release.

Incredible retina-riveting display
Holographic multi-view content
RED Hydrogen 4-View content
Assault your senses
Proprietary H3O algorithm
Multi-dimensional audio

There are two models of the phone, which run at different prices. The Aluminum model will cost $1,195, but anyone worth their salt is going to go for the $1,595 Titanium version. Gotta shed that extra weight, you know?

Those are snippets from just the first three sections, of which there are nine. I get hyping a product, but this reads like a catalog seen in the background of a science-fiction comedy, meant to sound ridiculous – especially in the context of a fictitious universe.

Except that this is real life.

After spending a few minutes removing all the glitter words from this release, it looks like it will be a phone using a display similar to what you get with the Nintendo 3DS, or what The Verge points out as perhaps better than the flopped Amazon Fire Phone. Essentially, you should be able to use the phone and see 3D content without 3D glasses. Nintendo has already proven that can work, though it can really tire out your eyes. As an owner of three different Nintendo 3DS consoles, I can say that I rarely use the 3D feature because of how it makes my eyes hurt. It’s an odd sensation. It is probably why Nintendo has released a new handheld that has the same power as the 3DS but drops the 3D feature altogether.

Anyway, back to the HYDROGEN ONE: RED says that it will work in tandem with their cameras as a user interface and monitor. It will also display what RED is calling “holographic content,” which isn’t well described by RED in this release. We can assume it is some sort of mixed-dimensional view that makes certain parts of a video or image stand out over the others.


Algorithm helps patients to choose a new nose

Having plastic surgery of any kind is a major decision, but knowing how you’ll look in advance of going under the knife can help dispel some of the anxiety. Surgeons have been using imaging software for some time to help patients visualise the results of prospective work. But researchers from Belgium have developed software they say can help surgeons deliver even better results, while increasing the interaction with their patients.

At the Meaningful Interactions Lab (mintlab), a research group of the University of Leuven (Belgium) and research institute IMEC, they’ve collaborated with a consortium of research partners and companies to develop a 3D tool to accurately simulate the outcome of nose surgery. The tool combines facial-modelling statistics with morphing algorithms. The ‘average nose’ is used as a baseline, computed from the characteristics of a few hundred faces in their database.


The new algorithm delivers more realistic results for rhinoplasty, commonly called a nose job. First, a 3D scan of the patient’s face is captured using off-the-shelf components. Once the scan is imported into their software, the system creates the most appropriate-looking nose, using hundreds of previously scanned faces as a baseline.

“We combined this with an algorithm that was based on faces that were scanned – a lot of faces were scanned – so that the algorithm could calculate what a realistic nose could look like. So in Photoshop you could very easily make like a Pinocchio nose and that’s really unrealistic, but with this software we’ve managed to keep the boundaries to what’s really realistic”, says Arne Jansen, researcher at the mintlab.
The computer-created nose can still be adjusted to the patient’s liking. The team says it also has important applications for designing prosthetic replacements for patients whose noses have been amputated, often due to cancer. It uses facial characteristics to ‘predict’ a perfectly fitting whole new nose – even though there is no existing nasal structure to base it on. Key landmarks on the face are pinpointed, such as the cheekbones, tip of the nose and corners of the eyes, to help it design a well-suited nose. “And the software can look at the same characteristics of the face and use that to calculate a nose that is fitting for this particular face. And so what the software won’t do is make a general nose; make one nose for all – it will make a characteristic nose that you can still alter towards the needs of the patients”, he adds.
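The statistical idea behind such a tool can be sketched in a few lines: compute the ‘average nose’ over a database of scanned faces, then fit a map from facial landmarks to nose-shape deviations from that average, so each face gets a characteristic nose rather than one nose for all. The data, dimensions and simple linear model below are illustrative stand-ins, not mintlab’s actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database: 300 scanned faces, each reduced to flattened
# 3D landmark coordinates (cheekbones, eye corners, ...) plus the
# corresponding nose-shape coordinates.
n_faces, n_face_dims, n_nose_dims = 300, 12, 9
face_landmarks = rng.normal(size=(n_faces, n_face_dims))
nose_shapes = rng.normal(size=(n_faces, n_nose_dims))

# Baseline: the 'average nose' across the whole database.
average_nose = nose_shapes.mean(axis=0)

# Fit a linear map from facial landmarks to nose-shape *deviations*
# from the average, keeping predictions inside realistic bounds.
centered = nose_shapes - average_nose
weights, *_ = np.linalg.lstsq(face_landmarks, centered, rcond=None)

def predict_nose(landmarks):
    """Predict a fitting nose for a new face from its landmarks."""
    return average_nose + landmarks @ weights

new_face = rng.normal(size=n_face_dims)
predicted = predict_nose(new_face)  # one characteristic nose shape
```

A face with perfectly average landmarks would get exactly the average nose; distinctive landmarks pull the prediction away from the baseline in a way the database has seen before.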


Algorithm Perfectly Replicates Your Handwriting

In a world increasingly dominated by the QWERTY keyboard, computer scientists from University College London (UCL) have developed software which may spark the comeback of the handwritten word by analysing the handwriting of any individual and accurately replicating it.


The scientists have created ‘My Text in Your Handwriting’, a programme which semi-automatically examines a sample of a person’s handwriting, which can be as little as one paragraph, and generates new text saying whatever the user wishes, as if the author had handwritten it themselves. “Our software has lots of valuable applications. Stroke victims, for example, may be able to formulate letters without the concern of illegibility, or someone sending flowers as a gift could include a handwritten note without even going into the florist. It could also be used in comic books where a piece of handwritten text can be translated into different languages without losing the author’s original style”, said first author Dr Tom Haines (UCL Computer Science).

Co-author, Dr Oisin Mac Aodha (UCL Computer Science), adds: “Up until now, the only way to produce computer-generated text that resembles a specific person’s handwriting would be to use a relevant font. The problem with such fonts is that it is often clear that the text has not been penned by hand, which loses the character and personal touch of a handwritten piece of text. What we’ve developed removes this problem and so could be used in a wide variety of commercial and personal circumstances.”

Published in ACM Transactions on Graphics, the machine learning algorithm is built around glyphs – a specific instance of a character.
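The glyph-based idea can be illustrated with a toy sketch: harvest several instances of each character from a writing sample, then compose new text by sampling among them so that repeated letters vary the way real handwriting does. The glyph bank and identifiers below are invented placeholders, not UCL’s actual data structures:

```python
import random

# Hypothetical glyph bank: for each character, several traced instances
# (in the real system, pen strokes) harvested from one paragraph of the
# author's handwriting.
glyph_bank = {
    "a": ["a_v1", "a_v2", "a_v3"],
    "c": ["c_v1", "c_v2"],
    "t": ["t_v1", "t_v2"],
}

def render(text, seed=42):
    """Compose new 'handwritten' text by sampling a stored glyph
    instance for each character, so repeated letters vary naturally."""
    rng = random.Random(seed)
    return [rng.choice(glyph_bank[ch]) for ch in text if ch in glyph_bank]

rendered = render("cat")  # one sampled glyph instance per character
```

A computer font would pick the same glyph every time; sampling among observed instances is what preserves the personal, slightly irregular character of handwriting.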


Robots Can Speak Like Real Humans

Generating speech from a piece of text is a common and important task undertaken by computers, but it’s pretty rare that the result could be mistaken for ordinary speech. A new technique from researchers at Alphabet’s DeepMind  (Google) takes a completely different approach, producing speech and even music that sounds eerily like the real thing.


Early systems used a large library of the parts of speech (phonemes and morphemes) and a large ruleset that described all the ways letters combined to produce those sounds. The pieces were joined, or concatenated, creating functional speech synthesis that can handle most words, albeit with unconvincing cadence and tone. Later systems parameterized the generation of sound, making a library of speech fragments unnecessary. More compact — but often less effective.
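A toy sketch of that concatenative approach, with invented placeholder arrays standing in for a library of recorded phoneme clips:

```python
import numpy as np

# Hypothetical phoneme library: each entry is a prerecorded waveform
# fragment. Real systems store thousands of these; the ramps below are
# placeholders, not actual recordings.
phoneme_library = {
    "HH": np.linspace(0.0, 0.2, 800),
    "AY": np.linspace(0.2, 0.0, 1600),
}

def synthesize(phonemes):
    """Join stored fragments end to end. Cadence and tone suffer because
    each fragment was recorded in one fixed context."""
    return np.concatenate([phoneme_library[p] for p in phonemes])

speech = synthesize(["HH", "AY"])  # a crude "hi"
```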

WaveNet, as the system is called, takes things deeper. It simulates the sound of speech at as low a level as possible: one sample at a time. That means building the waveform from scratch, 16,000 samples per second.

Each dot is a separately calculated sample; the aggregate is the digital waveform.

You already know from the headline, but if you don’t, you probably would have guessed what makes this possible: neural networks. In this case, the researchers fed a ton of ordinary recorded speech to a convolutional neural network, which created a complex set of rules that determined which tones follow other tones in every common context of speech.

Each sample is determined not just by the sample before it, but the thousands of samples that came before it. They all feed into the neural network’s algorithm; it knows that certain tones or samples will almost always follow each other, and certain others will almost never. People don’t speak in square waves, for instance.
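The autoregressive idea can be sketched with a toy model: each new sample is drawn from a distribution conditioned on the samples that came before it. A real WaveNet uses a deep dilated convolutional network over thousands of past samples; the simple weighted sum and random weights below are illustrative stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
receptive_field = 64  # how far back the toy model looks
weights = rng.normal(scale=0.1, size=receptive_field)

def next_sample(context):
    """Draw the next sample conditioned on recent samples."""
    pad = np.zeros(receptive_field - len(context))
    mean = np.concatenate([pad, context]) @ weights
    # tanh keeps the waveform in the valid audio range (-1, 1)
    return float(np.tanh(mean + rng.normal(scale=0.05)))

waveform = [0.0]
for _ in range(16000):  # one second of audio at 16 kHz
    waveform.append(next_sample(np.array(waveform[-receptive_field:])))
```

The key property the sketch shares with WaveNet is the feedback loop: every generated sample is appended to the history that conditions the next one, so the model can learn that certain tones almost always, or almost never, follow each other.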


Give Deaf People A New Voice

A smart device that translates sign language while being worn on the wrist could bridge the communications gap between the deaf and those who don’t know sign language, says a Texas A&M University biomedical engineering researcher who is developing the technology. The wearable technology combines motion sensors and the measurement of electrical activity generated by muscles to interpret hand gestures, explains Roozbeh Jafari, associate professor in the university’s Department of Biomedical Engineering and researcher at the Center for Remote Health Technologies and Systems. Although the device is still in its prototype stage, it can already recognize 40 American Sign Language words with nearly 96 percent accuracy, notes Jafari, who presented his research at the Institute of Electrical and Electronics Engineers (IEEE) 12th Annual Body Sensor Networks Conference this past June. The technology was among the top award winners in the Texas Instruments Innovation Challenge this past summer.

sign language

“We decode the muscle activities we are capturing from the wrist. Some of it is coming from the fingers indirectly, because if I happen to keep my fist like this versus this, the muscle activation is going to be a little different”, said Jafari. It’s those differences that present the researchers with their biggest challenge. Fine-tuning the device to process and translate the different signals accurately, in real time, requires sophisticated algorithms. The other problem is that no two people sign exactly alike, which is why they designed the system to learn from its user. “When you wear the system for the first time the system operates with some level of accuracy. But as you start using the system more often, the system learns from your behavior and it will adapt its own learning models to fit you”, he added.
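The adapt-to-the-user idea can be sketched as a nearest-neighbour classifier that folds confirmed examples back into its model. The words and sensor feature values below are made up for illustration; the real device fuses motion and muscle-activity signals into much richer feature vectors:

```python
import numpy as np

# Starting templates: one generic feature vector per known sign.
templates = {
    "hello": np.array([0.9, 0.1, 0.4]),
    "thanks": np.array([0.2, 0.8, 0.5]),
}
examples = [(vec, word) for word, vec in templates.items()]

def classify(features):
    """Label a gesture by its nearest stored example."""
    return min(examples, key=lambda e: np.linalg.norm(e[0] - features))[1]

def confirm(features, word):
    """User confirms the true label; the system learns their style
    by adding the example to its model."""
    examples.append((np.asarray(features, dtype=float), word))

reading = np.array([0.85, 0.15, 0.42])
word = classify(reading)   # nearest template wins
confirm(reading, "hello")  # adapt to this particular wearer
```

Because every confirmed gesture becomes a new reference point, the classifier gradually shifts from the generic templates toward the wearer’s personal signing style, matching the behaviour Jafari describes.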

Going forward, the team hopes to miniaturize the device so it can be worn on a user’s wrist like a watch, and to program it to decipher complete sentences rather than just individual words. The researchers also want to incorporate a synthetic voice speaker, an upgrade that could potentially give the 70 million deaf people around the world… a new voice.


Children Learn To Write By Teaching Robots

The CoWriter Project aims at exploring how a robot can help children with the acquisition of handwriting, with an original approach: the children are the teachers who help the robot to write better! This paradigm, known as learning by teaching, has several powerful effects: it boosts the children’s self-esteem (which is especially important for children with handwriting difficulties), it gets them to practise handwriting without even noticing, and it engages them in a particular interaction with the robot called the protégé effect: because they unconsciously feel that they are somehow responsible if the robot does not succeed in improving its writing skills, they commit to the interaction, and make particular efforts to figure out what is difficult for the robot, thus developing their metacognitive skills and reflecting on their own errors. Séverin Lemaignan, one of the authors of the study, said the research was based on this recognized principle in pedagogy.

The prototype system, called CoWriter, was developed by researchers at the Ecole Polytechnique Fédérale de Lausanne (EPFL) (Switzerland). A humanoid robot, designed to be likeable and interact with humans, is presented with a word that the child spells out in plastic letters. The robot recognizes the word and tries to write it, with its attempt appearing on a tablet. The child then identifies and corrects the robot’s errors by re-writing the word or specific letters.

Children teach a robot

“The robot is facing difficulties to write. So the child as a teacher tends to commit itself to help the robot. And this is what we call in psychology ‘the protégé effect’; the child will try to protect this robot and help him to progress. And it’s a pretty well known fact that if the robot fails and keeps on failing and not improve its handwriting, the child will feel responsible for that. And by just relying on this effect we can really engage the children into a sustained interaction with the robot,” explained Lemaignan.
The team hopes their research will be the basis for an innovative use for robotics which addresses a widespread challenge in education.


Face Recognition Approaches One Hundred Percent Accuracy

A research team at the Chinese University of Hong Kong, led by Professor Xiaoou Tang, announced 99.15% face recognition accuracy achieved in Labeled Faces in the Wild (LFW) database (a database of face photographs designed for studying the problem of unconstrained face recognition).
The technology developed by Xiaoou Tang’s team is called DeepID, and it is more accurate than human visual identification.

face recognition
LFW is the most widely used face recognition benchmark. Experimental results show that, given only the central region of the face, naked-eye human recognition accuracy on LFW is 97.52%.

The three face recognition algorithms developed by Xiaoou Tang’s team now occupy the top three spots in LFW recognition accuracy, followed by Facebook’s DeepFace.

Building on these technological breakthroughs, his lab has produced a complete facial image processing system (SDK), including face detection, facial keypoint alignment, face recognition, expression recognition, gender recognition and age estimation.

Xiaoou Tang plans to provide face recognition technology for free to Android, iOS and Windows Phone developers; with the help of this FreeFace-SDK, the developer can develop a variety of applications based on face recognition on the phone.
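The verification step at the core of systems like DeepID can be sketched simply: each photo is mapped by a deep network to a feature vector (an identity embedding), and two photos are judged the same person when their embeddings are close. The embeddings and threshold below are made up for illustration; a real SDK produces them from trained network layers:

```python
import numpy as np

def cosine_similarity(a, b):
    """How aligned two identity embeddings are, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a, emb_b, threshold=0.8):
    """Verification: accept the pair if the embeddings are close enough."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy embeddings: two photos of one person, one photo of another.
alice_photo_1 = np.array([0.9, 0.1, 0.3])
alice_photo_2 = np.array([0.85, 0.15, 0.35])
bob_photo = np.array([0.1, 0.9, 0.2])

match = same_person(alice_photo_1, alice_photo_2)     # same identity
mismatch = same_person(alice_photo_1, bob_photo)       # different identity
```

The benchmark accuracy quoted above is essentially the fraction of LFW photo pairs for which this accept/reject decision is correct.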


Cancer: The Promises Of Nanotechnology

In preclinical trials, nanomaterials have produced safer and more effective imaging and drug delivery, and they have enabled researchers to precisely target tumors while sparing patients’ healthy tissue. In addition, nanotechnology has significantly improved the sensitivity of magnetic resonance imaging, making hard-to-find cancers easier to detect.

“A broad spectrum of innovative vehicles is being developed by the cancer nanomedicine community for targeted drug delivery and imaging systems,” said Dr. Ho, author of a new research review published online by the journal Science Translational Medicine. Ho is co-director of the Jane and Jerry Weintraub Center for Reconstructive Biotechnology at the UCLA School of Dentistry. “It is important to address regulatory issues, overcome manufacturing challenges and outline a strategy for implementing nanomedicine therapies — both individually and in combination — to help achieve widespread acceptance for the clinical use of cancer nanomedicine.”

Ho’s new report features multiple studies in which the use of nanoparticles was translated from the preclinical to the clinical stage. In several of the highlighted studies, nanotechnology-modified drugs showed improvements over conventional, drug-only approaches because of their ability to overcome drug resistance (which occurs when tumors reject the drug and stop responding to treatment) and to reduce tumors more effectively, among other advantages.

The report also describes how algorithm-based methods that rapidly determine the best drug combinations, and computation-based methods that draw information from databases of drug interactions and side effects to help rationally design drug combinations, could potentially be paired with nanomedicine to deliver multiple nano-therapies together, further improving the potency and safety of cancer treatments.
“This research review by Dr. Ho and his colleagues lays the groundwork for nanomedicine to become a widely accepted cancer therapy,” said Dr. No-Hee Park, dean of the UCLA School of Dentistry. “This blueprint for navigating the process from bench research to mainstream clinical use is invaluable to the nanotechnology community.”


Better Than GOOGLE!

Aurora Clark, an associate professor of chemistry at Washington State University, has adapted Google’s PageRank software to create moleculaRnetworks, which scientists can use to determine molecular shapes and chemical reactions without the expense, logistics and occasional danger of lab experiments. “What’s most cool about this work is we can take technology from a totally separate realm of science, computer science, and apply it to understanding our natural world,” says Clark.
What Aurora Clark probably did not know is that the algorithm used successfully by Google’s founders is based mostly on a free formula developed by an Italian professor of mathematics from the University of Padua. Now professor Massimo Marchiori has opened a new search engine on the web, with specific features that will surpass the accuracy of Google’s search engine. At this moment, the new search engine address is

Google’s PageRank software, developed by its founders at Stanford University, uses an algorithm—a set of mathematical formulas—to measure and prioritize the relevance of various Web pages to a user’s search.
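The core of that algorithm fits in a short power-iteration loop: each page repeatedly passes its score along its outgoing links, with a damping factor modelling a surfer who occasionally jumps to a random page. This is a textbook sketch, not Google’s production code or Clark’s moleculaRnetworks:

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9):
    """Power-iteration PageRank on an adjacency matrix,
    where adj[i, j] = 1 means node i links to node j."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    # Each node spreads its score evenly over its outgoing links;
    # nodes with no links spread theirs uniformly over all nodes.
    transition = np.where(out_deg > 0, adj / np.maximum(out_deg, 1), 1.0 / n)
    rank = np.full(n, 1.0 / n)
    while True:
        new = (1 - damping) / n + damping * (rank @ transition)
        if np.abs(new - rank).sum() < tol:
            return new
        rank = new

# Three pages: 0 and 1 both link to 2; 2 links back to 0.
adj = np.array([[0, 0, 1],
                [0, 0, 1],
                [1, 0, 0]])
ranks = pagerank(adj)  # page 2 collects the most rank; page 1, with
                       # no incoming links, gets only the damping floor
```

In Clark’s adaptation, the nodes are molecules (or atoms) and the links are interactions such as hydrogen bonds, so the same relevance ranking highlights structurally important molecules instead of important web pages.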