Computer Reads Body Language

Researchers at Carnegie Mellon University's Robotics Institute have enabled a computer to understand body poses and movements of multiple people from video in real time — including, for the first time, the pose of each individual's hands and fingers. This new method was developed with the help of the Panoptic Studio — a two-story dome embedded with 500 video cameras — and the insights gained from experiments in that facility now make it possible to detect the pose of a group of people using a single camera and a laptop computer.

Yaser Sheikh, associate professor of robotics, said these methods for tracking 2-D human form and motion open up new ways for people and machines to interact with each other and for people to use machines to better understand the world around them. The ability to recognize hand poses, for instance, will make it possible for people to interact with computers in new and more natural ways, such as communicating with computers simply by pointing at things.
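The pointing interaction Sheikh describes could be built on top of the 2-D keypoints such pose estimators produce. Here is a minimal, purely illustrative sketch (the function name, angular tolerance, and target layout are assumptions, not part of the CMU system): it extends the elbow-to-wrist direction and picks the on-screen target that best lines up with it.

```python
import math

def pointing_target(elbow, wrist, targets, max_angle_deg=15.0):
    """Return the name of the target whose direction from the wrist best
    matches the forearm (elbow -> wrist) direction, within a tolerance.
    Keypoints are (x, y) pixel coordinates of the kind a 2-D pose
    estimator emits; `targets` maps names to (x, y) screen positions.
    Illustrative only -- not the CMU group's released code."""
    forearm = math.atan2(wrist[1] - elbow[1], wrist[0] - elbow[0])
    best, best_diff = None, math.radians(max_angle_deg)
    for name, (tx, ty) in targets.items():
        ang = math.atan2(ty - wrist[1], tx - wrist[0])
        # Smallest signed angular difference, wrapped into [-pi, pi].
        diff = abs((ang - forearm + math.pi) % (2 * math.pi) - math.pi)
        if diff < best_diff:
            best, best_diff = name, diff
    return best

# Forearm pointing to the right, toward the "lamp" target.
print(pointing_target((100, 200), (150, 200),
                      {"lamp": (400, 205), "door": (150, 50)}))  # lamp
```

If no target falls within the tolerance cone, the function returns `None`, which a real interface would treat as "no pointing gesture detected."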

Detecting the nuances of nonverbal communication between individuals will allow robots to serve in social spaces, perceiving what people around them are doing, what moods they are in and whether they can be interrupted. A self-driving car could get an early warning that a pedestrian is about to step into the street by monitoring body language. Enabling machines to understand human behavior also could enable new approaches to behavioral diagnosis and rehabilitation, for conditions such as autism, dyslexia and depression.


"We communicate almost as much with the movement of our bodies as we do with our voice," Sheikh said. "But computers are more or less blind to it."

In sports analytics, real-time pose detection will make it possible for computers to track not only the position of each player on the field of play, as is now the case, but to know what players are doing with their arms, legs and heads at each point in time. The methods can be used for live events or applied to existing videos.
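Once per-frame keypoints are available for each player, simple kinematic features fall out directly. As a hedged sketch (the function and the specific points are illustrative, not taken from the released code), here is the interior angle at a joint, such as elbow flexion, computed from three 2-D keypoints:

```python
import math

def joint_angle(a, b, c):
    """Interior angle at keypoint b, in degrees, formed by segments
    b->a and b->c -- e.g. the elbow angle from shoulder (a), elbow (b),
    wrist (c). Inputs are 2-D (x, y) keypoints like those a pose
    estimator emits per frame. Illustrative sketch only."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_angle = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(cos_angle))

# A fully extended arm: shoulder, elbow and wrist collinear.
print(round(joint_angle((0, 0), (1, 0), (2, 0))))  # 180
```

Tracking such angles over time for every player is one way the "what are they doing with their arms, legs and heads" question could be answered quantitatively.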

To encourage more research and applications, the researchers have released their computer code for both multi-person and hand pose estimation. It is being widely used by research groups, and more than 20 commercial groups, including automotive companies, have expressed interest in licensing the technology, Sheikh said.

Sheikh and his colleagues presented reports on their multi-person and hand pose detection methods at CVPR 2017, the Computer Vision and Pattern Recognition Conference, in Honolulu.


Virtual Reality Will Help Autistic Children At Home

Autistic children can quickly lose interest in conventional therapy techniques. But in the 3D cave at Poland's Silesian University of Technology, that's not the case. Scientists led by Piotr Wodarski created this virtual world, similar to combat simulators used to train soldiers.

"A child entering our application activates certain motion sequences. These allow the optical system to measure where the individual segments of the body are and, on that basis, to adjust the application's modules so that virtual objects match the reach of the person's palm and the position of their head," says Piotr Wodarski, researcher at the Silesian University of Technology.

Therapeutic activities, like moving colourful blocks around, are programmed into the system. Professor Marek Gzik says it is helping both children with autism and children with Down's syndrome focus better on their therapy. Autistic patients, in particular, can find human interaction difficult.
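The matching of virtual objects to a tracked palm's reach can be illustrated with a toy check. This is purely hypothetical (the function, coordinates, and radius are invented for illustration, not the Silesian team's implementation): given 3-D positions from the optical tracker, it asks whether an object sits close enough to the palm to be grabbed.

```python
import math

def within_reach(palm, obj, reach_radius):
    """Hypothetical check, not the Silesian system's actual code: is a
    virtual object close enough to the tracked palm position to grab?
    Positions are 3-D (x, y, z) coordinates in metres from the tracker."""
    return math.dist(palm, obj) <= reach_radius

# Object 14 cm from the palm, within a 25 cm reach radius.
print(within_reach((0.0, 1.2, 0.5), (0.1, 1.3, 0.5), 0.25))  # True
```

A real system would run a check like this every frame and reposition out-of-reach objects so the therapy task stays achievable for each child.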
"Getting through to these children can be difficult. But thanks to this technology they open up and we can diagnose their problems properly, in detail, objectively, measuring the mobility in their joints for instance, and then see which methods of rehabilitation are most efficient," says Professor Marek Gzik.

Engineers are tweaking the system to meet children's varying levels of physical and mental development. They hope that children could soon use the program at home with virtual reality headsets.

Socializing: Just A Brain Circuit To Stimulate

A team of Stanford University investigators has linked a particular brain circuit to mammals' tendency to interact socially. Stimulating this circuit — one among millions in the brain — instantly increases a mouse's appetite for getting to know a strange mouse, while inhibiting it shuts down its drive to socialize with the stranger.

The new findings, published June 19 in Cell, may throw light on psychiatric disorders marked by impaired social interaction such as autism, social anxiety, schizophrenia and depression, said the study’s senior author, Karl Deisseroth, MD, PhD, a professor of bioengineering and of psychiatry and behavioral sciences.

"People with autism, for example, often have an outright aversion to social interaction," said Deisseroth, a practicing psychiatrist who sees patients with severe social deficits. They can find socializing — even mere eye contact — painful.

Deisseroth pioneered optogenetics, a brain-exploration technique that involves selectively introducing light-receptor molecules to the surfaces of particular nerve cells in a living animal's brain. The tip of a long, ultra-thin optical fiber, connected to a laser diode at the other end, is then carefully positioned near the circuit in question, so that the photosensitive cells and the circuits they compose can be remotely stimulated or inhibited at the turn of a light switch while the animal remains free to move around in its cage.