Computer Reads Body Language

Researchers at Carnegie Mellon University’s Robotics Institute have enabled a computer to understand body poses and movements of multiple people from video in real time — including, for the first time, the pose of each individual’s hands and fingers. This new method was developed with the help of the Panoptic Studio — a two-story dome embedded with 500 video cameras — and the insights gained from experiments in that facility now make it possible to detect the pose of a group of people using a single camera and a laptop computer.

Yaser Sheikh, associate professor of robotics, said these methods for tracking 2-D human form and motion open up new ways for people and machines to interact with each other and for people to use machines to better understand the world around them. The ability to recognize hand poses, for instance, will make it possible for people to interact with computers in new and more natural ways, such as communicating with computers simply by pointing at things.

Detecting the nuances of nonverbal communication between individuals will allow robots to serve in social spaces, perceiving what people around them are doing, what moods they are in and whether they can be interrupted. A self-driving car could get an early warning that a pedestrian is about to step into the street by monitoring body language. Enabling machines to understand human behavior also could enable new approaches to behavioral diagnosis and rehabilitation for conditions such as autism, dyslexia and depression.


“We communicate almost as much with the movement of our bodies as we do with our voice,” Sheikh said. “But computers are more or less blind to it.”

In sports analytics, real-time pose detection will make it possible for computers to track not only the position of each player on the field of play, as is now the case, but to know what players are doing with their arms, legs and heads at each point in time. The methods can be used for live events or applied to existing videos.

To encourage more research and applications, the researchers have released their computer code for both multi-person and hand pose estimation. It is being widely used by research groups, and more than 20 commercial groups, including automotive companies, have expressed interest in licensing the technology, Sheikh said.
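Pose-estimation tools of this kind typically emit, for each video frame, a list of detected people with their 2-D keypoints. The sketch below shows how a downstream application might consume such output; the JSON layout (a flat `[x, y, confidence, ...]` array per person, as OpenPose-style tools commonly produce) and all sample values are assumptions for illustration, not details from the article.

```python
import json

# Hypothetical per-frame output from a multi-person 2-D pose estimator,
# in a flat [x, y, confidence, x, y, confidence, ...] layout.
# All coordinate and confidence values are invented sample data.
frame = json.loads("""
{
  "people": [
    {"pose_keypoints_2d": [320.0, 180.0, 0.92, 310.0, 240.0, 0.88, 260.0, 250.0, 0.71]},
    {"pose_keypoints_2d": [540.0, 200.0, 0.95, 545.0, 260.0, 0.90, 590.0, 265.0, 0.40]}
  ]
}
""")

def keypoints(person, min_conf=0.5):
    """Group the flat array into (x, y) points, dropping low-confidence detections."""
    flat = person["pose_keypoints_2d"]
    return [(flat[i], flat[i + 1])
            for i in range(0, len(flat), 3)
            if flat[i + 2] >= min_conf]

people = frame["people"]
print(len(people))                          # number of people detected in the frame
print([len(keypoints(p)) for p in people])  # confident keypoints per person
```

An application like the sports-analytics one described above would run this kind of extraction on every frame and track how each person's keypoints move over time.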

Sheikh and his colleagues presented reports on their multi-person and hand pose detection methods at CVPR 2017, the Computer Vision and Pattern Recognition conference in Honolulu.


3D Printed Hair

3-D printers typically produce hard plastic objects, but researchers at Carnegie Mellon University (CMU) have found a way to produce hair-like strands, fibers and bristles using a common, low-cost printer. The technique for producing 3-D-printed hair is similar to — and inspired by — the way that gossamer plastic strands are extruded when a person uses a hot glue gun.

“You just squirt a little bit of material and pull away,” said Gierad Laput, a Ph.D. student in Carnegie Mellon’s Human-Computer Interaction Institute (HCII). “It’s a very simple idea, really.” The plastic hair is produced strand by strand, so the process isn’t fast — it takes about 20-25 minutes to generate hair on 10 square millimeters. But it requires no special hardware, just a set of parameters that can be added to a 3-D print job.
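In printer-command terms, each strand amounts to a tiny extrusion followed by a rapid travel move that draws the molten plastic out, glue-gun style. The sketch below generates such G-code commands; the coordinates, extrusion amounts and feed rates are invented placeholders, not the researchers' actual parameters, and a real print would need tuning for the specific printer and filament.

```python
def hair_strand_gcode(x, y, z=0.2, length=8.0, blob_e=0.05,
                      travel_feed=6000, extrude_feed=300):
    """Emit G-code for one hair strand: deposit a small blob of molten
    plastic, then pull the nozzle away quickly so the plastic stretches
    into a thin strand. All numbers are illustrative placeholders."""
    return [
        f"G0 X{x:.2f} Y{y:.2f} Z{z:.2f} F{travel_feed}",  # move to the strand's root
        f"G1 E{blob_e:.3f} F{extrude_feed}",              # squirt a little material
        f"G0 Z{z + length:.2f} F{travel_feed}",           # pull away fast, drawing a strand
    ]

# A small patch of hair on a 3 x 3 grid of roots, 1 mm apart
commands = []
for i in range(3):
    for j in range(3):
        commands += hair_strand_gcode(10 + i, 10 + j)

print(len(commands))  # 9 strands x 3 commands each = 27 lines
```

This is the sense in which the technique is "just a set of parameters": the strand-drawing moves can be appended to an ordinary print job without any hardware changes.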

The resulting hair can be cut, curled with hot air, or braided. Dense, close-cropped strands can form a brush.

The researchers developed their technique using a fused deposition modeling (FDM) printer. FDM printers are inexpensive; the one Laput and his colleagues use cost $300.


Mimicking Nature’s Tiniest Patterns

Our world is full of patterns, from the twist of a DNA molecule to the spiral of the Milky Way. New research from Carnegie Mellon (CMU) chemists has revealed that tiny, synthetic gold nanoparticles exhibit some of nature’s most intricate patterns.

Unveiling the kaleidoscope of these patterns was a Herculean task, and it marks the first time that a nanoparticle of this size has been crystallized and its structure mapped out atom by atom.

The X-ray crystallographic structure of the gold nanoparticle: gold atoms in magenta, sulfur in yellow, carbon in gray and hydrogen in white.

“As you broadly think about different research areas or even our everyday lives, these kinds of patterns, these hierarchical patterns, are universal,” said Rongchao Jin, associate professor of chemistry. “Our universe is really beautiful, and when you see this kind of information in something as small as a 133-atom nanoparticle and as big as the Milky Way, it’s really amazing.”

Gold nanoparticles, which can vary in size from 1 to 100 nanometers, are a promising technology with applications in a wide range of fields, including catalysis, electronics, materials science and health care. But in order to use gold nanoparticles in practical applications, scientists must first understand the tiny particles’ structure.

“Structure essentially determines the particle’s properties, so without knowing the structure, you wouldn’t be able to understand the properties and you wouldn’t be able to functionalize them for specific applications,” said Jin, an expert in creating atomically precise gold nanoparticles.

With this latest research, Jin and his colleagues, including graduate student Chenjie Zeng, have solved the structure of a nanoparticle, Au133, made up of 133 gold atoms and 52 surface-protecting molecules — the biggest nanoparticle structure ever resolved with X-ray crystallography.

The researchers report their work in the March 20 issue of Science Advances.

Self-Assembled Nanofibers Mimic Living Cells’ Fibers

Researchers from Carnegie Mellon University have developed a novel method for creating self-assembled protein/polymer nanostructures that are reminiscent of fibers found in living cells. The work offers a promising new way to fabricate materials for drug delivery and tissue engineering applications.

The building blocks of the fibers are a few modified green fluorescent protein (GFP) molecules linked together using a process called click chemistry. An ordinary GFP molecule does not normally bind with other GFP molecules to form fibers.

“We have demonstrated that, by adding flexible linkers to protein molecules, we can form completely new types of aggregates. These aggregates can act as a structural material to which you can attach different payloads, such as drugs. In nature, this protein isn’t close to being a structural material,” said Tomasz Kowalewski, professor of chemistry in Carnegie Mellon’s Mellon College of Science.

But when Carnegie Mellon graduate student Saadyah Averick, working under the guidance of Krzysztof Matyjaszewski, professor of chemistry, modified the GFP molecules and attached PEO-dialkyne linkers to them, the researchers noticed something strange — the GFP molecules appeared to self-assemble into long fibers. Importantly, the fibers disassembled after being exposed to sound waves and then reassembled within a few days. Systems that exhibit this type of reversible fibrous self-assembly have long been sought by scientists for use in applications such as tissue engineering, drug delivery, nanoreactors and imaging.

“This was purely curiosity-driven and serendipity-driven work,” Kowalewski said. “But where controlled polymerization and organic chemistry meet biology, interesting things can happen.”

The findings were published in the July 28 issue of Angewandte Chemie International Edition.

WildCat, the Robot That Runs Up to 50 mph

The US company Boston Dynamics has presented a new robot called WildCat, with a current top speed of 16 mph (25 km/h), though it is designed to reach 50 mph (80 km/h). Boston Dynamics builds advanced robots with remarkable mobility, agility, dexterity and speed, using sensor-based controls and computation to unlock the capabilities of complex mechanisms.

Another robot, LS3, is a rough-terrain machine designed to go anywhere Marines and Soldiers go on foot, helping carry their load. Each LS3 carries up to 400 lbs of gear and enough fuel for a 20-mile mission lasting 24 hours. LS3 automatically follows its leader using computer vision, so it does not need a dedicated driver. It also travels to designated locations using terrain sensing and GPS. LS3 began a two-year field-testing phase in 2012 and is funded by DARPA and the US Marine Corps.

Organizations worldwide, from DARPA, the US Army, Navy and Marine Corps to Sony Corporation, turn to Boston Dynamics for help creating the most advanced robots on Earth.
Boston Dynamics has assembled a solid team to develop the LS3, including engineers and scientists from Boston Dynamics, Carnegie Mellon, the Jet Propulsion Laboratory, Bell Helicopter, AAI Corporation and Woodward HRT.