The Ultra Smart Community Of The Future

Japan’s largest electronics show, CEATEC, is showcasing its version of our future: a connected world with intelligent robots and cars that know when the driver is falling asleep. This is Omron’s “Onboard Driving Monitoring Sensor,” which checks that the driver isn’t distracted.


“We are developing sensors that help the car judge what state the driver is in with regard to driving: for example, whether the driver has his eyes open and set on the things he should be looking at, or whether he is distracted or looking at a smartphone,” explains Masaki Suwa, Chief Technologist at Omron Corp.
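
Omron has not published how its sensor makes this judgment, but a common open technique for scoring eye openness from camera images is the “eye aspect ratio” (EAR) computed from facial landmarks. A minimal sketch, assuming six landmark points per eye have already been detected by an upstream face tracker:

```python
# Illustrative only: the open "eye aspect ratio" heuristic (Soukupova & Cech,
# 2016), not Omron's published algorithm. Each eye is six (x, y) landmarks,
# ordered from the outer corner around the eyelid.
from math import dist

def eye_aspect_ratio(eye):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it collapses toward 0 as
    the eyelid closes and stays roughly constant while the eye is open."""
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def frame_state(left_eye, right_eye, closed_threshold=0.2):
    """Classify a single frame; a real monitor would smooth over many frames
    before deciding the driver is drowsy or distracted."""
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    return "eyes closing" if ear < closed_threshold else "eyes open"

open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
print(frame_state(open_eye, open_eye))  # -> eyes open (EAR = 0.67)
```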

After 18 years of consumer electronics, CEATEC is changing focus to the Internet of Things and what it calls ‘the ultra-smart community of the future’ – a future where machines take on more important roles. Machines like Panasonic’s CaloRieco: pop in your plate and it knows exactly what you are about to consume.

“By placing freshly cooked food inside the machine, you can measure total calories and the three main nutrients: protein, fat and carbohydrate. By using this machine, you can easily manage your diet,” says Panasonic staff engineer Ryota Sato.
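
Panasonic has not detailed CaloRieco’s internals, but once the grams of the three main nutrients are measured, the calorie total follows from the standard Atwater factors (4 kcal per gram for protein and carbohydrate, 9 kcal per gram for fat). A minimal sketch of that last step:

```python
# The Atwater factors are standard nutrition science; only the framing as
# CaloRieco's final step is an assumption.
ATWATER_KCAL_PER_G = {"protein": 4.0, "fat": 9.0, "carbohydrate": 4.0}

def total_calories(grams: dict) -> float:
    """Total kcal from measured grams of the three main nutrients."""
    return sum(ATWATER_KCAL_PER_G[nutrient] * g for nutrient, g in grams.items())

# Example: a plate with 30 g protein, 20 g fat and 60 g carbohydrate.
print(total_calories({"protein": 30, "fat": 20, "carbohydrate": 60}))  # 540.0
```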

Even playtime will see machines more involved – like Forpheus, the ping-pong-playing robot, here taking on an Olympic bronze medalist and learning with every stroke.

“It wasn’t any different from playing with a human being. The robot kept improving and getting better as we played, and to be honest, I wanted to play with it when it had reached its maximum level, to see how good it is,” reports Jun Mizutani, table tennis bronze medalist at the Rio Olympics.

Computer Reads Body Language

Researchers at Carnegie Mellon University‘s Robotics Institute have enabled a computer to understand body poses and movements of multiple people from video in real time — including, for the first time, the pose of each individual’s hands and fingers. This new method was developed with the help of the Panoptic Studio — a two-story dome embedded with 500 video cameras — and the insights gained from experiments in that facility now make it possible to detect the pose of a group of people using a single camera and a laptop computer.

Yaser Sheikh, associate professor of robotics, said these methods for tracking 2-D human form and motion open up new ways for people and machines to interact with each other and for people to use machines to better understand the world around them. The ability to recognize hand poses, for instance, will make it possible for people to interact with computers in new and more natural ways, such as communicating with computers simply by pointing at things.

Detecting the nuances of nonverbal communication between individuals will allow robots to serve in social spaces, allowing robots to perceive what people around them are doing, what moods they are in and whether they can be interrupted. A self-driving car could get an early warning that a pedestrian is about to step into the street by monitoring body language. Enabling machines to understand human behavior also could enable new approaches to behavioral diagnosis and rehabilitation, for conditions such as autism, dyslexia and depression.


“We communicate almost as much with the movement of our bodies as we do with our voice,” Sheikh said. “But computers are more or less blind to it.”

In sports analytics, real-time pose detection will make it possible for computers to track not only the position of each player on the field of play, as is now the case, but to know what players are doing with their arms, legs and heads at each point in time. The methods can be used for live events or applied to existing videos.

To encourage more research and applications, the researchers have released their computer code for both multi-person and hand pose estimation. It is being widely used by research groups, and more than 20 commercial groups, including automotive companies, have expressed interest in licensing the technology, Sheikh said.
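
The released code, known as OpenPose, can write its detections as one JSON file per frame. A small sketch of consuming that output; the field names follow the format documented for recent versions of the code and may differ in older ones:

```python
# Parses OpenPose-style keypoint JSON; "pose_keypoints_2d" is a flat
# [x, y, confidence, x, y, confidence, ...] list per detected person.
import json

def load_people(json_path):
    """Return, per detected person, a list of (x, y, confidence) keypoints."""
    with open(json_path) as f:
        frame = json.load(f)
    people = []
    for person in frame.get("people", []):
        flat = person.get("pose_keypoints_2d", [])
        people.append([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    return people

for i, keypoints in enumerate(load_people("frame_000000_keypoints.json")):
    visible = [k for k in keypoints if k[2] > 0.1]  # drop low-confidence points
    print(f"person {i}: {len(visible)} confident keypoints")
```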

Sheikh and his colleagues presented their multi-person and hand pose detection methods at CVPR 2017, the Computer Vision and Pattern Recognition conference in Honolulu.

Source: https://www.cmu.edu/

Nanocomputer Packed Into a 50-Nanometer Block

In 1959, renowned physicist Richard Feynman, in his talk “There’s Plenty of Room at the Bottom,” spoke of a future in which tiny machines could perform huge feats. Like many forward-looking concepts, his molecule- and atom-sized world remained for years in the realm of science fiction. And then scientists and other creative thinkers began to realize Feynman’s nanotechnological visions.

In the spirit of Feynman’s insight, and in response to the challenges he issued as a way to inspire scientific and engineering creativity, electrical and computer engineers at UC Santa Barbara (UCSB) have developed a design for a functional nanoscale computer. The concept involves a dense, three-dimensional circuit operating on an unconventional type of logic that could, theoretically, be packed into a block no bigger than 50 nanometers on any side.


“Novel computing paradigms are needed to keep up with the demand for faster, smaller and more energy-efficient devices,” said Gina Adam, postdoctoral researcher at UCSB’s Department of Electrical and Computer Engineering and lead author of the paper “Optimized stateful material implication logic for three dimensional data manipulation,” published in the journal Nano Research. “In a regular computer, data processing and memory storage are separated, which slows down computation. Processing data directly inside a three-dimensional memory structure would allow more data to be stored and processed much faster.”

However, the continuing development and fabrication of progressively smaller components is bringing this virus-sized computing device closer to reality, said Dmitri Strukov, a UCSB professor of electrical and computer engineering. “Our contribution is that we improved the specific features of that logic and designed it so it could be built in three dimensions,” he said.

Key to this development is the use of a logic system called material implication logic, combined with memristors – circuit elements whose resistance depends on the magnitude and direction of the current that has most recently flowed through them. Unlike the conventional logic and circuitry found in our present computers and other devices, in this form of computing, logic operation and information storage happen simultaneously and in the same place. This greatly reduces the need for the components and space typically used to perform logic operations and to move data back and forth between processing and memory storage. The result of the computation is immediately stored in a memory element, which prevents data loss in the event of power outages – a critical function in autonomous systems such as robotics.
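
A minimal sketch of how stateful material-implication (IMPLY) logic computes, using plain Python booleans in place of memristor states. In hardware, each cell is a memristor and the IMPLY step overwrites the target cell in place, which is why computing and storing happen in the same device; because NAND is universal, any logic function can be built this way:

```python
def imply(p: bool, q_cell: list) -> None:
    """Stateful IMPLY: q <- (NOT p) OR q, written back into the target cell,
    just as a memristor's new resistance state replaces its old one."""
    q_cell[0] = (not p) or q_cell[0]

def nand(p: bool, q: bool) -> bool:
    """NAND from one clear plus two IMPLY steps."""
    s = [False]   # clear the work cell (a RESET in memristor terms)
    imply(q, s)   # s = NOT q
    imply(p, s)   # s = p IMPLY (NOT q) = NOT (p AND q)
    return s[0]

for p in (False, True):
    for q in (False, True):
        print(int(p), int(q), "->", int(nand(p, q)))  # prints the NAND table
```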

Source: http://www.news.ucsb.edu/

Smart Textile Senses And Moves Like A Muscle

Researchers at the ARC Centre of Excellence for Electromaterials Science (ACES – Australia) have, for the first time, developed a smart textile from carbon nanotube and spandex fibres that can both sense and move in response to a stimulus, like a muscle or joint.

Lead researcher Dr Javad Foroughi explains that the key difference between this, and previous ACES work, is the textile’s dual functionality.


“We have already made intelligent materials as sensors and integrated them into devices such as a knee sleeve that can be used to monitor the movement of the joint, providing valuable data that can be used to create a personalised training or rehabilitation program for the wearer,” Dr Foroughi said. “Our recent work allowed us to develop smart clothing that simultaneously monitors the wearer’s movements, senses strain, and adjusts the garment to support or correct the movement,” he adds.

The smart textile, which is easily scalable for fabrication in industrial quantities, generates a mechanical work capacity and power output higher than those produced by human muscles. It has many potential applications, ranging from smart clothing to robotics and sensors for lab-on-a-chip devices. The team, having already created the knee sleeve prototype, is now working on using the smart textile as a wearable antenna, as well as in other biomedical applications.
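
ACES has not published the sleeve’s control logic; the sketch below, with invented names and thresholds, only illustrates what dual functionality enables: the same garment that senses joint strain can decide when its actuating fibres should contract.

```python
# All numbers and names here are assumptions for illustration.
SAFE_FLEXION_DEG = 60.0  # knee angle beyond which the sleeve assists (assumed)

def strain_to_angle(fractional_resistance_change: float) -> float:
    """Toy calibration mapping the sensing fibres' signal to a joint angle."""
    return 150.0 * fractional_resistance_change

def control_step(fractional_resistance_change: float) -> str:
    """One sense-decide-act cycle of the hypothetical garment."""
    angle = strain_to_angle(fractional_resistance_change)
    return "contract fibres" if angle > SAFE_FLEXION_DEG else "relax"

print(control_step(0.3))  # 45 degrees -> relax
print(control_step(0.5))  # 75 degrees -> contract fibres
```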

Source: http://www.electromaterials.edu.au/

Smart Threads For Clothing And Robots

Fabrics containing flexible electronics are appearing in many novel products, such as clothes with in-built screens and solar panels. More impressively, these fabrics can act as electronic skins that sense their surroundings, with applications in robotics and prosthetic medicine. King Abdullah University of Science and Technology (KAUST – Saudi Arabia) researchers have now developed smart threads that detect the strength and location of pressures exerted on them.

Most flexible sensors function by detecting changes in the electrical properties of materials in response to pressure, temperature, humidity or the presence of gases. Electronic skins are built up as arrays of several individual sensors, and these arrays currently need complex wiring and data analysis, which makes them too heavy, large or expensive for large-scale production.

Yanlong Tai and Gilles Lubineau from the University’s Division of Physical Science and Engineering have found a different approach. They built their smart threads from cotton threads coated with layers of one of the miracle materials of nanotechnology: single-walled carbon nanotubes (SWCNTs).

The twisted smart threads developed by KAUST researchers can be woven into pressure-sensitive electronic-skin fabrics for use in novel clothing, robots or medical prosthetics.

“Cotton threads are a classic material for fabrics, so they seemed a logical choice,” said Lubineau. “Networks of nanotubes are also known to have piezoresistive properties, meaning their electrical resistance depends on the applied pressure.”

The researchers showed that their threads’ resistance decreased when they were subjected to stronger mechanical strains and, crucially, that the amplitude of the resistance change also depended on the thickness of the SWCNT coating.

These findings led the researchers to their biggest breakthrough: they developed threads of graded thickness with a thick SWCNT layer at one end tapering to a thin layer at the other end. Then, by combining threads in pairs—one with graded thickness and one of uniform thickness—the researchers could not only detect the strength of an applied pressure load, but also the position of the load along the threads.
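
A toy model (not the KAUST calibration) makes the trick concrete: if the uniform thread’s fractional resistance change depends only on the load, while the graded thread’s sensitivity varies linearly along its length, then the ratio of the two signals depends only on position, and the uniform signal alone gives the load. All sensitivity values below are invented for illustration:

```python
S_UNIFORM = 1.0             # sensitivity of the uniform thread (assumed)
S_THICK, S_THIN = 2.0, 0.5  # graded-thread sensitivity at each end (assumed)

def decode(delta_uniform: float, delta_graded: float):
    """Recover (load, position) from the pair's fractional resistance changes."""
    load = delta_uniform / S_UNIFORM      # uniform thread -> load strength
    ratio = delta_graded / delta_uniform  # load cancels; depends on position only
    position = (ratio * S_UNIFORM - S_THICK) / (S_THIN - S_THICK)
    return load, position

# A press of strength 0.3 at 40% along the threads yields these two readings:
delta_graded = 0.3 * (S_THICK + 0.4 * (S_THIN - S_THICK))
print(decode(0.3, delta_graded))  # -> (0.3, 0.4)
```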

“Our system is not the first technology to sense both the strength and position of applied pressures, but our graded structure avoids the need for complicated electrode wirings and heavy data recording and analysis,” said Tai.

The researchers have used their smart threads to build two- and three-dimensional arrays that accurately detect pressures similar to those that real people and robots might be exposed to.

“We hope that electronic skins made from our smart threads could benefit any robot or medical prosthetic in which pressure sensing is important, such as artificial hands,” said Lubineau.

Source: https://discovery.kaust.edu.sa/

Root, the Code-Teaching Robot

In the digital age, computing fuels some of the fastest-growing segments of the economy, making programming an increasingly important part of an American education. But the words “computer literacy” do not exactly excite the imaginations of most grade schoolers. So how to engage young minds with coding? One answer, say researchers at Harvard’s Wyss Institute for Biologically Inspired Engineering, is a robot named Root.


“Right now, coding is taught at a computer keyboard. It’s an abstract process that doesn’t have a relationship to the real world,” said Raphael Cherney, a research associate at the institute. “What Root does is bring coding to life in an extremely fun and approachable way. Kids with no experience in coding can be programming robots in a matter of minutes.”

Fitting somewhere between old-time remote-controlled toy trains and today’s video games, Root is a robot that is programmed using a tablet interface called Square. Root has light and color sensors, bumpers, and a touch surface that enable it to respond to the physical world. In a classroom setting, Root would “drive” along a magnetic dry-erase whiteboard at the front of the class, giving the young programmers an “instant, physical manifestation” of the code, according to Zivthan Dubrovsky, who leads the robotics platform at Wyss.
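
Root’s Square interface is graphical and its programming API is not described in detail here, so every name in the sketch below is invented; it only illustrates the kind of sense-and-respond program a child might compose:

```python
# Hypothetical stand-in for Root: it prints what the robot would physically do.
class ToyRoot:
    def drive(self, cm): print(f"drive {cm} cm")
    def turn(self, degrees): print(f"turn {degrees} degrees")
    def bumper_pressed(self, step): return step == 2  # pretend a wall at step 2

robot = ToyRoot()
# "Drive forward; if the bumper hits something, turn around and keep going."
for step in range(4):
    robot.drive(10)
    if robot.bumper_pressed(step):
        robot.turn(180)
```
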
Source: http://news.harvard.edu/

Robots Replace Human Hands To Pick Fruit

Fruit is delicate, so picking it is still often done by human hand. But this robotic system is smart enough to autonomously sort and move different fruits without damaging them. Developers Cambridge Consultants say it has the cognitive ability to work out how to best handle items that vary in shape.


“Traditional robotic systems typically pick up exactly the same object from exactly the same place and move it to somewhere new, always doing the same action over and over again. But there are applications where robotics aren’t used at the moment where they could be, if you can build this capability of dealing with natural variations and small changes in the environment into the robotic system itself,” says Chris Roberts, head of industrial robotics at Cambridge Consultants.

The robot uses low-cost and easily available hardware, such as Microsoft’s Kinect image sensor, and takes into account not only size and shape but also colour. Its algorithms help it recognise the correct objects and calculate the order in which to pick them. The claw-like gripper uses sensor-packed vacuum tubes that adapt to handle the fruit securely without damaging it.

Roberts explains: “And only applying a vacuum to the ones that gripped, the ones where there’s a seal, we can spread the pressure across the fruit so we’re not bruising it but we still apply a consistent pressure that allows us to pick up heavier objects.”

Similar ‘smart’ robots could transform many industrial and commercial processes, and collaborate better with humans. “When robots come to interact with people, people aren’t as predictable as a production line. So the robot needs to be able to deal with changes in the environment and if someone moves an object from one place to another the robot needs to cope with that,” he adds.
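
The gripper’s hardware interface is not public, so the hypothetical sketch below, with invented sensor and valve names, just illustrates the selective-suction idea Roberts describes: apply vacuum only to the cups that have formed a seal.

```python
# All thresholds and interfaces are assumptions for illustration.
SEAL_PRESSURE_DROP = 5.0  # kPa below ambient taken to indicate a seal (assumed)

def sealed_cups(pressure_kpa: dict[int, float], ambient: float = 101.3):
    """IDs of suction cups whose pressure drop indicates contact with the fruit."""
    return [cup for cup, p in pressure_kpa.items()
            if ambient - p >= SEAL_PRESSURE_DROP]

def grip(pressure_kpa: dict[int, float]) -> dict[int, bool]:
    """Open the vacuum valve only on sealed cups, spreading load across them."""
    active = set(sealed_cups(pressure_kpa))
    return {cup: (cup in active) for cup in pressure_kpa}

# Three of four cups touched the fruit and pulled a partial vacuum:
print(grip({0: 95.0, 1: 94.2, 2: 101.2, 3: 93.8}))  # cup 2 stays off
```
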
Humans co-operating with robots in the workplace might still be some way off. But ever more advanced processing power means it’s closer than ever to being within our grasp.

Source: http://www.reuters.com/

How To Interact With Virtual Reality

An interactive swarm of flying 3D pixels (voxels) developed at Queen’s University’s Human Media Lab (Canada) is set to revolutionize the way people interact with virtual reality. The system, called BitDrones, allows users to explore virtual 3D information by interacting with physical self-levitating building blocks.

Queen’s professor Roel Vertegaal and his students have unveiled the BitDrones system at the ACM Symposium on User Interface Software and Technology in Charlotte, North Carolina. BitDrones is the first step towards creating interactive, self-levitating programmable matter – materials capable of changing their 3D shape in a programmable fashion – using swarms of nano quadcopters. The work highlights many possible applications for the new technology, including real-reality 3D modeling, gaming, molecular modeling, medical imaging, robotics and online information visualization.


“BitDrones brings flying programmable matter, such as that featured in the futuristic Disney movie Big Hero 6, closer to reality,” says Dr. Vertegaal. “It is a first step towards allowing people to interact with virtual 3D objects as real physical objects.”

Dr. Vertegaal and his team at the Human Media Lab created three types of BitDrones, each representing self-levitating displays of distinct resolutions. “PixelDrones” are equipped with one LED and a small dot matrix display. “ShapeDrones” are augmented with a light-weight mesh and a 3D printed geometric frame, and serve as building blocks for complex 3D models. “DisplayDrones” are fitted with a curved flexible high resolution touchscreen, a forward-facing video camera and Android smartphone board.  All three BitDrone types are equipped with reflective markers, allowing them to be individually tracked and positioned in real time via motion capture technology. The system also tracks the user’s hand motion and touch, allowing users to manipulate the voxels in space.
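
The lab’s flight code is not described beyond this, so the sketch below only illustrates the general pattern: the motion-capture system reports each marked drone’s position, and a simple proportional controller nudges every voxel toward its assigned spot in the 3D model. Names and the gain are assumptions:

```python
GAIN = 0.8  # proportional gain: fraction of the position error corrected per tick

def step_velocities(tracked: dict[str, tuple], targets: dict[str, tuple]):
    """One control tick: a velocity command per drone from its mocap position."""
    commands = {}
    for drone_id, pos in tracked.items():
        target = targets[drone_id]
        commands[drone_id] = tuple(GAIN * (t - p) for p, t in zip(pos, target))
    return commands

# One tick: two ShapeDrones being moved into the corner of a larger model.
tracked = {"shape-1": (0.0, 0.0, 1.0), "shape-2": (0.5, 0.1, 1.2)}
targets = {"shape-1": (0.0, 0.0, 1.5), "shape-2": (0.5, 0.5, 1.5)}
print(step_velocities(tracked, targets))
```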

“We call this a Real Reality interface rather than a Virtual Reality interface. This is what distinguishes it from technologies such as Microsoft HoloLens and the Oculus Rift: you can actually touch these pixels, and see them without a headset,” says Dr. Vertegaal.

Source: http://www.hml.queensu.ca/

Electronics You Can Bend And Stretch

Stretchable material technologies have enabled an emerging range of applications that are impossible to achieve using conventional rigid or flexible technologies. Examples can be found in application domains as diverse as robotics and automation, health care and biomedical technologies, and consumer electronics. The ability to deform a functional substrate so that it can be wrapped around a curved or moving surface allows, for example, the creation of an artificial (robot) skin, wearable on-body sensing systems, or even the monitoring of moving machine parts by electronic systems that conform to their environment.

Now a team from Ghent University in Belgium has developed the first optical circuit whose interconnections are not only bendable but also stretchable, thanks to a stretchable material, polydimethylsiloxane (PDMS). Nowadays more and more (sensing) systems are implemented using optical rather than electrical technologies, so it can be expected that, in addition to stretchable electrical interconnections, there will also be a need for stretchable optical interconnections.

The researchers introduce the concept of stretchable optical interconnections based on multimode PDMS waveguides. To adopt a widely applicable and cost-efficient technology, only commercially available materials are used and the waveguides are patterned using a replication technology based on the capillary filling of PDMS microchannels.
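
To see why two commercial PDMS formulations suffice, note that a waveguide only needs its core index to exceed the cladding index; the numerical aperture then follows as NA = sqrt(n_core^2 - n_clad^2). The index values below are assumptions for illustration, not the paper’s data:

```python
# Assumed indices for two PDMS formulations; real values depend on the product.
from math import sqrt, asin, degrees

n_core, n_clad = 1.42, 1.40

na = sqrt(n_core**2 - n_clad**2)         # numerical aperture of the waveguide
theta_max = degrees(asin(min(na, 1.0)))  # acceptance half-angle in air
print(f"NA = {na:.3f}, acceptance half-angle = {theta_max:.1f} degrees")
```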

Source: http://www.opticsinfobase.org/

WildCat, The Robot That Runs Up To 50 mph

The US company Boston Dynamics has presented a new robot called WildCat, with a current top speed of 16 mph (25 km/h), though it is designed to reach 50 mph (80 km/h). Boston Dynamics builds advanced robots with remarkable mobility, agility, dexterity and speed, using sensor-based controls and computation to unlock the capabilities of complex mechanisms.

Another Boston Dynamics robot, LS3, is a rough-terrain robot designed to go anywhere Marines and Soldiers go on foot, helping carry their load. Each LS3 carries up to 400 lbs of gear and enough fuel for a 20-mile mission lasting 24 hours. LS3 automatically follows its leader using computer vision, so it does not need a dedicated driver; it can also travel to designated locations using terrain sensing and GPS. LS3 began a two-year field-testing phase in 2012 and is funded by DARPA and the US Marine Corps.

Organizations worldwide, from DARPA, the US Army, Navy and Marine Corps to Sony Corporation, turn to Boston Dynamics for help creating the most advanced robots on Earth.

Boston Dynamics has assembled a solid team to develop the LS3, including engineers and scientists from Boston Dynamics, Carnegie Mellon, the Jet Propulsion Laboratory, Bell Helicopter, AAI Corporation and Woodward HRT.

Source: http://www.bostondynamics.com/

Nano-Machines Mimic Human Muscle

Nature manufactures numerous machines of the kind known as “molecular machines”. Highly complex assemblies of proteins, they are involved in essential functions of living beings such as the transport of ions, the synthesis of ATP (the “energy molecule”) and cell division. Our muscles are thus controlled by the coordinated movement of thousands of these protein nano-machines, each of which functions individually only over distances of the order of a nanometer. Combined in their thousands, however, such nano-machines amplify this telescopic movement until it reaches our scale, and do so in a perfectly coordinated manner.
For the first time, a CNRS team from the Institut Charles Sadron (France) has synthesized an assembly of thousands of nano-machines capable of producing a coordinated contraction extending up to around ten micrometers – amplifying the movement by a factor of 10,000, like the movement of muscle fibers.

This discovery opens up prospects for a multitude of applications: in robotics, in nanotechnology for information storage, in medicine for the synthesis of artificial muscles, and in the design of other materials incorporating nano-machines endowed with novel mechanical properties.

Source: http://www2.cnrs.fr/en/2117.htm