Posts belonging to Category virtual reality



How Brain Waves Can Control VR Video Games

Virtual reality is still so new that the best way for us to interact within it is not yet clear. One startup wants you to use your head, literally: it’s tracking brain waves and using the result to control VR video games.

Boston-based startup Neurable is focused on deciphering brain activity to determine a person’s intention, particularly in virtual and augmented reality. The company uses dry electrodes to record brain activity via electroencephalography (EEG); then software analyzes the signal and determines the action that should occur.
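The pipeline described above (record EEG, analyze the signal, trigger an action) can be caricatured in a few lines. This is only an illustrative sketch, not Neurable's actual algorithm: real systems train classifiers over many channels, while here a single synthetic epoch is scored by its mean amplitude in a post-stimulus window, a crude evoked-response detector with invented thresholds.

```python
# Sketch of EEG intention detection: score one epoch (a list of amplitude
# samples) by its mean value in a post-stimulus window. The window and
# threshold values are illustrative assumptions.

def detect_intention(epoch, window=(30, 50), threshold=2.0):
    """Return True if mean amplitude in the window exceeds the threshold."""
    start, end = window
    segment = epoch[start:end]
    mean_amp = sum(segment) / len(segment)
    return mean_amp > threshold

# Synthetic epochs: a flat "no response" trace, and one with a deflection.
baseline = [0.1] * 100
response = [0.1] * 30 + [3.0] * 20 + [0.1] * 50

print(detect_intention(baseline))  # False: no deflection in the window
print(detect_intention(response))  # True: looks like an evoked response
```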


“You don’t really have to do anything,” says cofounder and CEO Ramses Alcaide, who developed the technology as a graduate student at the University of Michigan. “It’s a subconscious response, which is really cool.”

Neurable, which raised $2 million in venture funding late last year, is still in the early stages: its demo hardware looks like a bunch of electrodes attached to straps that span a user’s head, worn along with an HTC Vive virtual-reality headset. Unlike the headset, Neurable’s contraption is wireless—it sends data to a computer via Bluetooth. The startup expects to offer software tools for game development later this year, and it isn’t planning to build its own hardware; rather, Neurable hopes companies will be making headsets with sensors to support its technology in the next several years.

Source: https://www.technologyreview.com/
AND
http://neurable.com/

Virtual Images that Blend In And Interact With The Real World

Avegant, a Silicon Valley startup that sells a pair of headphones equipped with a VR-like portable screen, is breaking into augmented reality. The company today announced that it’s developed a new type of headset technology powered by a so-called light field display.


The research prototype, which Avegant eventually plans on turning into a consumer product, is based on the company’s previous work with its Glyph projector. That device was a visor of sorts that floats a virtual movie screen in front of your eyes, and developing it gave Avegant insight into how to build an AR headset of its own.

Like Microsoft’s HoloLens and the supposed prototype from secretive AR startup Magic Leap, Avegant’s new headset creates virtual images that blend in and interact with the real-world environment. In a demo, the company’s wired prototype proved superior in key ways to the developer version of the HoloLens. Avegant attributes this not to the power of its tethered PC, but to the device’s light field display — a technology Magic Leap also claims to have developed but has never shown off to the public.

The demo I experienced featured a tour of a virtual Solar System, an immersion within an ocean environment, and a conversation with a virtual life-sized human being standing in the same room. To be fair, Avegant was using a tethered and bulky headset that wasn’t all that comfortable, while the HoloLens developer version is a refined wireless device. Yet with that said, Avegant’s prototype managed to expand the field of view, so you’re looking through a window more the size of a Moleskine notebook instead of a pack of playing cards. The images it produced also felt sharper, richer, and more realistic.

In the Solar System demo, I was able to observe a satellite orbiting an Earth no larger than a bocce ball and identify the Great Red Spot on Jupiter. Avegant constructed its demo to show off how these objects could exist at different focal lengths in a fixed environment — in this case a converted conference room at the company’s Belmont, California office. So I was able to stand behind the Sun and squint until the star went out of focus in one corner of my vision and a virtual Saturn and its rings became crystal clear in the distance.

Source: http://www.theverge.com/

Artificial Intelligence Writes Code By Looting

Artificial intelligence (AI) has taught itself to create its own encryption and produced its own universal ‘language’. Now it’s writing its own code using techniques similar to those of human programmers. A neural network called DeepCoder, developed by Microsoft and University of Cambridge computer scientists, has learnt how to write programs without prior knowledge of code. DeepCoder solved basic challenges of the kind set by programming competitions. This kind of approach could make it much easier for people to build simple programs without knowing how to write code.


“All of a sudden people could be so much more productive,” says Armando Solar-Lezama at the Massachusetts Institute of Technology, who was not involved in the work. “They could build systems that it [would be] impossible to build before.”

Ultimately, the approach could allow non-coders to simply describe an idea for a program and let the system build it, says Marc Brockschmidt, one of DeepCoder’s creators at Microsoft Research in Cambridge, UK. DeepCoder uses a technique called program synthesis: creating new programs by piecing together lines of code taken from existing software — just as a programmer might. Given a list of inputs and outputs for each code fragment, DeepCoder learned which pieces of code were needed to achieve the desired result overall.
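The core idea of program synthesis from input/output examples can be shown with a toy brute-force search. This sketch is illustrative only: the tiny DSL of list primitives and the examples are invented, and where DeepCoder trains a neural network to predict which primitives are likely needed (pruning the search), this version simply enumerates every composition.

```python
from itertools import product

# Toy program synthesis: compose code fragments (list primitives) and keep
# the first composition whose behaviour matches every input/output example.
PRIMITIVES = {
    "sort": sorted,
    "reverse": lambda xs: list(reversed(xs)),
    "drop_first": lambda xs: xs[1:],
    "double": lambda xs: [2 * x for x in xs],
}

def synthesize(examples, max_len=3):
    """Search compositions of primitives (up to max_len) matching all examples."""
    for length in range(1, max_len + 1):
        for names in product(PRIMITIVES, repeat=length):
            def run(xs, names=names):
                for name in names:
                    xs = PRIMITIVES[name](xs)
                return xs
            if all(run(inp) == out for inp, out in examples):
                return list(names)
    return None  # no program in the DSL explains the examples

# Examples consistent with: sort the list, then double each element.
examples = [([3, 1, 2], [2, 4, 6]), ([5, 4], [8, 10])]
print(synthesize(examples))  # → ['sort', 'double']
```

A learned model's job in DeepCoder is precisely to avoid the exponential blow-up of this enumeration by ranking which primitives to try first.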

Source: https://www.newscientist.com/

No More Speakers For Television

Sony has created the world’s first television which can emit sound from the screen itself, removing the need for separate speakers. Unveiled at CES 2017 in Las Vegas, the A1 BRAVIA OLED series features a unique “Acoustic Surface“, which sees the sound being emitted from the whole of the screen.


Sony creates a 3D soundscape by pairing the objects you’re viewing on the screen with the sound they are making. For example, if you were watching a movie in which a car drives across the screen, the sound would follow the movement of the car, adding a whole new level of immersion to your home entertainment experience. The screen transmits sound through two transducers located on the back of the screen, which vibrate the area of the screen required to transmit the sound. Despite the BRAVIA screen working as both a screen and a speaker, it remains impressively streamlined. The display also comes with clean cable management to keep wires out of view.

The technology could eventually expand to include LED screens, but Sony doesn’t have any plans to do this just yet, as the multiple layers that make up an LED screen make it harder to retain picture and audio quality.
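Sony hasn’t published how the A1 maps on-screen position to its two transducers, but the general idea of making sound follow an object can be sketched with ordinary equal-power panning: the object’s horizontal position sets the relative drive levels of the left and right transducers. Everything below is an illustrative assumption, not Sony’s algorithm.

```python
import math

def transducer_gains(x, screen_width):
    """Map an object's x-position (0 = left edge) to (left, right) gains."""
    pan = x / screen_width        # 0.0 (left edge) .. 1.0 (right edge)
    angle = pan * math.pi / 2     # map the pan onto a quarter circle
    # Equal-power law: left^2 + right^2 == 1 at every position.
    return math.cos(angle), math.sin(angle)

left, right = transducer_gains(0, 1920)
print(round(left, 3), round(right, 3))   # 1.0 0.0 : sound fully on the left
left, right = transducer_gains(960, 1920)
print(round(left, 3), round(right, 3))   # 0.707 0.707 : centred
```

As the car in the example drives across the screen, re-evaluating these gains each frame makes the audio image track it.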

By truly fusing together the image and sound, Sony’s new BRAVIA TV gives a heightened TV viewing experience without you having to set up a complex system of surround sound speakers.

Source: http://www.mirror.co.uk/

Apple Testing Augmented Reality ‘Smart Glasses’

As part of its effort to expand further into wearable devices, Apple is working on a set of smart glasses, reports Bloomberg. Citing sources familiar with Apple‘s plans, the site says the smart glasses would connect wirelessly to the iPhone, much like the Apple Watch, and would display “images and other information” to the wearer. Apple has contacted potential suppliers about its glasses project and has ordered “small quantities” of near-eye displays, suggesting the project is in the exploratory prototyping phase of development. If work on the glasses progresses, they could be released in 2018.


“AR can be really great,” said Tim Cook, CEO of Apple, in July. “We have been and continue to invest a lot in this. We’re high on AR in the long run.”

Apple‘s glasses sound similar to Google Glass, the head-mounted display that Google first introduced in 2013. Google Glass used augmented reality and voice commands to allow users to do things like check the weather, make phone calls, and capture photographs. Apple‘s product could be similar in functionality. The glasses may be Apple‘s first hardware product targeted directly at AR, one of the sources said. Cook has beefed up AR capabilities through acquisitions. In 2013, Apple bought PrimeSense, which developed the motion-sensing technology in Microsoft Corp.’s Kinect gaming system. Purchases of software startups in the field, Metaio Inc. and Flyby Media Inc., followed in 2015 and 2016.

Google Glass was highly criticized because of privacy concerns, and as a result, it never really caught on with consumers. Google eventually stopped developing Google Glass in January of 2015. It is not clear how Apple would overcome the privacy and safety issues that Google faced, nor if the project will progress, but Apple CEO Tim Cook has expressed Apple‘s deep interest in augmented reality multiple times over the last few months, suggesting something big is in the works.

Past rumors have also indicated Apple is exploring a number of virtual and augmented reality projects, including a full VR headset. Apple has a full team dedicated to AR and VR research and how the technologies can be incorporated into future Apple products. Cook recently said that he believes augmented reality would be more useful and interesting to people than virtual reality.

Source: http://www.macrumors.com/

Virtual Hug

Skin care giant Nivea has allowed a mother and son to have a ‘virtual hug’ from two different countries thanks to its ‘Second Skin Project’ involving nanotechnology. However, all is not as it seems.


A video, created with Leo Burnett Madrid to highlight the importance of the human touch, initially claims that nanotechnology helped the company recreate the sensation of touch from thousands of miles apart. A mother and son based in Uruguay and Spain were selected for the experiment, with Beiersdorf-owned Nivea using a supposedly ground-breaking fabric said to simulate human skin. According to the video, the material is woven with a number of sensors and can retain electrical impulses. As a result, when one person touches it, the other can feel the touch from thousands of miles away.

However, at the end of the video the project is revealed to be not real at all, but rather a shrewd marketing campaign for the importance of the human touch and, in effect, Nivea’s skin cream. Watch the video, and get your tissues at the ready, to see it unfold.

Source: https://globalcosmeticsnews.com/

Google Glass Used For Arteries Surgery

Doctors in Poland used a virtual reality system combining a custom mobile application and Google Glass to clear a blocked coronary artery, one of the first uses of the technology to assist with surgery. The imaging system was used with a patient who had chronic total occlusion, a complete blockage of the artery, which doctors said is difficult to clear using standard catheter-based percutaneous coronary intervention, or PCI.

The system provides three-dimensional reconstructions of the artery and includes a hands-free voice recognition system allowing for zoom and changes of the images. The head-mounted display system allows doctors to capture images and video while also interacting with the environment around them. In patients with chronic total occlusion, the standard procedure is not always successful, at least partially because of difficulty visualizing the blockage with conventional coronary tomography angiography, or CTA, imaging.


“This case demonstrates the novel application of wearable devices for display of CTA data sets in the catheterization laboratory that can be used for better planning and guidance of interventional procedures, and provides proof of concept that wearable devices can improve operator comfort and procedure efficiency in interventional cardiology,” Dr. Maksymilian Opolski, of the Department of Interventional Cardiology and Angiology at the Institute of Cardiology in Warsaw (Poland), said in a press release.

Source: http://www.onlinecjc.ca/
AND
http://www.upi.com/

How To Interact With Virtual Reality

An interactive swarm of flying 3D pixels (voxels) developed at Queen’s University’s Human Media Lab (Canada) is set to revolutionize the way people interact with virtual reality. The system, called BitDrones, allows users to explore virtual 3D information by interacting with physical self-levitating building blocks.

Queen’s professor Roel Vertegaal and his students have unveiled the BitDrones system at the ACM Symposium on User Interface Software and Technology in Charlotte, North Carolina. BitDrones is the first step towards creating interactive, self-levitating programmable matter — materials capable of changing their 3D shape in a programmable fashion — using swarms of nano quadcopters. The work highlights many possible applications for the new technology, including real-reality 3D modeling, gaming, molecular modeling, medical imaging, robotics and online information visualization.


“BitDrones brings flying programmable matter, such as featured in the futuristic Disney movie Big Hero 6, closer to reality,” says Dr. Vertegaal. “It is a first step towards allowing people to interact with virtual 3D objects as real physical objects.”

Dr. Vertegaal and his team at the Human Media Lab created three types of BitDrones, each representing self-levitating displays of distinct resolutions. “PixelDrones” are equipped with one LED and a small dot matrix display. “ShapeDrones” are augmented with a lightweight mesh and a 3D-printed geometric frame, and serve as building blocks for complex 3D models. “DisplayDrones” are fitted with a curved flexible high-resolution touchscreen, a forward-facing video camera and an Android smartphone board. All three BitDrone types are equipped with reflective markers, allowing them to be individually tracked and positioned in real time via motion capture technology. The system also tracks the user’s hand motion and touch, allowing users to manipulate the voxels in space.
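The interaction loop described above — motion capture reports 3D positions for each drone and for the user’s hand, and a nearby voxel can be grabbed and repositioned — can be sketched as follows. The function names, drone labels and grab radius are illustrative assumptions, not the Human Media Lab’s code.

```python
import math

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def update_targets(drones, hand, grab_radius=0.15):
    """Return new target positions: a drone within grab range follows the hand."""
    targets = {}
    for name, pos in drones.items():
        targets[name] = hand if dist(pos, hand) <= grab_radius else pos
    return targets

# Positions (metres) as a motion-capture system might report them.
drones = {"shape1": (0.0, 0.0, 1.0), "pixel1": (1.0, 0.5, 1.2)}
hand = (0.05, 0.0, 1.0)  # hand close to shape1, far from pixel1

print(update_targets(drones, hand))
# shape1 is grabbed and retargeted to the hand; pixel1 holds position
```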

“We call this a Real Reality interface rather than a Virtual Reality interface. This is what distinguishes it from technologies such as Microsoft HoloLens and the Oculus Rift: you can actually touch these pixels, and see them without a headset,” says Dr. Vertegaal.

Source: http://www.hml.queensu.ca/

3D Hologram From Pop-Up Floating Display

Moving holograms like those used in 3D science fiction movies such as Avatar and Elysium have to date only been seen in their full glory by viewers wearing special glasses.
Now researchers at Swinburne University of Technology (Australia) have shown the capacity of a technique using graphene oxide and complex laser physics to create a pop-up floating display without the need for 3D glasses. Graphene is a two-dimensional carbon material with extraordinary electronic and optical properties that offers a new material platform for next-generation nanophotonic devices.

Through a photonic process involving no heat or change in temperature, the researchers were able to create nanoscale pixels of refractive index — the measure of the bending of light as it passes through a medium — in reduced graphene oxide. This is crucial for the subsequent recording of the individual pixels for holograms and hence naked-eye 3D viewing.
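As a textbook aside (not from the Swinburne paper itself), the refractive index mentioned above, and the bending it produces at an interface between two media, are given by the standard relations:

```latex
n = \frac{c}{v}, \qquad n_1 \sin\theta_1 = n_2 \sin\theta_2
```

where $c$ is the speed of light in vacuum, $v$ its speed in the medium, and the second relation (Snell’s law) relates the angles of incidence and refraction at an interface between media of indices $n_1$ and $n_2$. Writing pixels of different refractive index therefore writes pixels that bend and delay light differently, which is what makes the hologram possible.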
“If you can change the refractive index you can create lots of optical effects,” said Professor Min Gu, Director of Swinburne’s Centre for Micro-Photonics. “Our technique can be leveraged to achieve compact and versatile optical components for controlling light. We can create the wide-angle display necessary for mobile phones and tablets.”

Source: http://www.nature.com/

Brain Waves Command Drones Flight

Researchers demonstrate technology that allows unmanned aircraft to be controlled from the ground using only signals from the pilot’s brain.
An impressive example of mind control — a drone in the air, flown using the power of human thought. Portuguese tech company Tekever uses a special EEG cap to turn the pilot’s brainwaves into commands for the drone. CEO Pedro Sinogas explains: “The brain approach that Tekever is using is based on collecting the signals from the brain, then a set of algorithms process all the brain signals and transform them into actual controls to multiple devices,” says Sinogas.
While the pilot controls the drone’s flight path, Tekever‘s researchers determine the mission before take-off. Tekever‘s Chief Operations Officer Ricardo Mendes is keen to apply the technology to commercial aviation, although this could take a while. “What we want to do is to get the technology more mature, prove it on the ground, work with the authorities to bring it to the aerospace and to the aviation world, and that will take something like 10 years probably,” he says.

And the Brainflight technology could have uses beyond flying. “If you have this technology available to you, you can enter your home and connect and disconnect devices with your mind, or if you are a disabled person, for example, you would be able to control your wheelchair by only using your mind. That’s our goal,” Mendes adds. Tekever engineers say their project will eventually allow pilots to free up their brains and bodies while flying a plane. In the future, pilotless planes could be more than just a flight of fancy.
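Sinogas’s description — collect the brain signals, process them with algorithms, transform them into controls — can be caricatured as a three-stage pipeline: extract a feature from the EEG samples, threshold it, emit a command. The band-power proxy, the thresholds and the command table below are all illustrative assumptions, not Tekever’s actual algorithm.

```python
def band_power(samples):
    """Mean squared amplitude: a crude proxy for EEG band power."""
    return sum(s * s for s in samples) / len(samples)

def to_command(samples, low=0.5, high=2.0):
    """Map signal power to a drone command via simple thresholds."""
    p = band_power(samples)
    if p < low:
        return "hold"
    elif p < high:
        return "turn_left"
    return "turn_right"

print(to_command([0.1, -0.2, 0.1]))   # weak signal  -> "hold"
print(to_command([2.0, -2.0, 2.0]))   # strong signal -> "turn_right"
```

A real system would replace the threshold with a classifier trained per pilot, but the shape of the pipeline — signal in, command out — is the same.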
Source: http://www.reuters.com/

Electric Car Race: The Rise Of Formula E

Downtown Miami has been converted into a race track. Cement blocks, fencing and grandstands are all in place for the first electric car race ever held on U.S. soil. Miami is the fifth of ten cities around the world to host a race during the inaugural year of the Formula E Championship, a fully electric race car series. Teams of mechanics are preparing their electric cars for Saturday’s race. Mark Schneider from Team Audi ABT says Formula E is in many ways similar to Formula 1. The cars are fast and the suspense on race day is high, but instead of the roar of gasoline-powered engines, these electric cars let out a high-pitched hum as they barrel down the track. Schneider says pit stops are a bit different as well.
“We do pit stops like other racing series, but when Formula 1 changes tires, we change cars. So we have two cars for each driver, and after roughly half an hour the driver gets into the pits, jumps out of the car, jumps into another car and goes out again,” says Mark Schneider. Each car is powered by a massive lithium-ion battery that makes up a third of the car’s overall weight. Formula E CEO Alejandro Agag says that with time those batteries will become more efficient and smaller, allowing them to power a single car for an entire race. He says the concept behind Formula E is to drive research and development in the electric automotive space to new heights.

“Formula 1, Indy Car, NASCAR are places where new technologies have been developed that then have been used on road cars, and we want Formula E to be the place that happens for the electric car,” he notes. Along with innovations on the track, Agag says he wants to attract young fans to Formula E by utilizing technology off the track as well. He says plans are in the works to develop an interactive virtual track that will allow people to compete on race day from their homes. He concludes: “So if you are a kid at home you can play with the virtual car, a shadow car, against the real racers in real time.”
Source: http://www.reuters.com/

A.I., Nanotechnology ‘threaten civilisation’

A report from the Global Challenges Foundation created the first list of global risks with impacts that for all practical purposes can be called infinite. It is also the first structured overview of key events related to such risks and has tried to provide initial rough quantifications for the probabilities of these impacts.
Besides the usual major risks such as extreme climate change, nuclear war, supervolcanoes or asteroid impacts, there are three emerging new global risks: Synthetic Biology, Nanotechnology and Artificial Intelligence (A.I.).
The real focus is not on the almost unimaginable impacts of the risks the report outlines. Its fundamental purpose is to encourage global collaboration and to use this new category of risk as a driver for innovation.

In the case of AI, the report suggests that future machines and software with “human-level intelligence” could create new, dangerous challenges for humanity – although they could also help to combat many of the other risks cited in the report. “Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations,” suggest authors Dennis Pamlin and Stuart Armstrong.
In the case of nanotechnology, the report notes that “atomically precise manufacturing” could have a range of benefits for humans. It could help to tackle challenges including depletion of natural resources, pollution and climate change. But it foresees risks too.
“It could create new products – such as smart or extremely resilient materials – and would allow many different groups or even individuals to manufacture a wide range of things,” suggests the report. “This could lead to the easy construction of large arsenals of conventional or more novel weapons made possible by atomically precise manufacturing.”

Source: http://globalchallenges.org/