Posts belonging to Category virtual reality



Use The Phone And See 3D Content Without 3D Glasses

RED, the company known for making some truly outstanding high-end cinema cameras, is set to release a smartphone in Q1 of 2018 called the HYDROGEN ONE. RED says that it is a standalone, unlocked and fully-featured smartphone “operating on Android OS that just happens to add a few additional features that shatter the mold of conventional thinking.” Yes, you read that right. This phone will blow your mind, or something – and it will even make phone calls.

In a press release riddled with buzzwords broken up by linking verbs, RED praises its yet-to-be-released smartphone with some serious adjectives. If we had been shown this press release anywhere other than on RED’s actual server, we would swear it was satire. Here is a smattering of phrases found in the release.

Incredible retina-riveting display
Nanotechnology
Holographic multi-view content
RED Hydrogen 4-View content
Assault your senses
Proprietary H3O algorithm
Multi-dimensional audio

There are two models of the phone, which run at different prices. The Aluminum model will cost $1,195, but anyone worth their salt is going to go for the $1,595 Titanium version. Gotta shed that extra weight, you know?

Those are snippets from just the first three sections, of which there are nine. I get hyping a product, but this reads like a catalog seen in the background of a science-fiction comedy, meant to sound ridiculous – especially in the context of a fictitious universe.

Except that this is real life.

After spending a few minutes removing all the glitter words from this release, it looks like it will be a phone using a display similar to what you get with the Nintendo 3DS, or, as The Verge points out, perhaps something better than what the flopped Amazon Fire Phone offered. Essentially, you should be able to use the phone and see 3D content without 3D glasses. Nintendo has already proven that it can work, though it can really tire out your eyes. As an owner of three different Nintendo 3DS consoles, I can say that I rarely use the 3D feature because of how it makes my eyes hurt. It’s an odd sensation. That is probably why Nintendo has released a new handheld that has the same power as the 3DS but drops the 3D feature altogether.

Anyway, back to the HYDROGEN ONE: RED says that it will work in tandem with their cameras as a user interface and monitor. It will also display what RED is calling “holographic content,” which isn’t well described by RED in this release. We can assume it is some sort of mixed-dimensional view that makes certain parts of a video or image stand out over the others.

Source: http://www.red.com/
AND
http://www.imaging-resource.com/

Legally Blind People Can See With A New Kind Of Glasses

A Canadian company based in Toronto has succeeded in building a kind of Google Glass that is able to give back full sight to legally blind people. The eSight is an augmented reality headset that houses a high-speed, high-definition camera that captures everything the user is looking at.



Algorithms enhance the video feed and display it on two OLED screens in front of the user’s eyes. Full-color video images are clearly seen by the eSight user with unprecedented visual clarity and virtually no lag. With eSight’s patented Bioptic Tilt capability, users can adjust the device to the precise position that, for them, presents the best view of the video while maximizing side peripheral vision. This ensures a user’s balance and prevents nausea – common problems with other immersive technologies. A blind individual can use both of their hands while they use eSight to see. It is lightweight, worn comfortably around the eyes and designed for various environments and for use throughout the day.
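
eSight has not published the details of its enhancement pipeline, so the following is only a minimal sketch of the general idea: capture frames from a camera, boost their local contrast, and show the result on the displays. OpenCV’s CLAHE stands in here for eSight’s proprietary algorithms.

```python
# Hypothetical sketch of a low-vision video-enhancement step. eSight's
# actual processing is proprietary; CLAHE is a generic stand-in.
import cv2

def enhance_frame(frame_bgr):
    """Boost local contrast so low-vision users can pick out more detail."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

cap = cv2.VideoCapture(0)  # the headset camera; a webcam here for illustration
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("enhanced", enhance_frame(frame))  # would go to the OLED screens
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```
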

eSight is a comprehensive customized medical device that can replace the many single-task assistive devices that are currently available but do not provide actual sight (e.g. white canes, magnifying devices, service animals, Braille machines, CCTV scanners, text-to-speech software). It allows a user to instantly shift focus from short-range vision (reading a book or text on a smartphone) to mid-range vision (seeing faces or watching TV) to long-range vision (looking down a hallway or outside a window). It is the only device for the legally blind that enables mobility without causing issues of imbalance or nausea (common with other immersive options). A legally blind individual can use eSight not just to see while sitting down but while being independently mobile (e.g. walking, exercising, commuting, travelling).

According to The Wall Street Journal, the company is taking advantage of recent improvements in technology from VR headsets and smartphones that have trickled down to improve the latest version of the eSight. So far, the company has sold roughly a thousand units, but at $10,000 apiece, they’re not cheap (and most insurance plans apparently don’t cover the product), although eSight’s chief executive Brian Mech notes to the WSJ that getting devices to users is “a battle we are starting to wage.”

Source: https://www.esighteyewear.com/

How Brain Waves Can Control VR Video Games

Virtual reality is still so new that the best way for us to interact within it is not yet clear. One startup wants you to use your head, literally: it’s tracking brain waves and using the result to control VR video games.

Boston-based startup Neurable is focused on deciphering brain activity to determine a person’s intention, particularly in virtual and augmented reality. The company uses dry electrodes to record brain activity via electroencephalography (EEG); then software analyzes the signal and determines the action that should occur.
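
Neurable has not published its models, but the decode-then-act recipe described above can be sketched roughly as follows: band-pass filter each recorded EEG epoch, flatten it into features, and train a classifier that maps brain activity to an intended action. The sampling rate, filter band, synthetic data and LDA classifier are all illustrative assumptions, not Neurable’s actual choices.

```python
# A minimal sketch of an EEG intent-decoding pipeline of the kind the
# article describes. All parameters here are generic, illustrative choices.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # assumed sampling rate in Hz

def preprocess(epoch):
    """Band-pass 1-30 Hz, then flatten channels x samples into a feature vector."""
    b, a = butter(4, [1 / (FS / 2), 30 / (FS / 2)], btype="band")
    return filtfilt(b, a, epoch, axis=-1).ravel()

# epochs: (n_trials, n_channels, n_samples); labels: intended action per trial
rng = np.random.default_rng(0)
epochs = rng.standard_normal((100, 8, FS))   # stand-in for recorded EEG
labels = rng.integers(0, 2, size=100)        # e.g. 0 = "rest", 1 = "select"

X = np.array([preprocess(e) for e in epochs])
clf = LinearDiscriminantAnalysis().fit(X, labels)

new_epoch = rng.standard_normal((8, FS))
action = clf.predict([preprocess(new_epoch)])[0]
print("decoded intent:", "select object" if action == 1 else "do nothing")
```
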


“You don’t really have to do anything,” says cofounder and CEO Ramses Alcaide, who developed the technology as a graduate student at the University of Michigan. “It’s a subconscious response, which is really cool.”

Neurable, which raised $2 million in venture funding late last year, is still in the early stages: its demo hardware looks like a bunch of electrodes attached to straps that span a user’s head, worn along with an HTC Vive virtual-reality headset. Unlike the headset, Neurable’s contraption is wireless—it sends data to a computer via Bluetooth. The startup expects to offer software tools for game development later this year, and it isn’t planning to build its own hardware; rather, Neurable hopes companies will be making headsets with sensors to support its technology in the next several years.

Source: https://www.technologyreview.com/
AND
http://neurable.com/

Virtual Images that Blend In And Interact With The Real-World

Avegant, a Silicon Valley startup that sells a pair of headphones equipped with a VR-like portable screen, is breaking into augmented reality. The company today announced that it’s developed a new type of headset technology powered by a so-called light field display.


The research prototype, which Avegant eventually plans on turning into a consumer product, is based on the company’s previous work with its Glyph projector. That device was a visor of sorts that floats a virtual movie screen in front of your eyes, and developing it gave Avegant insight into how to build an AR headset of its own.

Like Microsoft’s HoloLens and the supposed prototype from secretive AR startup Magic Leap, Avegant’s new headset creates virtual images that blend in and interact with the real-world environment. In a demo, the company’s wired prototype proved to be superior in key ways to the developer version of the HoloLens. Avegant attributes this not to the power of its tethered PC, but to the device’s light field display — a technology Magic Leap also claims to have developed, yet has never been shown off to the public.

The demo I experienced featured a tour of a virtual Solar System, an immersion within an ocean environment, and a conversation with a virtual life-sized human being standing in the same room. To be fair, Avegant was using a tethered and bulky headset that wasn’t all that comfortable, while the HoloLens developer version is a refined wireless device. Yet with that said, Avegant’s prototype managed to expand the field of view, so you’re looking through a window more like the size of a Moleskine notebook instead of a pack of playing cards. The images it produced also felt sharper, richer, and more realistic.

In the Solar System demo, I was able to observe a satellite orbiting an Earth no larger than a bocce ball and identify the Great Red Spot on Jupiter. Avegant constructed its demo to show off how these objects could exist at different focal lengths in a fixed environment – in this case a converted conference room at the company’s Belmont, California office. So I was able to stand behind the Sun and squint until the star went out of focus in one corner of my vision and a virtual Saturn and its rings became crystal clear in the distance.

Source: http://www.theverge.com/

Artificial Intelligence Writes Code By Looting

Artificial intelligence (AI) has taught itself to create its own encryption and produced its own universal ‘language’. Now it’s writing its own code using similar techniques to humans. A neural network called DeepCoder, developed by Microsoft and University of Cambridge computer scientists, has learnt how to write programs without prior knowledge of code. DeepCoder solved basic challenges of the kind set by programming competitions. This kind of approach could make it much easier for people to build simple programs without knowing how to write code.


“All of a sudden people could be so much more productive,” says Armando Solar-Lezama at the Massachusetts Institute of Technology, who was not involved in the work. “They could build systems that it [would be] impossible to build before.”

Ultimately, the approach could allow non-coders to simply describe an idea for a program and let the system build it, says Marc Brockschmidt, one of DeepCoder’s creators at Microsoft Research in Cambridge, UK.

DeepCoder uses a technique called program synthesis: creating new programs by piecing together lines of code taken from existing software – just like a programmer might. Given a list of inputs and outputs for each code fragment, DeepCoder learned which pieces of code were needed to achieve the desired result overall.
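
To make “piecing together lines of code” concrete, here is a toy program-synthesis sketch: enumerate pipelines built from a handful of code fragments and return the first pipeline that reproduces every input/output example. The fragment DSL is invented for illustration, and the brute-force search omits DeepCoder’s key ingredient, a neural network that predicts which fragments are worth trying.

```python
# Toy illustration of the program-synthesis idea behind DeepCoder:
# compose small code fragments and keep the composition that reproduces
# the given input/output examples.
from itertools import product

# A tiny, invented DSL of reusable fragments, each a list -> list function.
FRAGMENTS = {
    "sort":     sorted,
    "reverse":  lambda xs: xs[::-1],
    "drop_neg": lambda xs: [x for x in xs if x >= 0],
    "double":   lambda xs: [2 * x for x in xs],
}

def synthesize(examples, max_len=3):
    """Search fragment pipelines up to max_len that fit all examples."""
    for length in range(1, max_len + 1):
        for pipeline in product(FRAGMENTS, repeat=length):
            def run(xs, pipeline=pipeline):
                for name in pipeline:
                    xs = FRAGMENTS[name](xs)
                return xs
            if all(run(i) == o for i, o in examples):
                return pipeline
    return None

# "Given a list of inputs and outputs", find code that maps one to the other.
examples = [([3, -1, 2], [4, 6]), ([5, 0, -2, 1], [0, 2, 10])]
print(synthesize(examples))  # ('sort', 'drop_neg', 'double')
```
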

Source: https://www.newscientist.com/

No More Speakers For Television

Sony has created the world’s first television which can emit sound from the screen itself, removing the need for separate speakers. Unveiled at CES 2017 in Las Vegas, the A1 BRAVIA OLED series features a unique “Acoustic Surface”, which sees the sound being emitted from the whole of the screen.


Sony creates a 3D soundscape by pairing the objects you’re viewing on the screen with the sound they are making. For example, if you were watching a movie where a car drives across the screen, the sound will follow the movement of the car, adding a whole new level of immersion to your home entertainment experience. The screen transmits sound through two transducers located on the back of the screen. These generate vibrations in the area of the screen required to transmit the sound. Despite the BRAVIA screen working as both a screen and a speaker, it remains impressively streamlined. The display also comes with clean cable management to keep wires out of view.

The technology could eventually expand to include LED screens, but Sony doesn’t have any plans to do this just yet, as the multiple layers that make up an LED screen make it harder to retain the picture and audio quality.
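
Sony has not detailed the Acoustic Surface’s signal processing, but the “sound follows the car” behaviour can be approximated with ordinary constant-power panning between two output channels, as in this illustrative sketch.

```python
# Simplified sketch of pairing on-screen position with sound: pan a mono
# source between two transducers according to the object's horizontal
# position. Sony's actual processing is far more sophisticated.
import numpy as np

def pan_gains(x_norm):
    """x_norm: object position, 0.0 = left edge, 1.0 = right edge.
    Returns (left_gain, right_gain) with constant perceived power."""
    theta = x_norm * np.pi / 2
    return np.cos(theta), np.sin(theta)

fs = 48_000
t = np.linspace(0, 2.0, 2 * fs, endpoint=False)
engine = 0.3 * np.sin(2 * np.pi * 110 * t)  # stand-in for a car's engine sound

# Car drives left-to-right across the screen over two seconds.
x = t / t[-1]
left, right = pan_gains(x)
stereo = np.stack([left * engine, right * engine], axis=1)
print(stereo.shape)  # (96000, 2) frames routed to the two transducers
```
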

By truly fusing together the image and sound, Sony’s new BRAVIA TV gives a heightened TV viewing experience without you having to set up a complex system of surround sound speakers.

Source: http://www.mirror.co.uk/

Apple Testing Augmented Reality ‘Smart Glasses’

As part of its effort to expand further into wearable devices, Apple is working on a set of smart glasses, reports Bloomberg. Citing sources familiar with Apple’s plans, the site says the smart glasses would connect wirelessly to the iPhone, much like the Apple Watch, and would display “images and other information” to the wearer. Apple has contacted potential suppliers about its glasses project and has ordered “small quantities” of near-eye displays, suggesting the project is in the exploratory prototyping phase of development. If work on the glasses progresses, they could be released in 2018.


“AR can be really great,” said Tim Cook, CEO of Apple, in July. “We have been and continue to invest a lot in this. We’re high on AR in the long run.”

Apple’s glasses sound similar to Google Glass, the head-mounted display that Google first introduced in 2013. Google Glass used augmented reality and voice commands to allow users to do things like check the weather, make phone calls, and capture photographs. Apple’s product could be similar in functionality. The glasses may be Apple’s first hardware product targeted directly at AR, one of the people said. Cook has beefed up AR capabilities through acquisitions. In 2013, Apple bought PrimeSense, which developed the motion-sensing technology in Microsoft Corp.’s Kinect gaming system. Purchases of software startups in the field, Metaio Inc. and Flyby Media Inc., followed in 2015 and 2016.

Google Glass was highly criticized because of privacy concerns, and as a result, it never really caught on with consumers. Google eventually stopped developing Google Glass in January of 2015. It is not clear how Apple would overcome the privacy and safety issues that Google faced, nor whether the project will progress, but Apple CEO Tim Cook has expressed Apple’s deep interest in augmented reality multiple times over the last few months, suggesting something big is in the works.

Past rumors have also indicated Apple is exploring a number of virtual and augmented reality projects, including a full VR headset. Apple has a full team dedicated to AR and VR research and how the technologies can be incorporated into future Apple products. Cook recently said that he believes augmented reality would be more useful and interesting to people than virtual reality.

Source: http://www.macrumors.com/

Virtual Hug

Skin care giant Nivea has allowed a mother and son to have a ‘virtual hug’ from two different countries thanks to its ‘Second Skin Project’ involving nanotechnology. However, all is not as it seems.


A video was created with Leo Burnett Madrid to highlight the importance of the human touch; it initially claims that nanotechnology helped the company recreate the effect of touch from thousands of miles apart. A mother and son based in Uruguay and Spain were selected for the experiment, with Beiersdorf-owned Nivea using a ground-breaking fabric that is said to simulate human skin. According to the video, the material is woven with a number of sensors and can retain electrical impulses. As a result, when one person touches it, the other can feel the touch from thousands of miles away.

However, at the end of the video the project is revealed not to be real; it is instead a shrewd marketing campaign for the importance of the human touch and, in effect, the company’s skin cream. Watch the video, and get your tissues at the ready, to see it unfold.

Source: https://globalcosmeticsnews.com/

Google Glass Used For Artery Surgery

Doctors in Poland used a virtual reality system combining a custom mobile application and Google Glass to clear a blocked coronary artery, one of the first uses of the technology to assist with surgery. The imaging system was used with a patient who had chronic total occlusion, a complete blockage of the artery, which doctors said is difficult to clear using standard catheter-based percutaneous coronary intervention, or PCI.

The system provides three-dimensional reconstructions of the artery and includes a hands-free voice recognition system allowing users to zoom and change the images. The head-mounted display system allows doctors to capture images and video while also interacting with the environment around them. In patients with chronic total occlusion, the standard procedure is not always successful, at least partially because of the difficulty of visualizing the blockage with conventional coronary tomography angiography, or CTA, imaging.
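
The report does not spell out the command set, so the sketch below only illustrates the interaction pattern: a recognized voice command is looked up and applied as a transform on the displayed CTA image. The command names and zoom behaviour are invented for illustration.

```python
# Hypothetical sketch of hands-free image control: spoken commands mapped
# to transforms on the displayed CTA image. The actual Google Glass
# application is custom; these commands are invented for illustration.
import numpy as np

def zoom(image, factor):
    """Crop the centre of the image so it appears magnified by `factor`."""
    h, w = image.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    return image[top:top + ch, left:left + cw]

COMMANDS = {
    "zoom in":  lambda img: zoom(img, 2.0),
    "zoom out": lambda img: img,            # would restore the full view
    "rotate":   lambda img: np.rot90(img),
}

cta_slice = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a CTA image
view = COMMANDS["zoom in"](cta_slice)
print(view.shape)  # (240, 320, 3): centre crop shown at full screen size
```
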


“This case demonstrates the novel application of wearable devices for display of CTA data sets in the catheterization laboratory that can be used for better planning and guidance of interventional procedures, and provides proof of concept that wearable devices can improve operator comfort and procedure efficiency in interventional cardiology,” Dr. Maksymilian Opolski, of the Department of Interventional Cardiology and Angiology at the Institute of Cardiology in Warsaw (Poland), said in a press release.

Source: http://www.onlinecjc.ca/
AND
http://www.upi.com/

How To Interact With Virtual Reality

An interactive swarm of flying 3D pixels (voxels) developed at Queen’s University’s Human Media Lab (Canada) is set to revolutionize the way people interact with virtual reality. The system, called BitDrones, allows users to explore virtual 3D information by interacting with physical self-levitating building blocks.

Queen’s professor Roel Vertegaal and his students have unveiled the BitDrones system at the ACM Symposium on User Interface Software and Technology in Charlotte, North Carolina. BitDrones is the first step towards creating interactive self-levitating programmable matter – materials capable of changing their 3D shape in a programmable fashion – using swarms of nano quadcopters. The work highlights many possible applications for the new technology, including real-reality 3D modeling, gaming, molecular modeling, medical imaging, robotics and online information visualization.


“BitDrones brings flying programmable matter, such as that featured in the futuristic Disney movie Big Hero 6, closer to reality,” says Dr. Vertegaal. “It is a first step towards allowing people to interact with virtual 3D objects as real physical objects.”

Dr. Vertegaal and his team at the Human Media Lab created three types of BitDrones, each representing self-levitating displays of distinct resolutions. “PixelDrones” are equipped with one LED and a small dot matrix display. “ShapeDrones” are augmented with a lightweight mesh and a 3D-printed geometric frame, and serve as building blocks for complex 3D models. “DisplayDrones” are fitted with a curved flexible high-resolution touchscreen, a forward-facing video camera and an Android smartphone board. All three BitDrone types are equipped with reflective markers, allowing them to be individually tracked and positioned in real time via motion capture technology. The system also tracks the user’s hand motion and touch, allowing users to manipulate the voxels in space.
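
The Human Media Lab has not published its controller, but the core loop of a drone-voxel display can be sketched as: read each drone’s motion-capture position and nudge it toward the voxel it represents. The proportional gain and the positions below are illustrative assumptions.

```python
# Hedged sketch of the control idea behind a drone-voxel display: each
# tracked drone is steered toward the position of its assigned voxel.
# BitDrones' real controller (motion capture + quadcopter dynamics) is
# far more involved; this is a simple proportional step in that spirit.
import numpy as np

def step(drone_pos, voxel_pos, gain=0.2):
    """Move every drone a fraction of the way toward its assigned voxel."""
    return drone_pos + gain * (voxel_pos - drone_pos)

rng = np.random.default_rng(1)
drones = rng.uniform(0, 2, size=(8, 3))  # current (x, y, z) of 8 drones
voxels = np.array([[x, y, 1.0] for x in (0.5, 1.5)
                   for y in (0.5, 1.0, 1.5, 2.0)])  # target 3D model

for _ in range(50):                       # converge toward the 3D model
    drones = step(drones, voxels)
print(np.max(np.abs(drones - voxels)))    # ~0 once the shape is formed
```
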

“We call this a Real Reality interface rather than a Virtual Reality interface. This is what distinguishes it from technologies such as Microsoft HoloLens and the Oculus Rift: you can actually touch these pixels, and see them without a headset,” says Dr. Vertegaal.

Source: http://www.hml.queensu.ca/

3D Hologram From Pop-Up Floating Display

Moving holograms like those used in 3D science fiction movies such as Avatar and Elysium have to date only been seen in their full glory by viewers wearing special glasses.
Now researchers at Swinburne University of Technology (Australia) have shown the capacity of a technique using graphene oxide and complex laser physics to create a pop-up floating display without the need for 3D glasses. Graphene is a two-dimensional carbon material with extraordinary electronic and optical properties that offers a new material platform for next-generation nanophotonic devices.

Through a photonic process involving no heat or change in temperature, the researchers were able to create nanoscale pixels of reduced graphene oxide with a controlled refractive index – the measure of the bending of light as it passes through a medium. This is crucial for the subsequent recording of the individual pixels for holograms and hence naked-eye 3D viewing.
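
The refractive index enters through Snell’s law, n1·sin(θ1) = n2·sin(θ2): the larger the index contrast a pixel creates, the more it bends light. A quick illustration (the index values are generic, not Swinburne’s measured graphene-oxide values):

```python
# Snell's law ties the article's definition of refractive index to the
# bending of light. The index values below are generic illustrations.
import math

def refraction_angle(n1, n2, incidence_deg):
    """Angle (degrees) of the transmitted ray when light crosses n1 -> n2."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

# Raising the local refractive index bends the same incoming ray more.
for n2 in (1.5, 2.0, 2.5):
    print(n2, round(refraction_angle(1.0, n2, 45.0), 2))
```
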
“If you can change the refractive index you can create lots of optical effects,” Director of Swinburne’s Centre for Micro-Photonics, Professor Min Gu, said.

“Our technique can be leveraged to achieve compact and versatile optical components for controlling light. We can create the wide-angle display necessary for mobile phones and tablets.”

Source: http://www.nature.com/

Brain Waves Command Drone Flight

Researchers demonstrate technology that allows unmanned aircraft to be controlled from the ground using only signals from the pilot’s brain.
An impressive example of mind control – a drone in the air, flown using the power of human thought. Portuguese tech company Tekever uses a special EEG cap to turn the pilot’s brainwaves into commands for the drone. CEO Pedro Sinogas explains: “The brain approach that Tekever is using is based on collecting the signals from the brain, then a set of algorithms process all the brain signals and transform them into actual controls to multiple devices,” says Sinogas.
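
Tekever’s algorithms are not public, but the last stage Sinogas describes, turning a stream of decoded brain-signal classes into actual controls, typically needs smoothing so the drone does not jitter between commands. A minimal, assumed sketch using a majority vote over recent decoder outputs:

```python
# Hedged sketch of turning decoded brain-signal classes into stable drone
# commands. The majority-vote smoothing is a generic debouncing trick,
# not Tekever's algorithm; the command names are invented.
from collections import Counter, deque

COMMANDS = {"left": (-1.0, 0.0), "right": (1.0, 0.0), "hold": (0.0, 0.0)}

window = deque(maxlen=10)  # last ten per-epoch classifier outputs

def update(decoded_class):
    """Only steer once a class dominates the recent window (reduces jitter)."""
    window.append(decoded_class)
    winner, count = Counter(window).most_common(1)[0]
    return COMMANDS[winner] if count >= 7 else COMMANDS["hold"]

# Noisy decoder output: the drone should bank left only once "left" dominates.
for cls in ["hold", "left", "hold", "left", "left",
            "left", "left", "left", "left", "left"]:
    roll, pitch = update(cls)
print("roll command:", roll)  # -1.0: bank left
```
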
While the pilot controls the drone’s flight path, Tekever’s researchers determine the mission before take-off. Tekever’s Chief Operations Officer Ricardo Mendes is keen to apply the technology to commercial aviation, although this could take a while. “What we want to do is to get the technology more mature, prove it on the ground, work with the authorities to bring it to the aerospace and to the aviation world, and that will take something like 10 years probably,” he says.

And the Brainflight technology could have uses beyond flying. “If you have this technology available to you, you can enter your home and connect and disconnect devices with your mind. Or if you are a disabled person, for example, you would be able to control your wheelchair by only using your mind. That’s our goal,” Mendes adds.

Tekever engineers say their project will eventually allow pilots to free up their brains and bodies while flying a plane. In the future, pilotless planes could be more than just a flight of fancy.

Source: http://www.reuters.com/