
Nanotechnology: A Treasure Trove With 1000 New 2D Materials

2D materials, which consist of just a few layers of atoms, may well be the future of nanotechnology. They offer potential new applications and could be used in smaller, higher-performance and more energy-efficient devices. 2D materials were first discovered almost 15 years ago, but only a few dozen of them have been synthesized so far. Now, thanks to an approach developed by researchers from EPFL’s Theory and Simulation of Materials Laboratory (THEOS) and from NCCR MARVEL for Computational Design and Discovery of Novel Materials, many more promising 2D materials can be identified. Their work was recently published in the journal Nature Nanotechnology and even earned a mention on the cover.

The first 2D material to be isolated was graphene, in 2004, earning the researchers who discovered it a Nobel Prize in 2010. This marked the start of a whole new era in electronics, as graphene is light, transparent and resilient and, above all, a good conductor of electricity. It paved the way for new applications in numerous fields such as photovoltaics and optoelectronics.

A team from EPFL (Ecole Polytechnique Fédérale de Lausanne) and NCCR Marvel in Switzerland has identified more than 1,000 materials with a particularly interesting 2D structure. Their research, which made the cover page of Nature Nanotechnology, paves the way for groundbreaking technological applications.

“To find other materials with similar properties, we focused on the feasibility of exfoliation,” explains Nicolas Mounet, a researcher in the THEOS lab and lead author of the study. “But instead of placing adhesive strips on graphite to see if the layers peeled off, like the Nobel Prize winners did, we used a digital method.”
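
The digital screening described above can be illustrated with a toy example. Note that the candidate materials and the binding-energy cutoff below are illustrative assumptions for the sketch, not values or code from the actual THEOS study:

```python
# Toy screening for exfoliable layered crystals: a weakly bound
# layered material should peel apart into 2D sheets.
# The candidate list and the 30 meV/A^2 cutoff are hypothetical.
candidates = [
    {"name": "graphite", "binding_meV_per_A2": 20},
    {"name": "MoS2",     "binding_meV_per_A2": 25},
    {"name": "diamond",  "binding_meV_per_A2": 250},  # 3D network, strongly bound
    {"name": "h-BN",     "binding_meV_per_A2": 22},
]

EASY_EXFOLIATION_CUTOFF = 30  # meV per square angstrom, assumed threshold

def easily_exfoliable(material):
    """Flag crystals whose interlayer binding is weak enough to peel."""
    return material["binding_meV_per_A2"] < EASY_EXFOLIATION_CUTOFF

hits = [m["name"] for m in candidates if easily_exfoliable(m)]
print(hits)  # graphite, MoS2 and h-BN pass; diamond does not
```

The real study computed interlayer binding energies from first principles across a database of known crystals, but the filtering idea is the same: replace the adhesive tape with a numerical criterion.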


Not just speed: 7 incredible things you can do with 5G

You can’t walk around Mobile World Congress without 5G slapping you in the face. If there’s a phenomenon that’s dominated this week’s trade show besides the return of a 17-year-old phone, it’s the reality that the next generation of cellular technology has arrived. Well, at least it’s real in the confines of the Fira Gran Via convention center in Barcelona.

Above the Qualcomm booth flashed the slogan: “5G: From the company that brought you 3G and 4G.” If you took a few more steps, you could hear an Intel representative shout about the benefits of 5G. If you hopped over to Ericsson, you’d find a “5G avenue” with multiple exhibits demonstrating the benefits of the technology. Samsung kicked off its press conference not with its new tablets, but with a chat about 5G.

Remote surgery via a special glove, virtual reality and 5G


The hype around 5G has been brewing for more than a year, but we’re finally starting to see the early research and development bear fruit. The technology promises to change our lives by connecting everything around us to a network that is 100 times faster than our cellular connection and 10 times faster than our speediest home broadband service.
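
A quick back-of-the-envelope calculation puts those multipliers in perspective. The 4G and broadband baseline figures below are assumptions for illustration, not numbers from the article:

```python
# Rough throughput comparison implied by the article's multipliers.
# Baseline speeds are assumed round numbers.
typical_4g_mbps = 100        # assumed typical 4G LTE throughput
fast_broadband_mbps = 1000   # assumed speedy home broadband (1 Gbps)

# "100 times faster than our cellular connection"
speed_vs_cellular_mbps = typical_4g_mbps * 100
# "10 times faster than our speediest home broadband service"
speed_vs_broadband_mbps = fast_broadband_mbps * 10

print(speed_vs_cellular_mbps)   # 10000 Mbps, i.e. 10 Gbps
print(speed_vs_broadband_mbps)  # 10000 Mbps, i.e. 10 Gbps
```

Under these assumed baselines, both multipliers land in the same ballpark of roughly 10 Gbps, which is the order of magnitude commonly quoted for peak 5G rates.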

But it’s not just about speed for speed’s sake. While the move from 3G to 4G LTE was about faster connections, the evolution to 5G is so much more. The combination of speed, responsiveness and reach could unlock the full capabilities of other hot trends in technology, offering a boost to self-driving cars, drones, virtual reality and the internet of things. “If you just think of speed, you don’t see the magic of all it can do,” said Jefferson Wang, who follows the mobile industry for IBB Consulting.

The bad news: 5G is still a while away for consumers, and the industry is still fighting over the nitty-gritty details of the technology itself. The good news: There’s a chance it’s coming sooner than we thought. It’s clear why the wireless carriers are eager to move to 5G. With the core phone business slowing down, companies are eager for new tech to spark excitement and connect more devices. “We are absolutely convinced that 5G is the next revolution,” Tim Baxter, president of Samsung’s US unit, said during a press conference.


World’s First Virtual Reality Surgery

Doctors at the Avicenne hospital (in Bobigny, in the Paris area) have successfully completed the world’s first ever augmented-reality surgical operation using 3D models and a virtual reality (VR) headset.

Doctor Thomas Grégory, head of orthopedic and traumatic surgery at the university teaching hospital, was able to “see through the skin of his patient” before the shoulder operation, through the use of 3D imaging technology and models created from the 80-year-old patient ahead of time.

During the key part of the operation, which lasted 45 minutes, the doctors in France were joined via video link by four surgeons from South Korea, the USA, and the UK, who provided assistance over Skype.

Dr Grégory also performed the procedure while wearing a mixed-reality headset, Microsoft’s HoloLens, which he could control with his movements and his voice, allowing him to see 3D images projected onto the anatomy of the patient during the operation, as well as enabling him to consult advisory videos and supporting medical documents. He had begun practising on the device two months previously, he said.

It was a global first for this kind of operation, designed to help the surgeons understand – to a much higher degree than normal – what they would find during the surgery, allowing them to prepare better and improve the overall quality of care. The headset also allowed the surgeons to operate with a “previously unprecedented level of precision” in a way that was less invasive, more effective, and less prone to post-operative infection.

“The holy grail for a doctor is to [find a way] to see what we cannot see with our own eyes: the patient’s skeleton in every detail. That is what [this allows] us to do,” explained Grégory.


Memristors Retain Data 10 Years Without Power

The internet of things (IoT) is coming, that much we know. But it won’t arrive until we have components and chips that can handle the explosion of data it brings. By 2020, there will be 50 billion industrial internet sensors in place all around us. A single autonomous device – a smart watch, a cleaning robot, or a driverless car – can produce gigabytes of data each day, while an Airbus may have over 10,000 sensors in a single wing.

Two hurdles need to be overcome. First, the transistors in today’s computer chips must be miniaturized to just a few nanometres, but at that size they no longer work reliably. Second, analysing and storing unprecedented amounts of data will require equally huge amounts of energy. Sayani Majumdar, Academy Fellow at Aalto University (Finland), along with her colleagues, is designing technology to tackle both issues.

Majumdar and her colleagues have designed and fabricated the basic building blocks of future components for what are called “neuromorphic” computers, inspired by the human brain. It is a field of research in which the largest ICT companies in the world, as well as the EU, are investing heavily. Still, no one has yet come up with a nano-scale hardware architecture that could be scaled up to industrial manufacture and use.

The probe-station device (the full instrument, left, and a closer view of the device connection, right) which measures the electrical responses of the basic components for computers mimicking the human brain. The tunnel junctions are on a thin film on the substrate plate.

The technology and design of neuromorphic computing is advancing more rapidly than its rival revolution, quantum computing. There is already wide speculation, both in academia and in company R&D, about ways to inscribe heavy computing capabilities into the hardware of smartphones, tablets and laptops. “The key is to achieve the extreme energy-efficiency of a biological brain and mimic the way neural networks process information through electric impulses,” explains Majumdar.

In their recent article in Advanced Functional Materials, Majumdar and her team show how they have fabricated a new breed of “ferroelectric tunnel junctions”, that is, few-nanometre-thick ferroelectric thin films sandwiched between two electrodes. They have abilities beyond existing technologies and bode well for energy-efficient and stable neuromorphic computing.

The junctions operate at low voltages of less than five volts and with a variety of electrode materials – including the silicon used in the chips of most of our electronics. They can also retain data for more than 10 years without power and be manufactured under normal conditions.

Until now, tunnel junctions have mostly been made of metal oxides, which require temperatures of 700 degrees Celsius and high vacuums to manufacture. Conventional ferroelectric materials also contain lead, which makes them – and all our computers – a serious environmental hazard.

“Our junctions are made out of organic hydrocarbon materials and would reduce the amount of toxic heavy-metal waste in electronics. We can also make thousands of junctions a day at room temperature without them suffering from the water or oxygen in the air,” explains Majumdar.

What makes ferroelectric thin-film components great for neuromorphic computers is their ability to switch not only between binary states – 0 and 1 – but between a large number of intermediate states as well. This allows them to ‘memorise’ information not unlike the brain: to store it for a long time with minute amounts of energy and to retain the information they have once received – even after being switched off and on again.
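
A minimal sketch of what multi-state switching means for storage, compared with a binary cell. The eight-level quantisation here is an illustrative assumption, not a figure from the study:

```python
# Sketch: a binary cell stores only 0 or 1; a multi-level
# ferroelectric cell quantises its state into several stable
# intermediate levels. LEVELS = 8 is an assumed number.
LEVELS = 8

def write_state(target):
    """Snap an analog value in [0, 1] to the nearest stable level."""
    level = round(target * (LEVELS - 1))
    return level / (LEVELS - 1)

# Writing 0.6 lands on the nearest of the 8 stable levels (4/7).
stored = write_state(0.6)

# Non-volatile: the polarisation state persists with no power applied,
# so the value read back after a power cycle is unchanged.
after_power_cycle = stored
print(stored, after_power_cycle)
```

The point of the intermediate levels is that one cell can hold more than one bit of information and can represent graded, synapse-like weights, which is what makes the junctions suitable for brain-inspired computation.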

“We are no longer talking of transistors, but of ‘memristors’. They are ideal for computation similar to that in biological brains. Take for example the Mars 2020 Rover, soon to chart the composition of another planet. For the Rover to work and process data on its own, using only a single solar panel as an energy source, its unsupervised algorithms will need an artificial brain in the hardware.”

“What we are striving for now is to integrate millions of our tunnel-junction memristors into a network on a one-square-centimetre area. We can expect to pack so many into such a small space because we have now achieved a record-high difference in current between the on and off states of the junctions, which provides functional stability. The memristors could then perform complex tasks like image and pattern recognition and make decisions autonomously,” says Majumdar.


New Quantum Computer Uses 10,000 Times Less Power

Japan has unveiled its first quantum computer prototype, amid a global race to build ever-more powerful machines with faster speeds and greater brute force, which are key to realising the full potential of artificial intelligence. Japan’s machine can theoretically make complex calculations 100 times faster than even a conventional supercomputer, while using just 1 kilowatt of power – about what a large microwave oven requires – for every 10,000 kilowatts consumed by a supercomputer. Launched recently, the creators – the National Institute of Informatics, telecom giant NTT and the University of Tokyo – said they are building a cloud system to house their “quantum neural network” technology.

In a bid to spur further innovation, this will be made available for free to the public and fellow researchers for trials. The creators, who aim to commercialise their system by March 2020, touted its vast potential to help ease massive urban traffic congestion, connect tens of thousands of smartphones to different base stations for optimal use in a crowded area, and even develop innovative new drugs by finding the right combination of chemical compounds.

Quantum computers differ from conventional supercomputers in that they rely on theoretical particle physics and run on subatomic particles, such as electrons, at sub-zero temperatures. Most quantum computers, for this reason, destabilise easily and are error-prone, which limits their functions.

“We will seek to further improve the prototype so that the quantum computer can tackle, at high speed, problems with near-infinite combinations that are difficult for even modern computers to solve,” said Stanford University Professor Emeritus Yoshihisa Yamamoto, who is heading the project.
Japan’s prototype taps into a 1km-long optical fibre cable packed with photons, and exploits the properties of light to make super-quick calculations. Its researchers said they deemed the prototype ready for public use, after tests showed that it was capable of operating stably around the clock at room temperature.


Ultra-fast Data Processing At Nanoscale

Advancement in nanoelectronics, which is the use of nanotechnology in electronic components, has been fueled by the ever-increasing need to shrink the size of electronic devices like nanocomputers in a bid to produce smaller, faster and smarter gadgets such as computers, memory storage devices, displays and medical diagnostic tools.

While most advanced electronic devices are powered by photonics – which involves the use of photons to transmit information – photonic elements are usually large in size, and this greatly limits their use in many advanced nanoelectronics systems. Plasmons, which are waves of electrons that move along the surface of a metal after it is struck by photons, hold great promise for disruptive technologies in nanoelectronics. They are comparable to photons in terms of speed (they, too, travel at the speed of light), and they are much smaller. This unique property makes plasmons ideal for integration with nanoelectronics. However, earlier attempts to harness plasmons as information carriers had little success.

Addressing this technological gap, a research team from the National University of Singapore (NUS) has recently invented a novel “converter” that can harness the speed and small size of plasmons for high frequency data processing and transmission in nanoelectronics.

This innovative transducer can directly convert electrical signals into plasmonic signals, and vice versa, in a single step. By bridging plasmonics and nanoscale electronics, we can potentially make chips run faster and reduce power losses. Our plasmonic-electronic transducer is about 10,000 times smaller than optical elements. We believe it can be readily integrated into existing technologies and can potentially be used in a wide range of applications in the future,” explained Associate Professor Christian Nijhuis from the Department of Chemistry at the NUS Faculty of Science, who is the leader of the research team behind this breakthrough.

This novel discovery was first reported in the journal Nature Photonics.


Graphene, Not Glass, Is The Key To Better Optics

A lens just a billionth of a metre thick could transform phone cameras. Researchers at Swinburne University in Melbourne, Australia, have created ultra-thin lenses that cap an optical fibre, and can produce images with the quality and sharpness of much larger glass lenses.

“Compared with current lenses, our graphene lens only needs one film to achieve the same resolution,” says Professor Baohua Jia, a research leader at Swinburne’s Centre for Micro-Photonics. “In the future, mobile phones could be much thinner, without having to sacrifice the quality of their cameras. Our lens also allows infrared light to pass through, which glass lenses don’t.”

Producing graphene can be costly and challenging, so Baohua and her colleagues used a laser to pattern layers of graphene oxide (graphene combined with oxygen). By then removing the oxygen, they produced low-cost, patterned films of graphene, a thousand times thinner than a human hair. “By patterning the graphene oxide film in this way, its optical and electrical properties can be altered, which allowed us to place them in different devices,” she says.

Warm objects give off infrared light, so mobile phones with graphene lenses could be used to scan for hotspots in the human body and help in the early identification of diseases like breast cancer. By attaching the lens to a fibre optic tip, endoscopes — instruments that are currently several millimetres wide—could be made a million times smaller. The team is also investigating graphene’s amazing properties for their potential use as supercapacitors, capable of storing very large amounts of energy, which could replace conventional batteries.

Baohua’s work on graphene lenses was published in Nature Communications.


Optical Computer

Researchers at the University of Sydney (Australia) have dramatically slowed digital information carried as light waves by transferring the data into sound waves in an integrated circuit, or microchip. Transferring information from the optical to the acoustic domain and back again inside a chip is critical for the development of photonic integrated circuits: microchips that use light instead of electrons to manage data.

These chips are being developed for use in telecommunications, optical fibre networks and cloud computing data centers where traditional electronic devices are susceptible to electromagnetic interference, produce too much heat or use too much energy.

“The information in our chip in acoustic form travels at a velocity five orders of magnitude slower than in the optical domain,” said Dr Birgit Stiller, research fellow at the University of Sydney and supervisor of the project.

“It is like the difference between thunder and lightning,” she said.

This delay allows the data to be briefly stored and managed inside the chip for processing, retrieval and further transmission as light waves. Light is an excellent carrier of information and is useful for carrying data over long distances between continents through fibre-optic cables.

But this speed advantage can become a nuisance when information is being processed in computers and telecommunication systems.
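
The "thunder and lightning" analogy can be made concrete with a back-of-the-envelope transit-time calculation. The chip length and wave speeds below are assumed round numbers, not figures from the Sydney paper:

```python
# How long does a signal linger inside a chip as light vs as sound?
# All figures are illustrative assumptions.
chip_length_m = 0.01         # 1 cm on-chip path (assumed)
light_speed_m_s = 2.0e8      # light in a waveguide, roughly c / 1.5
sound_speed_m_s = 2.0e3      # acoustic wave in the chip material

optical_transit_s = chip_length_m / light_speed_m_s   # ~50 picoseconds
acoustic_transit_s = chip_length_m / sound_speed_m_s  # ~5 microseconds

slowdown = acoustic_transit_s / optical_transit_s
print(slowdown)  # roughly 1e5: five orders of magnitude, as quoted
```

That five-orders-of-magnitude dwell time is what turns the acoustic copy of the data into a usable on-chip buffer before it is converted back to light.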


China, Global Leader In NanoScience

Mobile phones, computers, cosmetics, bicycles – nanoscience is hiding in so many everyday items, wielding a huge influence on our lives at the microscale. Scientists and engineers from around the world exchanged new findings and perspectives on nanotechnology at the 7th International Conference on Nanoscience and Technology (ChinaNANO 2017), held in Beijing last week. China has become a nanotechnology powerhouse, according to a report released at the conference. China’s applied nanoscience research and the industrialization of nanotechnology have been developing steadily, with the number of nano-related patent applications ranking among the top in the world.

According to Bai Chunli, president of the Chinese Academy of Sciences (CAS), China faces new opportunities for nanoscience research and development as it builds the National Center for Nanoscience and Technology (NCNST) and globally influential national science centers.

“We will strengthen the strategic landscape and top-down design for developing nanoscience, which will contribute greatly to the country’s economy and society,” said Bai.

Nanoscience can be defined as the study of the interaction, composition, properties and manufacturing methods of materials at a nanometer scale. At such tiny scales, the physical, chemical and biological properties of materials are different from those at larger scales — often profoundly so.

For example, alloys that are weak or brittle become strong and ductile; compounds that are chemically inert become powerful catalysts. It is estimated that there are more than 1,600 nanotechnology-based consumer products on the market, including lightweight but sturdy tennis rackets, bicycles, suitcases, automobile parts and rechargeable batteries. Nanomaterials are used in hairdryers or straighteners to make them lighter and more durable. The secret of how sunscreens protect skin from sunburn lies in the nanometer-scale titanium dioxide or zinc oxide they contain.

In 2016, the world’s first one-nanometer transistor was created. It was made from carbon nanotubes and molybdenum disulphide, rather than silicon.
Carbon nanotubes or silver nanowires enable touch screens on computers and televisions to be flexible, said Zhu Xing, chief scientist (CNST). Nanotechnology is also having an increasing impact on healthcare, with progress in drug delivery, biomaterials, imaging, diagnostics, active implants and other therapeutic applications. The biggest current concern is the health threat posed by nanoparticles, which can easily enter the body via the airways or skin. Construction workers exposed to nanopollutants face increased health risks.

The report was co-produced by Springer Nature, National Center for Nanoscience and Technology (NCNST) and the National Science Library of the Chinese Academy of Sciences (CAS).


AR Smart Glasses, Next Frontier Of Facebook

Facebook is hard at work on the technical breakthroughs needed to ship futuristic smart glasses that can let you see virtual objects in the real world. A patent application for a “waveguide display with two-dimensional scanner” was published on Thursday by three members from the advanced research division of Facebook’s virtual-reality subsidiary, Oculus.

The smart glasses being developed by Oculus will use a waveguide display to project light onto the wearer’s eyes instead of a more traditional display. The smart glasses would be able to display images and video, and work with connected speakers or headphones to play audio when worn. The display “may augment views of a physical, real-world environment with computer-generated elements” and “may be included in an eye-wear comprising a frame and a display assembly that presents media to a user’s eyes,” according to the filing.

By using waveguide technology, Facebook is taking a similar approach to Microsoft‘s HoloLens AR headset and the mysterious glasses being developed by the Google-backed startup Magic Leap.

One of the authors of the patent is, in fact, lead Oculus optical scientist Pasi Saarikko, who joined Facebook in 2015 after leading the optical design of the HoloLens at Microsoft.

While work is clearly being done on the underlying technology for Facebook’s smart glasses now, don’t expect to see the device anytime soon. Michael Abrash, the chief scientist of Oculus, recently said that AR glasses won’t start replacing smartphones until 2022 at the earliest.

Facebook CEO Mark Zuckerberg has called virtual and augmented reality the next major computing platform capable of replacing smartphones and traditional PCs. Facebook purchased Oculus for $2 billion in 2014 and plans to spend billions more on developing the technology.


Use The Phone And See 3D Content Without 3D Glasses

RED, the company known for making some truly outstanding high-end cinema cameras, is set to release a smartphone in Q1 of 2018 called the HYDROGEN ONE. RED says that it is a standalone, unlocked and fully-featured smartphone “operating on Android OS that just happens to add a few additional features that shatter the mold of conventional thinking.” Yes, you read that right. This phone will blow your mind, or something – and it will even make phone calls.

In a press release riddled with buzzwords broken up by linking verbs, RED praises its yet-to-be-released smartphone with some serious adjectives. If we had been shown this press release anywhere other than on RED’s actual server, we would swear it was satire. Here is a smattering of phrases found in the release.

Incredible retina-riveting display
Holographic multi-view content
RED Hydrogen 4-View content
Assault your senses
Proprietary H3O algorithm
Multi-dimensional audio

There are two models of the phone, which run at different prices. The Aluminum model will cost $1,195, but anyone worth their salt is going to go for the $1,595 Titanium version. Gotta shed that extra weight, you know?

Those are snippets from just the first three sections, of which there are nine. I get hyping a product, but this reads like a catalog seen in the background of a science-fiction comedy, meant to sound ridiculous – especially in the context of a fictitious universe.

Except that this is real life.

After spending a few minutes removing all the glitter words from this release, it looks like it will be a phone using a display similar to what you get with the Nintendo 3DS, or what The Verge points out as perhaps better than the flopped Amazon Fire Phone. Essentially, you should be able to use the phone and see 3D content without 3D glasses. Nintendo has already proven that can work; however, it can really tire out your eyes. As an owner of three different Nintendo 3DS consoles, I can say that I rarely use the 3D feature because of how it makes my eyes hurt. It’s an odd sensation. It is probably why Nintendo has released a new handheld that has the same power as the 3DS but drops the 3D feature altogether.

Anyway, back to the HYDROGEN ONE: RED says that it will work in tandem with its cameras as a user interface and monitor. It will also display what RED is calling “holographic content,” which isn’t well described in this release. We can assume it is some sort of mixed-dimensional view that makes certain parts of a video or image stand out over the others.


Building Brain-Inspired AI Supercomputing System

IBM (NYSE: IBM) and the U.S. Air Force Research Laboratory (AFRL) today announced they are collaborating on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The system’s advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume the energy equivalent of a dim light bulb – a mere 10 watts.
IBM researchers believe the brain-inspired, neural-network design of TrueNorth will be far more efficient for pattern recognition and integrated sensory processing than systems powered by conventional chips. AFRL is investigating applications of the system in embedded, mobile, autonomous settings where, today, size, weight and power (SWaP) are key limiting factors. The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this “right-brain” perception capability of the system with the “left-brain” symbol processing capabilities of conventional computer systems. The large scale of the system will enable both “data parallelism”, where multiple data sources can be run in parallel against the same neural network, and “model parallelism”, where independent neural networks form an ensemble that can be run in parallel on the same data.
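
The two parallelism modes described above can be sketched in a few lines. The "networks" here are stand-in functions for illustration, not the TrueNorth API:

```python
# Sketch of data parallelism vs model parallelism.
# The network functions are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor

def network(x):
    """One trained network applied to any input."""
    return x * 2

# An ensemble of independent networks for model parallelism.
ensemble = [lambda x: x * 2, lambda x: x + 1, lambda x: x ** 2]

with ThreadPoolExecutor() as pool:
    # Data parallelism: the SAME network runs against many
    # data sources at once.
    data_results = list(pool.map(network, [1, 2, 3]))
    # Model parallelism: INDEPENDENT networks run against the
    # same data at once.
    model_results = list(pool.map(lambda f: f(3), ensemble))

print(data_results)   # [2, 4, 6]
print(model_results)  # [6, 4, 9]
```

The distinction matters for hardware like a 64-chip array: data parallelism spreads sensor streams across copies of one model, while model parallelism spreads an ensemble of models across the chips against a shared input.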


“AFRL was the earliest adopter of TrueNorth for converting data into decisions,” said Daniel S. Goddard, director, information directorate, U.S. Air Force Research Lab. “The new neurosynaptic system will be used to enable new computing capabilities important to AFRL’s mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage.”

“The evolution of the IBM TrueNorth Neurosynaptic System is a solid proof point in our quest to lead the industry in AI hardware innovation,” said Dharmendra S. Modha, IBM Fellow, chief scientist, brain-inspired computing, IBM Research – Almaden. “Over the last six years, IBM has expanded the number of neurons per system from 256 to more than 64 million – an 800 percent annual increase over six years.”