Posts belonging to Category nanocomputer



Ultra-fast Data Processing At Nanoscale

Advancement in nanoelectronics, which is the use of nanotechnology in electronic components, has been fueled by the ever-increasing need to shrink the size of electronic devices like nanocomputers in a bid to produce smaller, faster and smarter gadgets such as computers, memory storage devices, displays and medical diagnostic tools.

While most advanced electronic devices are powered by photonics – the use of photons to transmit information – photonic elements are usually large in size, and this greatly limits their use in many advanced nanoelectronic systems. Plasmons, which are waves of electrons that move along the surface of a metal after it is struck by photons, hold great promise for disruptive technologies in nanoelectronics. They are comparable to photons in terms of speed (they also travel at the speed of light), and they are much smaller. This unique combination of properties makes them ideal for integration with nanoelectronics. However, earlier attempts to harness plasmons as information carriers had little success.

Addressing this technological gap, a research team from the National University of Singapore (NUS) has recently invented a novel “converter” that can harness the speed and small size of plasmons for high frequency data processing and transmission in nanoelectronics.

“This innovative transducer can directly convert electrical signals into plasmonic signals, and vice versa, in a single step. By bridging plasmonics and nanoscale electronics, we can potentially make chips run faster and reduce power losses. Our plasmonic-electronic transducer is about 10,000 times smaller than optical elements. We believe it can be readily integrated into existing technologies and can potentially be used in a wide range of applications in the future,” explained Associate Professor Christian Nijhuis from the Department of Chemistry at the NUS Faculty of Science, who is the leader of the research team behind this breakthrough.

This novel discovery was first reported in the journal Nature Photonics.

Source: http://news.nus.edu.sg/

The Ultra Smart Community Of The Future

Japan’s largest electronics show, CEATEC, is showcasing its version of our future: a connected world with intelligent robots, and cars that know when the driver is falling asleep. This is Omron‘s “Onboard Driving Monitoring Sensor,” checking that its driver isn’t distracted.


“We are developing sensors that help the car judge what state the driver is in with regard to driving. For example, whether the driver has his eyes open and set on things he should be looking at, whether the driver is distracted or looking at smartphones, and these types of situations,” explains Masaki Suwa, Omron Corp. Chief Technologist.

After 18 years of consumer electronics, CEATEC is changing focus to the Internet of Things and what it calls ‘the ultra-smart community of the future‘ – a future where machines take on more important roles. Machines like Panasonic‘s CaloRieco: pop in your plate and it knows exactly what you are about to consume.

“By placing freshly cooked food inside the machine, you can measure total calories and the three main nutrients: protein, fat and carbohydrate. By using this machine, you can easily manage your diet,” says Panasonic staff engineer Ryota Sato.

Even playtime will see machines more involved – like Forpheus, the ping-pong-playing robot – here taking on an Olympic bronze medalist – and learning with every stroke.
Rio Olympics table tennis bronze medalist Jun Mizutani reports: “It wasn’t any different from playing with a human being. The robot kept improving and getting better as we played, and to be honest, I wanted to play with it when it had reached its maximum level, to see how good it is.”

Optical Computer

Researchers at the University of Sydney (Australia) have dramatically slowed digital information carried as light waves by transferring the data into sound waves in an integrated circuit, or microchip. Transferring information from the optical to acoustic domain and back again inside a chip is critical for the development of photonic integrated circuits: microchips that use light instead of electrons to manage data.

These chips are being developed for use in telecommunications, optical fibre networks and cloud computing data centers where traditional electronic devices are susceptible to electromagnetic interference, produce too much heat or use too much energy.

“The information in our chip in acoustic form travels at a velocity five orders of magnitude slower than in the optical domain,” said Dr Birgit Stiller, research fellow at the University of Sydney and supervisor of the project.

“It is like the difference between thunder and lightning,” she said.

This delay allows the data to be briefly stored and managed inside the chip for processing, retrieval and further transmission as light waves. Light is an excellent carrier of information and is useful for carrying data over long distances between continents through fibre-optic cables.

But this speed advantage can become a nuisance when information is being processed in computers and telecommunication systems.
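Dr Stiller's "five orders of magnitude" figure can be sanity-checked with a rough calculation. The velocities below are assumed typical values (light in an on-chip waveguide, sound in a glassy chip material), not numbers from the article:

```python
# Rough sanity check of the "five orders of magnitude" claim.
# Assumed values: light in an on-chip waveguide travels at a large
# fraction of c; acoustic waves in glassy materials move at a few km/s.
c = 299_792_458.0            # speed of light in vacuum, m/s
v_light = c / 2.4            # assumed optical group velocity in the chip
v_sound = 2_600.0            # assumed acoustic velocity, m/s

ratio = v_light / v_sound
print(f"light is ~{ratio:,.0f}x faster")   # roughly 48,000x – about five orders of magnitude

# Delay gained by holding a signal acoustically over a 1 cm on-chip path:
length = 0.01                               # metres
print(f"{length / v_sound * 1e6:.2f} us acoustically")   # ~3.85 microseconds
print(f"{length / v_light * 1e12:.0f} ps optically")     # ~80 picoseconds
```

A few microseconds is short by human standards but is ample time for a chip to buffer, process and retransmit the data.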

Source: https://sydney.edu.au/

Very Fast Magnetic Data Storage

For almost seventy years now, magnetic tapes and hard disks have been used for data storage in computers. In spite of many new technologies that have been developed in the meantime, the controlled magnetization of a data storage medium remains the first choice for archiving information because of its longevity and low price. As a means of realizing random access memories (RAMs), however, which are used as the main memory for processing data in computers, magnetic storage technologies were long considered inadequate. That is mainly due to their low writing speed and relatively high energy consumption.

In 1956, IBM introduced the first magnetic hard disc, the RAMAC. ETH researchers have now tested a novel magnetic writing technology that could soon be used in the main memories of modern computers.

Pietro Gambardella, Professor at the Department of Materials of the Eidgenössische Technische Hochschule Zürich (ETHZ, Switzerland), together with colleagues at the Physics Department and at the Paul Scherrer Institute (PSI), has now shown that, using a novel technique, magnetic storage can be achieved very fast and without wasting energy.

In 2011, Gambardella and his colleagues demonstrated a technique that could do just that: an electric current passing through a specially coated semiconductor film inverted the magnetization in a tiny metal dot. This is made possible by a physical effect called spin-orbit torque, in which a current flowing in a conductor leads to an accumulation of electrons with opposite magnetic moments (spins) at the edges of the conductor. The electron spins, in turn, create a magnetic field that causes the atoms in a nearby magnetic material to change the orientation of their magnetic moments. In a new study, the scientists have now investigated how this process works in detail and how fast it is.

The results were recently published in the scientific journal Nature Nanotechnology.

Source: https://www.ethz.ch/

How To Store Data At The Molecular Level

From smartphones to nanocomputers or supercomputers, the growing need for smaller and more energy efficient devices has made higher density data storage one of the most important technological quests. Now scientists at the University of Manchester have proved that storing data with a class of molecules known as single-molecule magnets is more feasible than previously thought. The research, led by Dr David Mills and Dr Nicholas Chilton, from the School of Chemistry, is being published in Nature. It shows that magnetic hysteresis, a memory effect that is a prerequisite of any data storage, is possible in individual molecules at -213 °C. This is tantalisingly close to the temperature of liquid nitrogen (-196 °C).

The result means that data storage with single molecules could become a reality because the data servers could be cooled using relatively cheap liquid nitrogen at -196°C instead of far more expensive liquid helium (-269 °C). The research provides proof-of-concept that such technologies could be achievable in the near future.

The potential for molecular data storage is huge. To put it into a consumer context, molecular technologies could store more than 200 terabits of data per square inch – that’s 25,000 GB of information stored in something approximately the size of a 50p coin, compared to Apple’s latest iPhone 7 with a maximum storage of 256 GB.
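The quoted figures are internally consistent, as a quick unit conversion shows (treating the 50p-coin comparison as roughly one square inch):

```python
# Sanity check of the storage figures quoted above.
terabits_per_sq_inch = 200
gigabits = terabits_per_sq_inch * 1_000      # 200 Tb = 200,000 Gb
gigabytes = gigabits / 8                     # bits -> bytes
print(gigabytes)                             # 25000.0 GB per square inch

iphone7_max_gb = 256
print(gigabytes / iphone7_max_gb)            # 97.65625 – roughly 98x an iPhone 7
```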

Single-molecule magnets display a magnetic memory effect that is a requirement of any data storage and molecules containing lanthanide atoms have exhibited this phenomenon at the highest temperatures to date. Lanthanides are rare earth metals used in all forms of everyday electronic devices such as smartphones, tablets and laptops. The team achieved their results using the lanthanide element dysprosium.

“This is very exciting as magnetic hysteresis in single molecules implies the ability for binary data storage. Using single molecules for data storage could theoretically give 100 times higher data density than current technologies. Here we are approaching the temperature of liquid nitrogen, which would mean data storage in single molecules becomes much more viable from an economic point of view,” explains Dr Chilton.

The practical applications of molecular-level data storage could lead to much smaller hard drives that require less energy, meaning data centres across the globe could become a lot more energy efficient.

Source: http://www.manchester.ac.uk/

AR Smart Glasses, Next Frontier Of FaceBook

Facebook is hard at work on the technical breakthroughs needed to ship futuristic smart glasses that can let you see virtual objects in the real world. A patent application for a “waveguide display with two-dimensional scanner” was published on Thursday by three members from the advanced research division of Facebook’s virtual-reality subsidiary, Oculus.

The smart glasses being developed by Oculus will use a waveguide display to project light onto the wearer’s eyes instead of a more traditional display. The smart glasses would be able to display images and video, and work with connected speakers or headphones to play audio when worn. The display “may augment views of a physical, real-world environment with computer-generated elements” and “may be included in an eye-wear comprising a frame and a display assembly that presents media to a user’s eyes,” according to the filing.

By using waveguide technology, Facebook is taking a similar approach to Microsoft‘s HoloLens AR headset and the mysterious glasses being developed by the Google-backed startup Magic Leap.

One of the authors of the patent is, in fact, lead Oculus optical scientist Pasi Saarikko, who joined Facebook in 2015 after leading the optical design of the HoloLens at Microsoft.

While work is clearly being done on the underlying technology for Facebook‘s smart glasses now, don’t expect to see the device anytime soon. Michael Abrash, the chief scientist of Oculus, recently said that AR glasses won’t start replacing smartphones until 2022 at the earliest.

Facebook CEO Mark Zuckerberg has called virtual and augmented reality the next major computing platform capable of replacing smartphones and traditional PCs. Facebook purchased Oculus for $2 billion in 2014 and plans to spend billions more on developing the technology.

Source: http://pdfaiw.uspto.gov/
AND
http://www.businessinsider.com

No More Batteries For Cellphones

University of Washington (UW) researchers have invented a cellphone that requires no batteries — a major leap forward in moving beyond chargers, cords and dying phones. Instead, the phone harvests the few microwatts of power it requires from either ambient radio signals or light.

The team also made Skype calls using its battery-free phone, demonstrating that the prototype made of commercial, off-the-shelf components can receive and transmit speech and communicate with a base station.


“We’ve built what we believe is the first functioning cellphone that consumes almost zero power,” said co-author Shyam Gollakota, an associate professor in the Paul G. Allen School of Computer Science & Engineering at the UW. “To achieve the really, really low power consumption that you need to run a phone by harvesting energy from the environment, we had to fundamentally rethink how these devices are designed.”

The team of UW computer scientists and electrical engineers eliminated a power-hungry step in most modern cellular transmissions: converting analog signals that convey sound into digital data that a phone can understand. This process consumes so much energy that it’s been impossible to design a phone that can rely on ambient power sources. Instead, the battery-free cellphone takes advantage of tiny vibrations in a phone’s microphone or speaker that occur when a person is talking into a phone or listening to a call.

An antenna connected to those components converts that motion into changes in a standard analog radio signal emitted by a cellular base station. This process essentially encodes speech patterns in reflected radio signals in a way that uses almost no power. To transmit speech, the phone uses vibrations from the device’s microphone to encode speech patterns in the reflected signals. To receive speech, it converts encoded radio signals into sound vibrations that are picked up by the phone’s speaker. In the prototype device, the user presses a button to switch between these two “transmitting” and “listening” modes.
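The key idea – reflecting an existing carrier rather than generating one – can be illustrated with a toy amplitude-modulation model. This is a simplification for intuition, not the UW team's actual circuit: the phone's antenna varies how strongly it reflects the base station's signal in step with the audio waveform.

```python
import math

# Toy model of analog backscatter: the device spends no power generating
# a carrier; it only modulates its reflectivity in step with the audio.
def backscatter(audio, carrier_hz=915e6, sample_rate=10e9, depth=0.5):
    """Return reflected-signal samples for a sequence of audio samples."""
    out = []
    for n, a in enumerate(audio):
        t = n / sample_rate
        carrier = math.cos(2 * math.pi * carrier_hz * t)  # base-station signal
        reflectivity = 1.0 + depth * a                     # antenna impedance shift
        out.append(reflectivity * carrier)                 # amplitude-modulated echo
    return out

# A 1 kHz test "voice" tone, coarsely sampled for illustration:
audio = [math.sin(2 * math.pi * 1e3 * n / 10e9) for n in range(100)]
reflected = backscatter(audio)
print(len(reflected))  # 100 modulated samples
```

The carrier frequency and modulation depth here are illustrative assumptions; the point is that the energy-expensive part (the carrier) comes from the base station, not the phone.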

The new technology is detailed in a paper published July 1 in the Proceedings of the Association for Computing Machinery on Interactive, Mobile, Wearable and Ubiquitous Technologies.

Source: http://www.washington.edu/
AND
http://www.reuters.com/

How To Generate Any Cell Within The Patient’s Own Body

Researchers at The Ohio State University Wexner Medical Center and Ohio State’s College of Engineering have developed a new technology, Tissue Nanotransfection (TNT), that can generate any cell type of interest for treatment within the patient’s own body. This technology may be used to repair injured tissue or restore function of aging tissue, including organs, blood vessels and nerve cells.

By using our novel nanochip technology (nanocomputer), injured or compromised organs can be replaced. We have shown that skin is a fertile land where we can grow the elements of any organ that is declining,” said Dr. Chandan Sen, director of Ohio State’s Center for Regenerative Medicine & Cell Based Therapies, who co-led the study with L. James Lee, professor of chemical and biomolecular engineering with Ohio State’s College of Engineering in collaboration with Ohio State’s Nanoscale Science and Engineering Center.

Researchers studied mice and pigs in these experiments. In the study, researchers were able to reprogram skin cells to become vascular cells in badly injured legs that lacked blood flow. Within one week, active blood vessels appeared in the injured leg, and by the second week, the leg was saved. In lab tests, this technology was also shown to reprogram skin cells in the live body into nerve cells that were injected into brain-injured mice to help them recover from stroke.

This is difficult to imagine, but it is achievable, successfully working about 98 percent of the time. With this technology, we can convert skin cells into elements of any organ with just one touch. This process only takes less than a second and is non-invasive, and then you’re off. The chip does not stay with you, and the reprogramming of the cell starts. Our technology keeps the cells in the body under immune surveillance, so immune suppression is not necessary,” said Sen, who also is executive director of Ohio State’s Comprehensive Wound Center.

Results of the regenerative medicine study have been published in the journal Nature Nanotechnology.

Source: https://news.osu.edu/

Use The Phone And See 3D Content Without 3D Glasses

RED, the company known for making some truly outstanding high-end cinema cameras, is set to release a smartphone in Q1 of 2018 called the HYDROGEN ONE. RED says that it is a standalone, unlocked and fully-featured smartphone “operating on Android OS that just happens to add a few additional features that shatter the mold of conventional thinking.” Yes, you read that right. This phone will blow your mind, or something – and it will even make phone calls.

In a press release riddled with buzzwords broken up by linking verbs, RED praises its yet-to-be-released smartphone with some serious adjectives. If we had been shown this press release anywhere other than on RED‘s actual server, we would swear it was satire. Here is a smattering of phrases found in the release:

  • Incredible retina-riveting display
  • Nanotechnology
  • Holographic multi-view content
  • RED Hydrogen 4-View content
  • Assault your senses
  • Proprietary H3O algorithm
  • Multi-dimensional audio

There are two models of the phone, which run at different prices. The Aluminum model will cost $1,195, but anyone worth their salt is going to go for the $1,595 Titanium version. Gotta shed that extra weight, you know?

Those are snippets from just the first three sections, of which there are nine. I get hyping a product, but this reads like a catalog seen in the background of a science-fiction comedy, meant to sound ridiculous – especially in the context of a fictitious universe.

Except that this is real life.

After spending a few minutes removing all the glitter words from this release, it looks like it will be a phone using a display similar to what you get with the Nintendo 3DS, or what The Verge points out as perhaps better than the flopped Amazon Fire Phone. Essentially, you should be able to use the phone and see 3D content without 3D glasses. Nintendo has already proven that can work, though it can really tire out your eyes. As an owner of three different Nintendo 3DS consoles, I can say that I rarely use the 3D feature because of how it makes my eyes hurt. It’s an odd sensation. It is probably why Nintendo has released a new handheld that has the same power as the 3DS but drops the 3D feature altogether.

Anyway, back to the HYDROGEN ONE. RED says that it will work in tandem with their cameras as a user interface and monitor. It will also display what RED is calling “holographic content,” which isn’t well described by RED in this release. We can assume it is some sort of mixed-dimensional view that makes certain parts of a video or image stand out over the others.

Source: http://www.red.com/
AND
http://www.imaging-resource.com/

Nanoweapons Against North Korea

Unless you’re working in the field, you have probably never heard about U.S. nanoweapons. This is intentional. The United States, Russia and China are each spending billions of dollars per year developing nanoweapons, but all development is secret. Even after Pravda.ru’s June 6, 2016 headline, “US nano weapon killed Venezuela’s Hugo Chavez, scientists say,” the U.S. offered no response.

Earlier this year, May 5, 2017, North Korea claimed the CIA plotted to kill Kim Jong Un using a radioactive nano poison, similar to the nanoweapon Venezuelan scientists claim the U.S. used to assassinate former Venezuelan President Hugo Chavez. All major media covered North Korea’s claim. These accusations are substantial, but are they true? Let’s address this question.

Unfortunately, until earlier this year, nanoweapons garnered little media attention. However, in March 2017 that changed with the publication of the book Nanoweapons: A Growing Threat to Humanity (2017, Potomac Books), which inspired two articles. On March 9, 2017, American Security Today published “Nanoweapons: A Growing Threat to Humanity – Louis A. Del Monte,” and on March 17, 2017, CNBC published “Mini-nukes and mosquito-like robot weapons being primed for future warfare.” Suddenly, the genie was out of the bottle. The CNBC article became the most popular on the network’s website for two days following its publication and garnered 6.5K shares. Still, compared to other classes of military weapons, nanoweapons remain obscure. In fact, most people have never even heard the term. If you find this surprising, recall that most people had never heard of stealth aircraft until their highly publicized use during the first Gulf War in 1991. Today, almost everyone who reads the news knows about stealth aircraft. This may become the case with nanoweapons, but for now, they remain obscure to the public.

Given their relative obscurity, we’ll start by defining nanoweapons. A nanoweapon is any military weapon that exploits the power of nanotechnology. This, of course, raises another question: what is nanotechnology? According to the United States National Nanotechnology Initiative’s website, nano.gov, “Nanotechnology is science, engineering, and technology conducted at the nanoscale, which is about 1 to 100 nanometers.” To put this in perspective, the diameter of a typical human hair is about 100,000 nanometers. This means nanotechnology operates at a scale invisible to the naked eye, or even under an optical microscope.
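The scale comparison can be made concrete with two lines of arithmetic:

```python
# The nanoscale, in concrete numbers.
hair_diameter_nm = 100_000     # typical human hair, in nanometres
nanoscale_max_nm = 100         # upper bound of the nanoscale

print(hair_diameter_nm / nanoscale_max_nm)   # 1000.0 – the largest
# nanoscale object is a thousandth of a hair's width.

# Visible light spans roughly 400-700 nm, which is why even an optical
# microscope cannot resolve nanoscale structures.
visible_light_min_nm = 400
print(nanoscale_max_nm < visible_light_min_nm)   # True
```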

Source: http://www.huffingtonpost.com/

Artificial Intelligence Checks Identity Using Any Smartphone

Checking your identity using simulated human cognition: aiThenticate says its system goes far beyond conventional facial-recognition systems or the biometrics of passwords, fingerprints and eye scans.


“We need to have a much greater level of certainty about who somebody actually is. In order to answer that question, we appealed to deep science – deep learning – to develop an AI method, an artificial intelligence method; in other words, to replicate, or to mimic, or to simulate the way that we as humans intuitively and instinctively go about recognizing somebody. That is very different from the conventional, traditional way of face recognition or fingerprint recognition, and for that reason it really represents the next generation of authentication technologies or methods,” says aiThenticate CEO André Immelman.

aiDX uses 16 distinct tests to recognise someone – including eye prints – using a standard, off-the-shelf smartphone to access encrypted data stored in the cloud. It can operate in active mode, asking the user to take a simple selfie, or discreetly in the background.

André Immelman explains: “It has applications in the security sense, it has applications in a customer-services sense – you know, the kind of thing where the bank calls you up and says: this is your bank calling; please tell us where you live, what your mother’s name is, what your dog’s favourite hobby is, whatever the case may be. It takes that kind of guesswork out of the equation completely, and it answers the ‘who’ question to much greater levels of confidence or certainty than what traditional or conventional biometrics have been able to do in the past.”

Billions of dollars a year are lost to identity theft globally. aiThenticate hopes its new system can help stop at least some of that illegal trade.

Source: http://www.eyethenticate.za.com/
AND
http://www.reuters.com/

Building Brain-Inspired AI Supercomputing System

IBM (NYSE: IBM) and the U.S. Air Force Research Laboratory (AFRL) today announced they are collaborating on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The system’s advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume the energy equivalent of a dim light bulb – a mere 10 watts to power.
IBM researchers believe the brain-inspired, neural network design of TrueNorth will be far more efficient for pattern recognition and integrated sensory processing than systems powered by conventional chips. AFRL is investigating applications of the system in embedded, mobile, autonomous settings where, today, size, weight and power (SWaP) are key limiting factors. The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this “right-brain” perception capability of the system with the “left-brain” symbol-processing capabilities of conventional computer systems. The large scale of the system will enable both “data parallelism,” where multiple data sources can be run in parallel against the same neural network, and “model parallelism,” where independent neural networks form an ensemble that can be run in parallel on the same data.
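The quoted system totals imply the following per-chip figures; this is simple arithmetic on the article's numbers, not additional IBM specifications:

```python
# Per-chip figures implied by the 64-chip TrueNorth system totals.
chips = 64
total_neurons = 64_000_000
total_synapses = 16_000_000_000
power_watts = 10

print(total_neurons // chips)          # 1000000 neurons per chip
print(total_synapses // chips)         # 250000000 synapses per chip
print(total_synapses / total_neurons)  # 250.0 synapses per neuron
print(power_watts / chips)             # 0.15625 W per chip – well under a quarter watt
```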


AFRL was the earliest adopter of TrueNorth for converting data into decisions,” said Daniel S. Goddard, director, information directorate, U.S. Air Force Research Lab. “The new neurosynaptic system will be used to enable new computing capabilities important to AFRL’s mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage.”

“The evolution of the IBM TrueNorth Neurosynaptic System is a solid proof point in our quest to lead the industry in AI hardware innovation,” said Dharmendra S. Modha, IBM Fellow, chief scientist, brain-inspired computing, IBM Research – Almaden. “Over the last six years, IBM has expanded the number of neurons per system from 256 to more than 64 million – an 800 percent annual increase over six years.”

Source: https://www-03.ibm.com/