Artificial Intelligence Chip Analyzes Molecular-level Data In Real Time

Nano Global, an Austin-based molecular data company, today announced that it is developing a chip using intellectual property (IP) from Arm, the world’s leading semiconductor IP company. The technology will help redefine how global health challenges – from superbugs to infectious diseases to cancer – are conquered.

The pioneering system-on-chip (SoC) will yield highly secure molecular data that can be used in the recognition and analysis of health threats caused by pathogens and other living organisms. Combined with the company’s scientific technology platform, the chip leverages advances in nanotechnology, optics, artificial intelligence (AI), blockchain authentication, and edge computing to access and analyze molecular-level data in real time.

“In partnership with Arm, we’re tackling the vast frontier of molecular data to unlock the unlimited potential of this universe,” said Steve Papermaster, Chairman and CEO of Nano Global. “The data our technology can acquire and process will enable us to create a safer and healthier world.”

“We believe the technology Nano Global is delivering will be an important step forward in the collective pursuit of care that improves lives through the application of technology,” explained Rene Haas, executive vice president and president of IPG, Arm. “By collaborating with Nano Global, Arm is taking an active role in developing and deploying the technologies that will move us one step closer to solving complex health challenges.”

Additionally, Nano Global will be partnering with several leading institutions, including Baylor College of Medicine and National University of Singapore, on broad research initiatives in clinical, laboratory, and population health environments to accelerate data collection, analysis, and product development.
Initial development of the chip is in progress, with first delivery expected by 2020. The company is already adding new partners to its platform.

Source: https://nanoglobal.com/
AND
www.prnewswire.com

Optical Computer

Researchers at the University of Sydney (Australia) have dramatically slowed digital information carried as light waves by transferring the data into sound waves in an integrated circuit, or microchip. Transferring information from the optical to the acoustic domain and back again inside a chip is critical for the development of photonic integrated circuits: microchips that use light instead of electrons to manage data.

These chips are being developed for use in telecommunications, optical fibre networks and cloud computing data centers where traditional electronic devices are susceptible to electromagnetic interference, produce too much heat or use too much energy.

“The information in our chip in acoustic form travels at a velocity five orders of magnitude slower than in the optical domain,” said Dr Birgit Stiller, research fellow at the University of Sydney and supervisor of the project.

“It is like the difference between thunder and lightning,” she said.

This delay allows the data to be briefly stored and managed inside the chip for processing, retrieval and further transmission as light waves. Light is an excellent carrier of information and is useful for carrying data over long distances between continents through fibre-optic cables.

But this speed advantage can become a nuisance when information is being processed in computers and telecommunication systems.
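
To put Dr Stiller’s five-orders-of-magnitude figure in perspective, here is a rough back-of-envelope sketch in Python; the waveguide length and both velocities are illustrative assumptions, not measurements from the Sydney chip:

```python
# Back-of-envelope comparison of optical vs. acoustic transit times on a
# chip. All three values below are illustrative assumptions, not device data.

SPEED_OF_LIGHT_IN_GUIDE = 2.0e8   # m/s, roughly c/1.5 in a glassy waveguide
SPEED_OF_SOUND_ON_CHIP = 2.0e3    # m/s, a typical acoustic velocity in a solid
WAVEGUIDE_LENGTH = 0.1            # m, assumed on-chip path length

optical_delay = WAVEGUIDE_LENGTH / SPEED_OF_LIGHT_IN_GUIDE
acoustic_delay = WAVEGUIDE_LENGTH / SPEED_OF_SOUND_ON_CHIP

print(f"Optical transit:  {optical_delay * 1e9:.2f} ns")    # -> 0.50 ns
print(f"Acoustic transit: {acoustic_delay * 1e6:.1f} us")   # -> 50.0 us
print(f"Slow-down factor: {acoustic_delay / optical_delay:.0e}")  # -> 1e+05
```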

Source: https://sydney.edu.au/

How To Store Data At The Molecular Level

From smartphones to nanocomputers or supercomputers, the growing need for smaller and more energy efficient devices has made higher density data storage one of the most important technological quests. Now scientists at the University of Manchester have proved that storing data with a class of molecules known as single-molecule magnets is more feasible than previously thought. The research, led by Dr David Mills and Dr Nicholas Chilton, from the School of Chemistry, is being published in Nature. It shows that magnetic hysteresis, a memory effect that is a prerequisite of any data storage, is possible in individual molecules at -213 °C. This is tantalisingly close to the temperature of liquid nitrogen (-196 °C).

The result means that data storage with single molecules could become a reality because the data servers could be cooled using relatively cheap liquid nitrogen at -196°C instead of far more expensive liquid helium (-269 °C). The research provides proof-of-concept that such technologies could be achievable in the near future.

The potential for molecular data storage is huge. To put it into a consumer context, molecular technologies could store more than 200 terabits of data per square inch – that’s 25,000 GB of information stored in something approximately the size of a 50p coin, compared to Apple’s latest iPhone 7 with a maximum storage of 256 GB.
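
The arithmetic behind that comparison is easy to check; a quick sketch in decimal units:

```python
# Verifying the article's figure: 200 terabits per square inch expressed
# in gigabytes (decimal units: 1 terabit = 1e12 bits, 1 GB = 1e9 bytes).

terabits_per_sq_inch = 200
bytes_per_sq_inch = terabits_per_sq_inch * 1e12 / 8
gigabytes_per_sq_inch = bytes_per_sq_inch / 1e9

print(f"{terabits_per_sq_inch} Tbit/in^2 = {gigabytes_per_sq_inch:,.0f} GB/in^2")
# -> 200 Tbit/in^2 = 25,000 GB/in^2, the figure quoted above
```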

Single-molecule magnets display a magnetic memory effect that is a requirement of any data storage and molecules containing lanthanide atoms have exhibited this phenomenon at the highest temperatures to date. Lanthanides are rare earth metals used in all forms of everyday electronic devices such as smartphones, tablets and laptops. The team achieved their results using the lanthanide element dysprosium.

‘This is very exciting as magnetic hysteresis in single molecules implies the ability for binary data storage. Using single molecules for data storage could theoretically give 100 times higher data density than current technologies. Here we are approaching the temperature of liquid nitrogen, which would mean data storage in single molecules becomes much more viable from an economic point of view,’ explains Dr Chilton.
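
As a conceptual illustration of the idea in Dr Chilton’s quote, here is a toy Python model of a single-molecule magnet as a one-bit store: a written spin state is retained only below the hysteresis temperature. Only the -213 °C figure comes from the study; the class itself is invented for the sketch:

```python
# Toy model: a single-molecule magnet retains a written bit (its
# magnetisation direction) only below its hysteresis temperature.
# The temperature is from the article; the rest is illustrative.

class MolecularBit:
    HYSTERESIS_TEMP_C = -213          # reported hysteresis temperature

    def __init__(self):
        self.state = None             # None = no stable stored value

    def write(self, bit: int, temp_c: float) -> None:
        self.state = bit if temp_c <= self.HYSTERESIS_TEMP_C else None

    def read(self, temp_c: float):
        # Above the hysteresis temperature the memory effect is lost.
        return self.state if temp_c <= self.HYSTERESIS_TEMP_C else None

bit = MolecularBit()
bit.write(1, temp_c=-215)             # colder than -213 C: state retained
print(bit.read(temp_c=-215))          # -> 1
print(bit.read(temp_c=-196))          # liquid nitrogen: still too warm -> None
```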

The practical applications of molecular-level data storage could lead to much smaller hard drives that require less energy, meaning data centres across the globe could become a lot more energy efficient.

Source: http://www.manchester.ac.uk/

AR Smart Glasses, Next Frontier Of Facebook

Facebook is hard at work on the technical breakthroughs needed to ship futuristic smart glasses that can let you see virtual objects in the real world. A patent application for a “waveguide display with two-dimensional scanner” was published on Thursday by three members from the advanced research division of Facebook’s virtual-reality subsidiary, Oculus.

The smart glasses being developed by Oculus will use a waveguide display to project light onto the wearer’s eyes instead of a more traditional display. The smart glasses would be able to display images and video, and work with connected speakers or headphones to play audio when worn. The display “may augment views of a physical, real-world environment with computer-generated elements” and “may be included in an eye-wear comprising a frame and a display assembly that presents media to a user’s eyes,” according to the filing.

By using waveguide technology, Facebook is taking a similar approach to Microsoft’s HoloLens AR headset and the mysterious glasses being developed by the Google-backed startup Magic Leap.

One of the authors of the patent is, in fact, lead Oculus optical scientist Pasi Saarikko, who joined Facebook in 2015 after leading the optical design of the HoloLens at Microsoft.

While work is clearly being done on the underlying technology for Facebook’s smart glasses now, don’t expect to see the device anytime soon. Michael Abrash, the chief scientist of Oculus, recently said that AR glasses won’t start replacing smartphones until 2022 at the earliest.

Facebook CEO Mark Zuckerberg has called virtual and augmented reality the next major computing platform capable of replacing smartphones and traditional PCs. Facebook purchased Oculus for $2 billion in 2014 and plans to spend billions more on developing the technology.

Source: http://pdfaiw.uspto.gov/
AND
http://www.businessinsider.com

No More Batteries For Cellphones

University of Washington (UW) researchers have invented a cellphone that requires no batteries — a major leap forward in moving beyond chargers, cords and dying phones. Instead, the phone harvests the few microwatts of power it requires from either ambient radio signals or light.

The team also made Skype calls using its battery-free phone, demonstrating that the prototype made of commercial, off-the-shelf components can receive and transmit speech and communicate with a base station.


“We’ve built what we believe is the first functioning cellphone that consumes almost zero power,” said co-author Shyam Gollakota, an associate professor in the Paul G. Allen School of Computer Science & Engineering at the UW. “To achieve the really, really low power consumption that you need to run a phone by harvesting energy from the environment, we had to fundamentally rethink how these devices are designed.”

The team of UW computer scientists and electrical engineers eliminated a power-hungry step in most modern cellular transmissions: converting analog signals that convey sound into digital data that a phone can understand. This process consumes so much energy that it’s been impossible to design a phone that can rely on ambient power sources. Instead, the battery-free cellphone takes advantage of tiny vibrations in a phone’s microphone or speaker that occur when a person is talking into a phone or listening to a call.

An antenna connected to those components converts that motion into changes in a standard analog radio signal emitted by a cellular base station. This process essentially encodes speech patterns in reflected radio signals in a way that uses almost no power. To transmit speech, the phone uses vibrations from the device’s microphone to encode speech patterns in the reflected signals. To receive speech, it converts encoded radio signals into sound vibrations that are picked up by the phone’s speaker. In the prototype device, the user presses a button to switch between these two “transmitting” and “listening” modes.
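
Here is a toy Python simulation of the transmit path just described, a sketch rather than the UW team’s actual signal chain: the microphone’s motion directly modulates the antenna’s reflectivity, so speech rides on the reflected carrier without any analog-to-digital conversion. All numbers are illustrative:

```python
import math

# Toy sketch of analog backscatter: the microphone's vibration signal
# modulates how strongly the antenna reflects the base station's carrier.
# A single test tone stands in for speech; values are not from the prototype.

VOICE_HZ = 440.0          # assumed test tone standing in for speech
SAMPLE_RATE = 8_000       # samples per second for this simulation
N_SAMPLES = 8

incident_carrier = 1.0    # normalised amplitude arriving from the base station

for n in range(N_SAMPLES):
    t = n / SAMPLE_RATE
    voice = math.sin(2 * math.pi * VOICE_HZ * t)      # microphone motion
    reflectivity = 0.5 + 0.5 * voice                  # modulated reflection, 0..1
    reflected = reflectivity * incident_carrier       # what the base station sees
    print(f"t={t * 1e3:.3f} ms  reflected amplitude={reflected:.2f}")
```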

The new technology is detailed in a paper published July 1 in the Proceedings of the Association for Computing Machinery on Interactive, Mobile, Wearable and Ubiquitous Technologies.

Source: http://www.washington.edu/
AND
http://www.reuters.com/

How To Generate Any Cell Within The Patient’s Own Body

Researchers at The Ohio State University Wexner Medical Center and Ohio State’s College of Engineering have developed a new technology, Tissue Nanotransfection (TNT), that can generate any cell type of interest for treatment within the patient’s own body. This technology may be used to repair injured tissue or restore function of aging tissue, including organs, blood vessels and nerve cells.

“By using our novel nanochip technology (nanocomputer), injured or compromised organs can be replaced. We have shown that skin is a fertile land where we can grow the elements of any organ that is declining,” said Dr. Chandan Sen, director of Ohio State’s Center for Regenerative Medicine & Cell Based Therapies, who co-led the study with L. James Lee, professor of chemical and biomolecular engineering with Ohio State’s College of Engineering in collaboration with Ohio State’s Nanoscale Science and Engineering Center.

Researchers studied mice and pigs in these experiments. In the study, researchers were able to reprogram skin cells to become vascular cells in badly injured legs that lacked blood flow. Within one week, active blood vessels appeared in the injured leg, and by the second week, the leg was saved. In lab tests, this technology was also shown to reprogram skin cells in the live body into nerve cells that were injected into brain-injured mice to help them recover from stroke.

“This is difficult to imagine, but it is achievable, successfully working about 98 percent of the time. With this technology, we can convert skin cells into elements of any organ with just one touch. This process takes less than a second and is non-invasive, and then you’re off. The chip does not stay with you, and the reprogramming of the cell starts. Our technology keeps the cells in the body under immune surveillance, so immune suppression is not necessary,” said Sen, who also is executive director of Ohio State’s Comprehensive Wound Center.

Results of the regenerative medicine study have been published in the journal Nature Nanotechnology.

Source: https://news.osu.edu/

Building Brain-Inspired AI Supercomputing System

IBM (NYSE: IBM) and the U.S. Air Force Research Laboratory (AFRL) today announced they are collaborating on a first-of-a-kind brain-inspired supercomputing system powered by a 64-chip array of the IBM TrueNorth Neurosynaptic System. The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The system’s advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume the energy equivalent of a dim light bulb – a mere 10 watts.
IBM researchers believe the brain-inspired, neural network design of TrueNorth will be far more efficient for pattern recognition and integrated sensory processing than systems powered by conventional chips. AFRL is investigating applications of the system in embedded, mobile, autonomous settings where, today, size, weight and power (SWaP) are key limiting factors.

The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this “right-brain” perception capability of the system with the “left-brain” symbol-processing capabilities of conventional computer systems. The large scale of the system will enable both “data parallelism”, where multiple data sources can be run in parallel against the same neural network, and “model parallelism”, where independent neural networks form an ensemble that can be run in parallel on the same data.
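
As a loose, plain-Python illustration of those two modes (it does not use IBM’s TrueNorth toolchain; `classify` is a stand-in for a trained network):

```python
from concurrent.futures import ThreadPoolExecutor

# Plain-Python stand-ins for the two parallelism modes described above.
# `classify` represents a trained neural network making a prediction.

def classify(network_id: int, frame: str) -> str:
    return f"net{network_id}({frame})"

frames = ["frame_a", "frame_b", "frame_c", "frame_d"]

with ThreadPoolExecutor() as pool:
    # Data parallelism: one network, many data sources in parallel.
    data_parallel = list(pool.map(lambda f: classify(0, f), frames))

    # Model parallelism: an ensemble of networks, same data in parallel.
    model_parallel = list(pool.map(lambda n: classify(n, frames[0]), range(4)))

print(data_parallel)   # ['net0(frame_a)', 'net0(frame_b)', ...]
print(model_parallel)  # ['net0(frame_a)', 'net1(frame_a)', ...]
```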


“AFRL was the earliest adopter of TrueNorth for converting data into decisions,” said Daniel S. Goddard, director, information directorate, U.S. Air Force Research Lab. “The new neurosynaptic system will be used to enable new computing capabilities important to AFRL’s mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage.”

“The evolution of the IBM TrueNorth Neurosynaptic System is a solid proof point in our quest to lead the industry in AI hardware innovation,” said Dharmendra S. Modha, IBM Fellow, chief scientist, brain-inspired computing, IBM Research – Almaden. “Over the last six years, IBM has expanded the number of neurons per system from 256 to more than 64 million – an 800 percent annual increase over six years.”

Source: https://www-03.ibm.com/

All-Carbon Spin Transistor Is Quicker And Smaller

A researcher with the Erik Jonsson School of Engineering and Computer Science at UT Dallas has designed a novel computing system made solely from carbon that might one day replace the silicon transistors that power today’s electronic devices.

“The concept brings together an assortment of existing nanoscale technologies and combines them in a new way,” said Dr. Joseph S. Friedman, assistant professor of electrical and computer engineering at UT Dallas, who conducted much of the research while he was a doctoral student at Northwestern University.

The resulting all-carbon spin logic proposal, published by lead author Friedman and several collaborators in the June 5 edition of the online journal Nature Communications, is a computing system that Friedman believes could be made smaller than silicon transistors, with increased performance.

Today’s electronic devices are powered by transistors, which are tiny silicon structures that rely on negatively charged electrons moving through the silicon, forming an electric current. Transistors behave like switches, turning current on and off.

In addition to carrying a charge, electrons have another property called spin, which relates to their magnetic properties. In recent years, engineers have been investigating ways to exploit the spin characteristics of electrons to create a new class of transistors and devices called “spintronics.”

Friedman’s all-carbon, spintronic switch functions as a logic gate that relies on a basic tenet of electromagnetics: As an electric current moves through a wire, it creates a magnetic field that wraps around the wire. In addition, a magnetic field near a two-dimensional ribbon of carbon — called a graphene nanoribbon — affects the current flowing through the ribbon. In traditional, silicon-based computers, transistors cannot exploit this phenomenon. Instead, they are connected to one another by wires. The output from one transistor is connected by a wire to the input for the next transistor, and so on in a cascading fashion.
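
As a loose illustration of that cascading principle, here is a toy Python model in which each gate’s output current sets the state of the next graphene nanoribbon via its magnetic field. Treating each ribbon as an inverter (a control current suppressing conduction) is a simplification for the sketch, not the published device physics:

```python
# Toy model of cascaded spin logic: the current leaving one graphene
# nanoribbon generates a magnetic field that switches the next ribbon,
# so gates drive gates directly, without intermediate transistor wiring.
# The inverter behaviour below is an illustrative simplification.

def nanoribbon_output(control_current: int) -> int:
    # Field from the control wire switches the ribbon's conduction off.
    return 0 if control_current else 1

def cascade(input_current: int, stages: int) -> int:
    signal = input_current
    for _ in range(stages):
        signal = nanoribbon_output(signal)   # each output drives the next gate
    return signal

print(cascade(1, stages=1))  # -> 0 (inverted once)
print(cascade(1, stages=2))  # -> 1 (inverted twice)
```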

Source: http://www.utdallas.edu/

How To Harness Heat To Power Computers

One of the biggest problems with computers, dating to the invention of the first one, has been finding ways to keep them cool so that they don’t overheat or shut down. Instead of combating the heat, two University of Nebraska–Lincoln engineers have embraced it as an alternative energy source that would allow computing at ultra-high temperatures. Sidy Ndao, assistant professor of mechanical and materials engineering, said his research group’s development of a nano-thermal-mechanical device, or thermal diode, came after flipping around the question of how to better cool computers.

[Image: thermal diode]

“If you think about it, whatever you do with electricity you should (also) be able to do with heat, because they are similar in many ways,” Ndao said. “In principle, they are both energy carriers. If you could control heat, you could use it to do computing and avoid the problem of overheating.”
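
To make the analogy concrete, here is a toy Python model of a thermal diode, the device the team built: heat conducts well in one direction and poorly in the other, much as an electrical diode passes current asymmetrically. The conductance values are invented for illustration; the published device’s rectification behaviour differs:

```python
# Toy model of a thermal diode: asymmetric heat conduction, analogous to
# an electrical diode's asymmetric current flow. Values are assumptions.

FORWARD_CONDUCTANCE = 1.0     # W/K, hot side -> cold side (assumed)
REVERSE_CONDUCTANCE = 0.2     # W/K, cold side -> hot side (assumed)

def heat_flow(t_left_k: float, t_right_k: float) -> float:
    """Heat flow (W) from left to right; negative means right to left."""
    delta = t_left_k - t_right_k
    conductance = FORWARD_CONDUCTANCE if delta > 0 else REVERSE_CONDUCTANCE
    return conductance * delta

print(heat_flow(600.0, 300.0))   # forward bias: strong flow -> 300.0 W
print(heat_flow(300.0, 600.0))   # reverse bias: rectified   -> -60.0 W
```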

A paper Ndao co-authored with Mahmoud Elzouka, a graduate student in mechanical and materials engineering, was published in the March edition of Scientific Reports. In it, they documented their device working in temperatures that approached 630 degrees Fahrenheit (332 degrees Celsius).

Source: http://news.unl.edu/

Carbon Nanotubes Self-Assemble Into Tiny Transistors

Carbon nanotubes can be used to make very small electronic devices, but they are difficult to handle. University of Groningen (Netherlands) scientists, together with colleagues from the University of Wuppertal and IBM Zurich, have developed a method to select semiconducting nanotubes from a solution and make them self-assemble on a circuit of gold electrodes. The results look deceptively simple: a self-assembled transistor with nearly 100 percent purity and very high electron mobility. But it took ten years to get there. University of Groningen Professor of Photophysics and Optoelectronics Maria Antonietta Loi designed polymers which wrap themselves around specific carbon nanotubes in a solution of mixed tubes. Thiol side chains on the polymer bind the tubes to the gold electrodes, creating the resultant transistor.

[Image: polymer-wrapped nanotube]

‘In our previous work, we learned a lot about how polymers attach to specific carbon nanotubes,’ Loi explains. These nanotubes can be depicted as a rolled sheet of graphene, the two-dimensional form of carbon. ‘Depending on the way the sheets are rolled up, they have properties ranging from semiconducting to semi-metallic to metallic.’ Only the semiconducting tubes can be used to fabricate transistors, but the production process always results in a mixture.
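
A short sketch of the sorting problem the polymers solve. The rule of thumb that tubes whose chiral indices (n, m) satisfy (n − m) divisible by 3 are metallic or semi-metallic, and the rest semiconducting, is standard nanotube physics; the mixture below is invented for illustration:

```python
# A nanotube's chiral indices (n, m) set its electronic character.
# Rule of thumb: (n - m) % 3 == 0 -> metallic/semi-metallic; otherwise
# semiconducting. Only semiconducting tubes are useful for transistors,
# which is what the thiol-functionalised polymers select for.

def is_semiconducting(n: int, m: int) -> bool:
    return (n - m) % 3 != 0

mixture = [(6, 5), (9, 0), (7, 5), (10, 10), (8, 6), (12, 0)]  # invented

selected = [tube for tube in mixture if is_semiconducting(*tube)]
print(selected)   # -> [(6, 5), (7, 5), (8, 6)] end up on the electrodes
```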

‘We had the idea of using polymers with thiol side chains some time ago,’ says Loi. The idea was that, as sulphur binds to metals, it would direct polymer-wrapped nanotubes towards gold electrodes. While Loi was working on the problem, IBM even patented the concept. ‘But there was a big problem in the IBM work: the polymers with thiols also attached to metallic nanotubes and included them in the transistors, which ruined them.’

Loi’s solution was to reduce the thiol content of the polymers, with the assistance of polymer chemists from the University of Wuppertal. ‘What we have now shown is that this concept of bottom-up assembly works: by using polymers with a low concentration of thiols, we can selectively bring semiconducting nanotubes from a solution onto a circuit.’ The sulphur-gold bond is strong, so the nanotubes are firmly fixed: enough even to stay there after sonication of the transistor in organic solvents.

‘Over the last few years, we have created a library of polymers that select semiconducting nanotubes and developed a better understanding of how the structure and composition of the polymers influence which carbon nanotubes they select,’ says Loi. The result is a cheap and scalable production method for nanotube electronics. So what is the future for this technology? Loi: ‘It is difficult to predict whether the industry will develop this idea, but we are working on improvements, and this will eventually bring the idea closer to the market.’

The results were published in the journal Advanced Materials on 5 April.

Source: http://www.rug.nl/
AND
https://www.eurekalert.org/

‘Spray-On’ Memory for Paper, Fabric, Plastic

USB flash drives are already common accessories in offices and on college campuses. But thanks to the rise of printable electronics, digital storage devices like these may soon be everywhere – including on our groceries, pill bottles and even clothing.

Duke University researchers have brought us closer to a future of low-cost, flexible electronics by creating a new “spray-on” digital memory device using only an aerosol jet printer and nanoparticle inks. The device, which is analogous to a 4-bit flash drive, is the first fully printed digital memory that would be suitable for practical use in simple electronics such as environmental sensors or RFID tags. And because it is jet-printed at relatively low temperatures, it could be used to build programmable electronic devices on bendable materials like paper, plastic or fabric.


Duke University researchers have developed a new “spray-on” digital memory (upper left) that could be used to build programmable electronics on flexible materials like paper, plastic or fabric. They used LEDs to demonstrate a simple application.

“We have all of the parameters that would allow this to be used for a practical application, and we’ve even done our own little demonstration using LEDs,” said Duke graduate student Matthew Catenacci, who describes the device in a paper published online in the Journal of Electronic Materials. At the core of the new device, which is about the size of a postage stamp, is a new copper-nanowire-based printable material that is capable of storing digital information.

“Memory is kind of an abstract thing, but essentially it is a series of ones and zeros which you can use to encode information,” said Benjamin Wiley, an associate professor of chemistry at Duke and an author on the paper.
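
A minimal sketch of that idea for a 4-bit resistive memory like the printed device: each cell is written to a low- or high-resistance state and read back by thresholding. The resistance values are invented for illustration:

```python
# Toy 4-bit resistive memory: each cell stores a bit as a low- or
# high-resistance state, read back by thresholding. Values are assumed.

LOW_OHMS, HIGH_OHMS = 1e2, 1e6      # assumed ON/OFF resistance states
THRESHOLD_OHMS = 1e4

def write(bits):
    """Map each bit to a cell resistance (1 -> low, 0 -> high)."""
    return [LOW_OHMS if b else HIGH_OHMS for b in bits]

def read(cells):
    return [1 if r < THRESHOLD_OHMS else 0 for r in cells]

cells = write([1, 0, 1, 1])          # store the 4-bit value 0b1011
print(read(cells))                   # -> [1, 0, 1, 1]
```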

Source: https://today.duke.edu/

A Brain-computer Interface To Combat The Rise of AI

Elon Musk is attempting to combat the rise of artificial intelligence (AI) with the launch of his latest venture, brain-computer interface company Neuralink. Little is known about the startup, aside from what has been revealed in a Wall Street Journal report, but sources have described it as “neural lace” technology that the company is engineering to allow humans to communicate seamlessly with technology without the need for an actual, physical interface. The company has also been registered in California as a medical research entity because Neuralink’s initial focus will be on using the described interface to help with the symptoms of chronic conditions, from epilepsy to depression. This is said to be similar to how deep brain stimulation controlled by an implant helps Matt Eagles, who has Parkinson’s, manage his symptoms effectively.

This is far from the first time Musk has shown an interest in merging man and machine. At a Tesla launch in Dubai earlier this year, the billionaire spoke about the need for humans to become cyborgs if we are to survive the rise of artificial intelligence.


“Over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” CNBC reported him as saying at the time. “It’s mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output.” Transhumanism, the enhancement of humanity’s capabilities through science and technology, is already a living reality for many people, to varying degrees. Documentary-maker Rob Spence replaced one of his own eyes with a video camera in 2008; amputees are using prosthetics connected to their own nerves and controlled using electrical signals from the brain; implants are helping tetraplegics regain independence through the BrainGate project.

Former director of the United States Defense Advanced Research Projects Agency (DARPA), Arati Prabhakar, comments: “From my perspective, which embraces a wide swathe of research disciplines, it seems clear that we humans are on a path to a more symbiotic union with our machines.”

Source: http://www.wired.co.uk/