Category Archives: Uncategorized

Early-Stage Detection Of Alzheimer’s In The Blood

Two major studies with promising antibodies have recently failed – possibly because the antibodies were administered too late. A new test for very early detection gives rise to hope. Using current techniques, Alzheimer’s disease, the most frequent cause of dementia, can only be detected once the typical plaques have formed in the brain. At this point, therapy seems no longer possible. However, the first changes caused by Alzheimer’s take place at the protein level up to 20 years earlier. A two-tier method developed at Ruhr-Universität Bochum (RUB) can help detect the disease at a much earlier stage. The researchers from Bochum published their report in the March 2019 edition of the journal “Alzheimer’s and Dementia: Diagnosis, Assessment and Disease Monitoring”.

“This has paved the way for early-stage therapy approaches, where the as yet inefficient drugs on which we had pinned our hopes may prove effective,” says Professor Klaus Gerwert from the Department of Biophysics at RUB.

In Alzheimer’s patients, the amyloid beta protein folds incorrectly due to pathological changes long before the first symptoms occur. A team of researchers headed by Klaus Gerwert had previously succeeded in diagnosing this misfolding using a simple blood test, allowing the disease to be detected approximately eight years before the first clinical symptoms appear. However, the test was not yet suitable for clinical applications: it detected 71 per cent of Alzheimer’s cases in symptomless stages, but it also produced false positive diagnoses for nine per cent of the study participants. In order to increase the number of correctly identified Alzheimer’s cases and to reduce the number of false positives, the researchers invested considerable effort in optimising the test.

As a result, they have now introduced the two-tier diagnostic method. First, they use the original blood test to identify high-risk individuals. They then add a dementia-specific biomarker, the tau protein, to run further tests on those participants whose Alzheimer’s diagnosis was positive in the first step. If both biomarkers show a positive result, there is a high likelihood of Alzheimer’s disease. “Through the combination of both analyses, 87 of 100 Alzheimer’s patients were correctly identified in our study,” summarises Klaus Gerwert. “And we reduced the number of false positive diagnoses in healthy subjects to 3 of 100.” The second analysis is carried out in cerebrospinal fluid that is extracted from the spinal cord.
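
The two-tier logic can be summarised in a few lines of code. The following is a minimal sketch in Python; the function names, score scales and thresholds are hypothetical placeholders, and only the combination rule (the tau test is run for participants flagged by the blood test, and Alzheimer’s is indicated only when both biomarkers are positive) follows the article.

```python
# Illustrative sketch of the two-tier screening logic described above.
# Biomarker names, score scales and thresholds are hypothetical; only the
# decision rule (both tests must be positive) follows the article.

def blood_amyloid_positive(amyloid_misfolding_score: float,
                           threshold: float = 0.5) -> bool:
    """Tier 1: blood test for misfolded amyloid beta."""
    return amyloid_misfolding_score >= threshold

def csf_tau_positive(tau_level: float, threshold: float = 300.0) -> bool:
    """Tier 2: tau biomarker measured in cerebrospinal fluid."""
    return tau_level >= threshold

def two_tier_diagnosis(amyloid_score: float, tau_level: float) -> str:
    # Tier 2 is only run for participants flagged by the blood test.
    if not blood_amyloid_positive(amyloid_score):
        return "low risk"
    if csf_tau_positive(tau_level):
        return "high likelihood of Alzheimer's disease"
    return "blood test positive, tau negative - monitor"

if __name__ == "__main__":
    print(two_tier_diagnosis(amyloid_score=0.7, tau_level=420.0))
```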

“Now, new clinical studies with test participants in very early stages of the disease can be launched,” points out Gerwert. He is hoping that the existing therapeutic antibodies will still have an effect. “Recently, two major promising studies, of Crenezumab and Aducanumab, have failed – not least because it had probably already been too late by the time therapy was taken up. The new test opens up a new therapy window.”

Source: https://news.rub.de/

Exoskeletons Assist Individuals With Spinal Cord Injury

(from Inverse.com) Green Lantern’s ring, Wonder Woman’s bracelets, Captain America’s shield, and, of course, Batman’s batsuit: these were the super-gadgets I dreamed about as a child. Thirty years later, as National Superhero Day approaches, I find myself designing components of my own supersuits.


I didn’t fully realise this until a few months ago. On that day, my childhood dreams were at once destroyed and fulfilled. Standing in a line, I noticed that everyone was focused on their smartphone screens. Suddenly it hit me: I already had Sword of Omens-style superpowers. With my smartphone, I can see video of faraway events and text my friends to meet up. Billions of people now have what used to be considered superpowers.

But what about the physical superpowers? I wanted those, too — like superhuman endurance or strength. Those may not be too far behind. I’m working on them in Vanderbilt’s Center for Rehabilitation Engineering and Assistive Technology. Humanity has begun to enter the age of wearable exoskeletons and exosuits that offer support and strength to people’s bodies. Over the past five years, wearable exoskeletons that assist and aid movement have begun to shift out of research labs and into public use. They’re still early versions, and the science is still emerging, but they include the first of several FDA-approved exoskeletons to assist individuals with spinal cord injury or after stroke, as well as exoskeletons that help keep workers safe and reduce the fatigue of physically demanding jobs.

Source: https://engineering.vanderbilt.edu/
AND
https://www.inverse.com/

How To Create See-through Human Organs

Researchers in Germany have created transparent human organs using a new technology that could pave the way to printing three-dimensional body parts such as kidneys for transplants. The transparent organ is scanned by lasers in a microscope, which allows researchers to capture its entire structure, including the blood vessels and every single cell in its specific location. Using this blueprint, the researchers print out the scaffold of the organ. They then load the 3D printer with stem cells, which act as “ink” and are injected into the correct positions, making the organ functional.

The team led by Ali Erturk at Ludwig Maximilians University in Munich have developed a technique that uses a solvent to make organs such as the brain and kidneys transparent. While 3D printing is already used widely to produce spare parts for industry, Erturk said the development marks a step forward for 3D printing in the medical field. Until now 3D-printed organs lacked detailed cellular structures because they were based on images from computer tomography or MRI machines, he said.


3D transparent mouse

“We can see where every single cell is located in transparent human organs. And then we can actually replicate exactly the same, using 3D bioprinting technology, to make a real functional organ,” he said. “Therefore, I believe we are much closer to a real human organ for the first time now.”

Erturk’s team plan to start by creating a bioprinted pancreas over the next 2-3 years and also hope to develop a kidney within 5-6 years. The researchers will first test to see whether animals can survive with the bioprinted organs and could start clinical trials within 5-10 years, he said.

Source: https://www.reuters.com/

AI Detects Alzheimer’s Six Years In Advance

Using a common type of brain scan, researchers programmed a machine-learning algorithm to diagnose early-stage Alzheimer’s disease about six years before a clinical diagnosis is made – potentially giving doctors a chance to intervene with treatment. No cure exists for Alzheimer’s disease, but promising drugs have emerged in recent years that can help stem the condition’s progression. However, these treatments must be administered early in the course of the disease in order to do any good. This race against the clock has inspired scientists to search for ways to diagnose the condition earlier.

A PET scan of the brain of a person with Alzheimer’s disease

“One of the difficulties with Alzheimer’s disease is that by the time all the clinical symptoms manifest and we can make a definitive diagnosis, too many neurons have died, making it essentially irreversible,” says Jae Ho Sohn, MD, MS, a resident in the Department of Radiology and Biomedical Imaging at UC San Francisco.

 

Positron emission tomography (PET) scans, which measure the levels of specific molecules, like glucose, in the brain, have been investigated as one tool to help diagnose Alzheimer’s disease before the symptoms become severe. Glucose is the primary source of fuel for brain cells, and the more active a cell is, the more glucose it uses. As brain cells become diseased and die, they use less and, eventually, no glucose.

Other types of PET scans look for proteins specifically related to Alzheimer’s disease, but glucose PET scans are much more common and cheaper, especially in smaller health care facilities and developing countries, because they’re also used for cancer staging.

Radiologists have used these scans to try to detect Alzheimer’s by looking for reduced glucose levels across the brain, especially in the frontal and parietal lobes. However, because the disease is a slowly progressing disorder, the changes in glucose are very subtle and difficult to spot with the naked eye. To solve this problem, Sohn applied a machine learning algorithm to PET scans to help diagnose early-stage Alzheimer’s disease more reliably.

“This is an ideal application of deep learning because it is particularly strong at finding very subtle but diffuse processes. Human radiologists are really strong at identifying tiny focal findings like a brain tumor, but we struggle at detecting more slow, global changes,” says Sohn. “Given the strength of deep learning in this type of application, especially compared to humans, it seemed like a natural application.”

To train the algorithm, Sohn fed it images from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), a massive public dataset of PET scans from patients who were eventually diagnosed with either Alzheimer’s disease, mild cognitive impairment or no disorder. Eventually, the algorithm began to learn on its own which features are important for predicting the diagnosis of Alzheimer’s disease and which are not.

Once the algorithm was trained on 1,921 scans, the scientists tested it on two novel datasets to evaluate its performance. The first was a set of 188 images that came from the same ADNI database but had not been presented to the algorithm yet. The second was an entirely novel set of scans from 40 patients who had presented to the UCSF Memory and Aging Center with possible cognitive impairment.
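
The study itself trained a deep learning model directly on the PET images; the short Python sketch below only mirrors the train / two-held-out-test-set structure described above, using synthetic feature vectors and a generic off-the-shelf classifier as stand-ins. Every name and number in the code is a placeholder, not data from the study.

```python
# Minimal sketch of the train / held-out-test workflow described above.
# Synthetic "scan features" and a generic classifier stand in for the
# study's deep learning model and real FDG-PET volumes.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_fake_scans(n, n_features=64):
    """Hypothetical feature vectors summarising regional glucose uptake."""
    X = rng.normal(size=(n, n_features))
    y = rng.integers(0, 2, size=n)     # 1 = later Alzheimer's diagnosis
    X[y == 1, :8] -= 0.5               # subtly lower "uptake" for positives
    return X, y

X_train, y_train = make_fake_scans(1921)   # training scans (ADNI)
X_test1, y_test1 = make_fake_scans(188)    # held-out ADNI scans
X_test2, y_test2 = make_fake_scans(40)     # independent clinical set

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

for name, X, y in [("ADNI held-out", X_test1, y_test1),
                   ("independent set", X_test2, y_test2)]:
    print(f"{name}: accuracy = {accuracy_score(y, model.predict(X)):.2f}")
```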

The algorithm performed with flying colors. It correctly identified 92 percent of patients who developed Alzheimer’s disease in the first test set and 98 percent in the second test set. What’s more, it made these correct predictions on average 75.8 months (a little more than six years) before the patient received their final diagnosis.

Source: https://www.ucsf.edu/

Hypothalamic Stem Cells May Reverse The Human Ageing Process

A study published in the journal Nature by Dongsheng Cai of the Albert Einstein College of Medicine, New York, describes how stem cells that determine how fast the body ages may help reverse the human ageing process. These stem cells reside in the hypothalamus, a pea-sized part of the brain containing a bundle of neurons, which is responsible for a wide array of processes related to growth, development, digestion, reproduction and metabolism. As the human body ages, these neural stem cells begin to deteriorate, which accelerates the ageing process. So, if you stop these stem cells from wearing away, you may be able to stop the body from ageing.

The lab tests were conducted on mice, where it was observed that once the mice were 10 months or older, the stem cells began to deplete (earlier than the usual time for stem cells to deteriorate in mice). By the time these mice turned two years and older, the stem cells had started to disappear, and the animals died. To prove their hypothesis that stem cell deterioration truly accelerates the ageing process, the scientists ‘artificially disrupted’ the stem cells in middle-aged mice and observed that this significantly increased their rate of ageing.

Once the hypothesis that stem cell depletion leads to accelerated ageing was proved, the scientists injected hypothalamic stem cells into the brains of older and middle-aged mice, where a marked slowdown in their ageing was observed. This happens because the hypothalamic stem cells release molecules called microRNAs (miRNAs), which play an important role in the regulation of gene expression. These miRNAs (which are bundled inside tiny particles called exosomes) released by the stem cells were then also injected into the cerebrospinal fluid of mice.

After this experiment, the ageing process slowed down significantly: tissue analysis and behavioral tests of the animals’ muscle endurance, coordination, social behavior and cognitive ability all showed signs of anti-ageing. Scientists are now looking to explore the study further and analyze other factors related to the microRNAs that might be responsible for this anti-ageing effect.

Source: https://www.nature.com/
AND
https://in.mashable.com/

DNA Folds Into A Smart Nanocapsule For Drug Delivery

Researchers from University of Jyväskylä and Aalto University in Finland have developed a customized DNA nanostructure that can perform a predefined task in human body-like conditions. To do so, the team built a capsule-like carrier that opens and closes according to the pH level of the surrounding solution. The nanocapsule can be loaded—or packed—with a variety of cargo, closed for delivery and opened again through a subtle pH increase.

The pH-responsive DNA origami nanocapsule (blue) loaded with an enzyme (yellow color, high pH). 

The function of the DNA nanocapsule is based on pH-responsive DNA residues. To make this happen, the team designed a capsule-like DNA origami structure functionalized with pH-responsive DNA strands. Such dynamic DNA nanodesigns are often controlled by the simple hydrogen bonding of two complementary DNA sequences. Here, one half of the capsule was equipped with specific double-stranded DNA domains that could further form a DNA triple helix — in other words, a helical structure comprised of three, not just two, DNA strands — by attaching to a suitable single-stranded DNA in the other half.
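
As a toy illustration of this pH-controlled latch, the capsule can be pictured as being held shut by triplex-forming strands that are stable at low pH and release as the pH rises. The transition point and steepness in the sketch below are invented for illustration, not values measured in the study.

```python
# Toy model of the pH-controlled closing described above: a logistic
# estimate of the fraction of capsules held shut by the triplex "latch".
# The midpoint pH and steepness are illustrative assumptions only.

import math

def fraction_closed(ph: float, ph_mid: float = 7.2, steepness: float = 8.0) -> float:
    """Estimated fraction of capsules in the closed (triplex-formed) state."""
    return 1.0 / (1.0 + math.exp(steepness * (ph - ph_mid)))

for ph in (6.0, 6.8, 7.2, 7.6, 8.0):
    print(f"pH {ph:.1f}: ~{fraction_closed(ph):.0%} of capsules closed")
```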

Source: https://www.jyu.fi/

AI Closer To The Efficiency Of The Brain

Computers and artificial intelligence continue to usher in major changes in the way people shop. It is relatively easy to train a robot’s brain to create a shopping list, but what about ensuring that the robotic shopper can easily tell the difference between the thousands of products in the store?

Purdue University researchers and experts in brain-inspired computing think part of the answer may be found in magnets. The researchers have developed a process to use magnetics with brain-like networks to program and teach devices such as personal robots, self-driving cars and drones to better generalize about different objects.

“Our stochastic neural networks try to mimic certain activities of the human brain and compute through a connection of neurons and synapses,” said Kaushik Roy, Purdue’s Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering. “This allows the computer brain to not only store information but also to generalize well about objects and then make inferences to perform better at distinguishing between objects.”

The stochastic switching behavior of these junctions is representative of the sigmoid switching behavior of a neuron, and such magnetic tunnel junctions can also be used to store synaptic weights. Roy presented the technology during the annual German Physical Sciences Conference earlier this month in Germany. The work also appeared in Frontiers in Neuroscience.

The switching dynamics of a nano-magnet are similar to the electrical dynamics of neurons, and magnetic tunnel junction devices show switching behavior that is stochastic in nature. The Purdue group proposed a new stochastic training algorithm for synapses using spike timing dependent plasticity (STDP), termed Stochastic-STDP, which has been experimentally observed in the rat hippocampus. The inherent stochastic behavior of the magnet was used to switch the magnetization states stochastically, based on the proposed algorithm, for learning different object representations. “The big advantage with the magnet technology we have developed is that it is very energy-efficient,” said Roy, who leads Purdue’s Center for Brain-inspired Computing Enabling Autonomous Intelligence. “We have created a simpler network that represents the neurons and synapses while compressing the amount of memory and energy needed to perform functions similar to brain computations.”
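
A rough Python sketch of the idea of stochastic STDP with binary, magnet-like synapses is given below. The time constant, probability shape and update rule are assumptions made for illustration; they are not the Purdue group’s actual algorithm or device model.

```python
# Illustrative sketch of a stochastic STDP update with a binary synapse,
# loosely following the idea above: the synapse (a 0/1 "magnetisation
# state") switches probabilistically, with the switching probability set
# by the pre/post spike timing difference. Constants and the probability
# shape are assumptions, not the published Purdue model.

import math
import random

def switch_probability(delta_t_ms: float, tau_ms: float = 20.0) -> float:
    """Probability of a switching event for a given spike timing difference."""
    return math.exp(-abs(delta_t_ms) / tau_ms)

def stochastic_stdp_update(weight: int, delta_t_ms: float) -> int:
    """weight is a binary state (0 or 1), like an MTJ magnetisation."""
    if random.random() >= switch_probability(delta_t_ms):
        return weight                    # no switching event this time
    # Pre-before-post (delta_t > 0): potentiate; otherwise depress.
    return 1 if delta_t_ms > 0 else 0

if __name__ == "__main__":
    random.seed(0)
    w = 0
    for dt in (4.0, -12.0, 2.0, 30.0, -3.0):
        w = stochastic_stdp_update(w, dt)
        print(f"delta_t = {dt:+.1f} ms -> weight state {w}")
```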

Source: https://www.purdue.edu/

Nanomachines To Deliver Cancer Drugs to Hard-to-reach Areas

In a recent study in mice, researchers found a way to deliver specific drugs to parts of the body that are exceptionally difficult to access. Their Y-shaped block catiomer (YBC) binds with certain therapeutic materials to form a package just 18 nanometers wide. The package is less than one-fifth the size of those produced in previous studies, so it can pass through much smaller gaps. This allows YBCs to slip through tight barriers in cancers of the brain or pancreas.

The fight against cancer is fought on many fronts. One promising field is gene therapy, which targets genetic causes of diseases to reduce their effect. The idea is to inject a nucleic acid-based drug into the bloodstream — typically small interfering RNA (siRNA) — which binds to a specific problem-causing gene and deactivates it. However, siRNA is very fragile and needs to be protected within a nanoparticle or it breaks down before reaching its target.

“siRNA can switch off specific gene expressions that may cause harm. They are the next generation of biopharmaceuticals that could treat various intractable diseases, including cancer,” explained Associate Professor Kanjiro Miyata of the University of Tokyo, who jointly supervised the study. “However, siRNA is easily eliminated from the body by enzymatic degradation or excretion. Clearly a new delivery method was called for.”

Presently, nanoparticles are about 100 nanometers wide, one-thousandth the thickness of paper. This is small enough to grant them access to the liver through the leaky blood vessel wall. However, some cancers are harder to reach: pancreatic cancer is surrounded by fibrous tissues, and cancers in the brain by tightly connected vascular cells. In both cases the gaps available are much smaller than 100 nanometers. Miyata and colleagues created an siRNA carrier small enough to slip through these gaps in the tissues.
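
The size argument can be made concrete with a back-of-the-envelope comparison. In the sketch below the gap widths assigned to each barrier are invented placeholders, not measurements from the study; only the 100-nanometer and 18-nanometer carrier sizes come from the text.

```python
# Back-of-the-envelope illustration of the size argument above.
# Barrier gap widths are invented placeholders, not values from the study.

barriers_nm = {
    "leaky liver vessel wall": 150,    # hypothetical: wide enough for ~100 nm particles
    "fibrous pancreatic tissue": 40,   # hypothetical: much tighter
    "brain vasculature": 25,           # hypothetical: tighter still
}

for carrier, size_nm in [("conventional nanoparticle", 100), ("YBC package", 18)]:
    reachable = [name for name, gap in barriers_nm.items() if size_nm < gap]
    print(f"{carrier} ({size_nm} nm) can slip through: {reachable}")
```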

“We used polymers to fabricate a small and stable nanomachine for the delivery of siRNA drugs to cancer tissues with a tight access barrier,” said Miyata. “The shape and length of the component polymers are precisely adjusted to bind to specific siRNAs, so it is configurable.”

Source: https://www.u-tokyo.ac.jp/

How To Create Speech From Brain Signals

“In my head, I churn over every sentence ten times, delete a word, add an adjective, and learn my text by heart, paragraph by paragraph,” wrote Jean-Dominique Bauby in his memoir, “The Diving Bell and the Butterfly.” In the book, Mr. Bauby, a journalist and editor, recalled his life before and after a paralyzing stroke that left him virtually unable to move a muscle; he tapped out the book letter by letter, by blinking an eyelid.

Thousands of people are reduced to similarly painstaking means of communication as a result of injuries suffered in accidents or combat, of strokes, or of neurodegenerative disorders such as amyotrophic lateral sclerosis, or A.L.S., that disable the ability to speak.

Now, scientists are reporting that they have developed a virtual prosthetic voice, a system that decodes the brain’s vocal intentions and translates them into mostly understandable speech, with no need to move a muscle, even those in the mouth. (The physicist and author Stephen Hawking used a muscle in his cheek to type keyboard characters, which a computer synthesized into speech.)

“It’s formidable work, and it moves us up another level toward restoring speech” by decoding brain signals, said Dr. Anthony Ritaccio, a neurologist and neuroscientist at the Mayo Clinic in Jacksonville, Fla., who was not a member of the research group.

The new system, described on Wednesday in the journal Nature, deciphers the brain’s motor commands guiding vocal movement during speech — the tap of the tongue, the narrowing of the lips — and generates intelligible sentences that approximate a speaker’s natural cadence. Experts said the new work represented a “proof of principle,” a preview of what may be possible after further experimentation and refinement. The system was tested on people who speak normally; it has not been tested in people whose neurological conditions or injuries, such as common strokes, could make the decoding difficult or impossible. For the new trial, scientists at the University of California, San Francisco, and U.C. Berkeley recruited five people who were in the hospital being evaluated for epilepsy surgery.

Many people with epilepsy do poorly on medication and opt to undergo brain surgery. Before operating, doctors must first locate the “hot spot” in each person’s brain where the seizures originate; this is done with electrodes that are placed in the brain, or on its surface, and listen for telltale electrical storms. Pinpointing this location can take weeks. In the interim, patients go through their days with electrodes implanted in or near brain regions that are involved in movement and auditory signaling. These patients often consent to additional experiments that piggyback on those implants.

Five such patients at U.C.S.F. agreed to test the virtual voice generator. Each had been implanted with one or two electrode arrays: stamp-size pads, containing hundreds of tiny electrodes, that were placed on the surface of the brain. As each participant recited hundreds of sentences, the electrodes recorded the firing patterns of neurons in the motor cortex. The researchers associated those patterns with the subtle movements of the patient’s lips, tongue, larynx and jaw that occur during natural speech. The team then translated those movements into spoken sentences.
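
The two-stage decoding idea (brain activity to articulator movements, then movements to sound) can be sketched with ordinary regression models. The example below uses synthetic data and plain ridge regressions as stand-ins for the study’s recurrent neural networks; all dimensions and names are illustrative assumptions.

```python
# Minimal sketch of two-stage speech decoding: stage 1 maps cortical
# activity to articulator kinematics, stage 2 maps kinematics to acoustic
# features. Synthetic data and linear models stand in for the real study.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_samples, n_electrodes, n_articulators, n_acoustic = 2000, 256, 30, 32

neural = rng.normal(size=(n_samples, n_electrodes))            # recorded activity
A = 0.1 * rng.normal(size=(n_electrodes, n_articulators))
kinematics = neural @ A + 0.05 * rng.normal(size=(n_samples, n_articulators))
B = 0.1 * rng.normal(size=(n_articulators, n_acoustic))
acoustics = kinematics @ B + 0.05 * rng.normal(size=(n_samples, n_acoustic))

stage1 = Ridge(alpha=1.0).fit(neural, kinematics)                 # brain -> movement
stage2 = Ridge(alpha=1.0).fit(stage1.predict(neural), acoustics)  # movement -> sound

decoded = stage2.predict(stage1.predict(neural))
print(f"mean squared error of decoded acoustic features: "
      f"{np.mean((decoded - acoustics) ** 2):.4f}")
```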

Source: https://www.nytimes.com/

Robots Sort Recycling, Detect If An Object Is Paper, Metal Or Plastic

Every year trash companies sift through an estimated 68 million tons of recycling, which is the weight equivalent of more than 30 million cars. A key step in the process happens on fast-moving conveyor belts, where workers have to sort items into categories like paper, plastic and glass. Such jobs are dull, dirty, and often unsafe, especially in facilities where workers also have to remove normal trash from the mix. With that in mind, a team led by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a robotic system that can detect if an object is paper, metal, or plastic.

The team’s “RoCycle” system includes a soft Teflon hand that uses tactile sensors on its fingertips to detect an object’s size and stiffness. Compatible with any robotic arm, RoCycle was found to be 85 percent accurate at detecting materials when stationary, and 63 percent accurate on a simulated conveyor belt. (Its most common error was identifying paper-covered metal tins as paper, which the team says would be improved by adding more sensors along the contact surface.)
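
A hypothetical sketch of the kind of classification this enables is shown below: fingertip readings of an object’s size and stiffness feed a simple classifier that assigns paper, plastic or metal. The feature values, class statistics and model are invented for illustration and are not RoCycle’s actual pipeline.

```python
# Hypothetical sketch: classify recyclables from tactile features (size
# in millimetres, stiffness on a 0-1 scale). Data and model are invented
# placeholders, not RoCycle's actual sensing or algorithm.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

def sample(material, n):
    """Draw n synthetic (size, stiffness) readings for one material."""
    size, stiffness = {"paper": (60, 0.2), "plastic": (80, 0.5), "metal": (70, 0.9)}[material]
    feats = np.column_stack([rng.normal(size, 10, n), rng.normal(stiffness, 0.08, n)])
    return feats, [material] * n

X, y = [], []
for m in ("paper", "plastic", "metal"):
    feats, labels = sample(m, 100)
    X.append(feats)
    y.extend(labels)
X = np.vstack(X)

clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)).fit(X, y)
print(clf.predict([[65, 0.25], [72, 0.88]]))   # likely: ['paper' 'metal']
```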


“Our robot’s sensorized skin provides haptic feedback that allows it to differentiate between a wide range of objects, from the rigid to the squishy,” says MIT Professor Daniela Rus, senior author on a related paper that will be presented in April at the IEEE International Conference on Soft Robotics (RoboSoft) in Seoul, South Korea. “Computer vision alone will not be able to solve the problem of giving machines human-like perception, so being able to use tactile input is of vital importance.”

Developed in collaboration with Yale University, RoCycle directly demonstrates the limits of sight-based sorting: it can reliably distinguish between two visually similar Starbucks cups, one made of paper and one made of plastic, that would give vision systems trouble.

Source: http://news.mit.edu/