Tag Archives: machine-learning

Artificial Intelligence Revolutionizes Farming

Researchers at MIT have used AI to improve the flavor of basil. It’s part of a trend that is seeing artificial intelligence revolutionize farming.
What makes basil so good? In some cases, it’s AI. Machine learning has been used to create basil plants that are extra-delicious. While we sadly cannot report firsthand on the herb’s taste, the effort reflects a broader trend of using data science and machine learning to improve agriculture.

The researchers behind the AI-optimized basil used machine learning to determine the growing conditions that would maximize the concentration of the volatile compounds responsible for basil’s flavor. The basil was grown in hydroponic units within modified shipping containers in Middleton, Massachusetts. Temperature, light, humidity, and other environmental factors inside the containers could be controlled automatically. The researchers tested the taste of the plants by looking for certain compounds using gas chromatography and mass spectrometry. And they fed the resulting data into machine-learning algorithms developed at MIT and a company called Cognizant.
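The search the researchers ran is easy to picture in miniature: try combinations of growing conditions, measure the flavor compounds each produces, and keep the recipe that scores best. The toy sketch below is purely illustrative — the `flavor_response` function and every number in it are invented stand-ins, not the study’s data or model:

```python
# Hypothetical sketch of "search growing conditions for maximum flavor".
# flavor_response is an invented stand-in for the measured concentration
# of volatile compounds; the real study fits ML models to lab data.
import itertools

def flavor_response(light_hours, temp_c):
    # Invented: rewards more light, penalizes distance from 24 °C.
    return light_hours * 2.0 - abs(temp_c - 24) * 1.5

# Candidate settings the climate-controlled containers could be set to.
light_options = [12, 16, 20, 24]   # hours of light per day
temp_options = [18, 21, 24, 27]    # degrees Celsius

# Exhaustively evaluate the small condition grid and keep the best recipe.
best_condition = max(
    itertools.product(light_options, temp_options),
    key=lambda c: flavor_response(*c),
)
```

In this invented setup the winning recipe is maximum light at 24 °C — which at least echoes the study’s real (and counterintuitive) finding that round-the-clock light helped flavor. The actual work replaces the toy scoring function with models trained on chromatography measurements.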

The research showed, counterintuitively, that exposing plants to light 24 hours a day generated the best taste. The research group plans to study how the technology might improve the disease-fighting capabilities of plants as well as how different flora may respond to the effects of climate change.

“We’re really interested in building networked tools that can take a plant’s experience, its phenotype, the set of stresses it encounters, and its genetics, and digitize that to allow us to understand the plant-environment interaction,” said Caleb Harper, head of the MIT Media Lab’s OpenAg group, in a press release. His lab worked with colleagues from the University of Texas at Austin on the paper.

The idea of using machine learning to optimize plant yield and properties is rapidly taking off in agriculture. Last year, Wageningen University in the Netherlands organized an “Autonomous Greenhouse” contest, in which different teams competed to develop algorithms that increased the yield of cucumber plants while minimizing the resources required. They worked with greenhouses where a variety of factors are controlled by computer systems.

The study appeared in the journal PLOS ONE.

Source: https://www.technologyreview.com/

This Person Does Not Exist

With the help of artificial intelligence, you can manipulate video of public figures to say whatever you like — or now, create images of people’s faces that don’t even exist. You can see this in action on a website called thispersondoesnotexist.com. It uses an algorithm to spit out a single image of a person’s face, and for the most part, they look frighteningly real. Hit refresh in your browser, and the algorithm will generate a new face. Again, these people do not exist.

The website is the creation of software engineer Phillip Wang and uses a new AI algorithm called StyleGAN, which was developed by researchers at Nvidia. GAN, or Generative Adversarial Network, is a concept within machine learning that aims to generate images indistinguishable from real ones. You can train GANs on human faces, as well as bedrooms, cars, and cats — and, of course, generate new images of them.
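The adversarial idea behind a GAN — a generator adjusting itself until its output fools a critic that scores “realness” — can be caricatured in a few lines. The sketch below is a drastic simplification with every number invented: the critic here is a fixed scoring function, whereas in a real GAN (including StyleGAN) the discriminator is itself a neural network trained in alternation with the generator:

```python
# Toy caricature of adversarial training: a one-parameter "generator"
# climbs toward whatever the critic scores as most realistic.
import random
random.seed(0)

REAL_MEAN = 5.0  # toy "real" data: samples clustered near 5.0

def discriminator(x):
    # Fixed stand-in critic: higher score = looks more like real data.
    # (In a true GAN this critic is learned, not hand-written.)
    return -abs(x - REAL_MEAN)

def generator_sample(mean):
    # The "generator": turns noise into a sample, controlled by a
    # single learnable parameter (its mean).
    return random.gauss(mean, 0.5)

def avg_realness(mean, n=500):
    # Average critic score over many generated samples.
    return sum(discriminator(generator_sample(mean)) for _ in range(n)) / n

# Training loop reduced to 1-D hill climbing: nudge the generator's
# parameter in whichever direction raises the critic's average score.
gen_mean = 0.0
for _ in range(120):
    if avg_realness(gen_mean + 0.1) > avg_realness(gen_mean):
        gen_mean += 0.1
    else:
        gen_mean -= 0.1
```

After training, `gen_mean` has drifted close to the real distribution’s mean: the generator’s output has become hard to tell from the “real” data, which is the GAN objective in its simplest possible form.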

Wang explained that he created the site to raise awareness of the algorithm, and chose faces “because our brains are sensitive to that kind of image.” He added that it costs $150 a month to rent the server, as he needs a good amount of graphical power to run the website.

“It also started off as a personal agenda, mainly because none of my friends seem to believe this AI phenomenon, and I wanted to convince them,” Wang said. “This was the most shocking presentation I could send them. I then posted it on Facebook and it went viral from there.”

“I think eventually, given enough data, a big enough neural [network] can be teased into dreaming up many different kinds of scenarios,” Wang added.

Source: https://thispersondoesnotexist.com/

Want to Sound Like Barack Obama?

For your hair, there are wigs and hairstylists; for your skin, there are permanent and removable tattoos; for your eyes, there are contact lenses that disguise the shape of your pupils. In short, there’s a plethora of tools people can use if they want to give themselves a makeover—except for one of their signature features: their voice.

Sure, a Darth Vader voice-changing mask would do something about it, but for people who want to sound like a celebrity or a person of the opposite sex, look no further than Boston-based startup Modulate.

Founded in August 2017 by two MIT grads, this self-funded startup is using machine learning to change your voice as you speak. This could be a celebrity’s voice (like Barack Obama’s), the voice of a game character or even a totally custom voice. With potential applications in the gaming and movie industries, Modulate has launched with a free online demo that allows users to play with the service.

The cool thing about Modulate is that the software doesn’t simply disguise your voice; it does something far more radical: it converts a person’s speech into somebody else’s vocal cords, changing the very identity of someone’s speech while keeping cadence and word choice intact. As a result, you sound like you, but with somebody else’s voice.

Source: https://www.americaninno.com/

AI Robot Presents TV News In China

China’s state news agency Xinhua this week introduced the newest members of its newsroom: AI anchors who will report “tirelessly” all day every day, from anywhere in the country. Chinese viewers were greeted with a digital version of a regular Xinhua news anchor named Qiu Hao. The anchor, wearing a red tie and pin-striped suit, nods his head in emphasis, blinking and raising his eyebrows slightly.

“Not only can I accompany you 24 hours a day, 365 days a year. I can be endlessly copied and present at different scenes to bring you the news,” he says. Xinhua also presented an English-speaking AI, based on another presenter, who adds: “The development of the media industry calls for continuous innovation and deep integration with the international advanced technologies … I look forward to bringing you brand new news experiences.”

Created by Xinhua and the Chinese search engine Sogou, the anchors were developed through machine learning to simulate the voice, facial movements, and gestures of real-life broadcasters, to present “a lifelike image instead of a cold robot,” according to Xinhua.

Source: https://www.theguardian.com/

New Materials For New Processors

Computers used to take up entire rooms. Today, a two-pound laptop can slide effortlessly into a backpack. But that wouldn’t have been possible without the creation of new, smaller processors — which are only possible with the innovation of new materials. But how do materials scientists actually invent new materials? Through experimentation, explains Sanket Deshmukh, an assistant professor in the chemical engineering department of Virginia Tech, whose team’s recently published computational research might vastly improve the efficiency and cost savings of the material design process.

Deshmukh’s lab, the Computational Design of Hybrid Materials lab, is devoted to understanding and simulating the ways molecules move and interact — crucial to creating a new material. In recent years, materials scientists have employed machine learning, a powerful subset of artificial intelligence, to accelerate the discovery of new materials through computer simulations. Deshmukh and his team have recently published research in the Journal of Physical Chemistry Letters demonstrating a novel machine learning framework that trains “on the fly,” meaning it instantaneously processes data and learns from it to accelerate the development of computational models. Traditionally, the development of computational models is “carried out manually via trial-and-error approach, which is very expensive and inefficient, and is a labor-intensive task,” Deshmukh explained.

“This novel framework not only uses the machine learning in a unique fashion for the first time,” Deshmukh said, “but it also dramatically accelerates the development of accurate computational models of materials.” “We train the machine learning model in a ‘reverse’ fashion by using the properties of a model obtained from molecular dynamics simulations as an input for the machine learning model, and using the input parameters used in molecular dynamics simulations as an output for the machine learning model,” said Karteek Bejagam, a post-doctoral researcher in Deshmukh’s lab and one of the lead authors of the study.
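The “reverse” direction Bejagam describes can be illustrated with a toy example. Below, an invented `toy_simulation` function stands in for a molecular dynamics run (parameter in, measured property out); the model is then fitted the other way around — property as input, parameter as output — so a desired property maps straight back to the parameter that produces it. Everything here is a hypothetical stand-in: the real framework uses molecular dynamics data and a neural network, not a one-line function and a least-squares line:

```python
# Hypothetical sketch of "reverse" training: learn property -> parameter
# from forward simulation runs of parameter -> property.

def toy_simulation(epsilon):
    # Invented stand-in for a molecular dynamics run: maps a force-field
    # parameter to a measured property (e.g. a density-like number).
    return 0.8 * epsilon + 0.1

# Forward runs: sample parameters, record the property each produces.
params = [0.5 + 0.1 * i for i in range(20)]
props = [toy_simulation(p) for p in params]

# "Reverse" fit: least-squares line with property as the INPUT and
# parameter as the OUTPUT -- the inversion of the usual direction.
n = len(props)
mean_x = sum(props) / n
mean_y = sum(params) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(props, params)) / \
        sum((x - mean_x) ** 2 for x in props)
intercept = mean_y - slope * mean_x

def predict_parameter(target_property):
    # Given a desired property, read off the parameter to simulate with.
    return slope * target_property + intercept
```

The payoff is the last function: instead of guessing parameters and re-running simulations until the property looks right (the manual trial-and-error Deshmukh describes), the reverse model proposes the parameter directly.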

This new framework allows researchers to optimize computational models at much faster speeds, until they reach the desired properties of a new material.

Source: https://vtnews.vt.edu/

How To Recreate Memories Of Faces From Brain Data

A new technique developed by neuroscientists at the University of Toronto can reconstruct images of what people perceive based on their brain activity. The technique developed by Dan Nemrodov, a postdoctoral fellow in Assistant Professor Adrian Nestor’s lab at U of T Scarborough, is able to digitally reconstruct images seen by test subjects based on electroencephalography (EEG) data.

“When we see something, our brain creates a mental percept, which is essentially a mental impression of that thing. We were able to capture this percept using EEG to get a direct illustration of what’s happening in the brain during this process,” says Nemrodov.

For the study, test subjects hooked up to EEG equipment were shown images of faces. Their brain activity was recorded and then used to digitally recreate the image in the subject’s mind using a technique based on machine learning algorithms. It’s not the first time researchers have been able to reconstruct images based on visual stimuli using neuroimaging techniques. The current method was pioneered by Nestor, who successfully reconstructed facial images from functional magnetic resonance imaging (fMRI) data in the past, but this is the first time EEG has been used.
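The decoding step can be caricatured very simply: record the brain’s response to known faces, then match a new response to the most similar stored one and read its face back out. The sketch below is entirely invented — faces are two-number feature vectors and the “brain” is a noisy linear function — whereas the actual study works from rich EEG recordings with learned mappings:

```python
# Invented toy sketch of reconstruction from brain data via
# nearest-neighbour matching of "EEG" responses.
import random
random.seed(1)

def eeg_response(face):
    # Stand-in for the brain: a fixed mixing of face features plus noise.
    return [face[0] + face[1] + random.gauss(0, 0.05),
            face[0] - face[1] + random.gauss(0, 0.05)]

# Three known "faces", each a tiny feature vector.
faces = {"A": [1.0, 0.0], "B": [0.0, 1.0], "C": [1.0, 1.0]}

# Training phase: record one response per known face.
training = {name: eeg_response(f) for name, f in faces.items()}

def reconstruct(new_response):
    # Find the stored response most similar to the new one (smallest
    # squared distance) and return that face's features.
    best = min(training,
               key=lambda n: sum((a - b) ** 2
                                 for a, b in zip(training[n], new_response)))
    return faces[best]

# A fresh viewing of face "C" should decode back to C's features.
decoded = reconstruct(eeg_response(faces["C"]))
```

Real reconstruction goes further than picking a stored face — it regenerates an image pixel by pixel from the neural signal — but the core move is the same: a learned correspondence between brain responses and visual features, run backwards.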

Source: https://www.reuters.com/

Human Internal Verbalizations Understood Instantly By Computers

MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud. The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words. The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.
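The “correlate particular signals with particular words” step is, at heart, a classifier. As a purely hypothetical sketch — the two-number “neuromuscular” feature vectors and the tiny vocabulary below are invented, and the real system uses multi-channel electrode data with neural networks — a new utterance can be matched to the closest per-word average:

```python
# Toy signal-to-word classifier: invented feature vectors for a tiny
# vocabulary, classified by nearest per-word average (centroid).

# Several example "recordings" of each silently-spoken word.
recordings = {
    "yes": [[0.9, 0.1], [1.1, 0.0], [1.0, 0.2]],
    "no":  [[0.1, 0.9], [0.0, 1.1], [0.2, 1.0]],
}

# Training: average each word's example signals into one template.
centroids = {
    word: [sum(col) / len(col) for col in zip(*examples)]
    for word, examples in recordings.items()
}

def transcribe(signal):
    # Pick the word whose template is nearest (smallest squared
    # distance) to the incoming signal.
    return min(centroids,
               key=lambda w: sum((a - b) ** 2
                                 for a, b in zip(centroids[w], signal)))

# An incoming signal close to the "yes" examples decodes as "yes".
word = transcribe([0.95, 0.15])
```

With a transcriber like this on one side and bone-conduction audio on the other, the full silent loop described below — pose a question without speaking, hear the answer without anyone noticing — becomes a matter of wiring the two together.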

The device is thus part of a complete silent-computing system that lets the user undetectably pose and receive answers to difficult computational problems. In one of the researchers’ experiments, for instance, subjects used the system to silently report opponents’ moves in a chess game and just as silently receive computer-recommended responses.

“The motivation for this was to build an IA device — an intelligence-augmentation device,” says Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?” “We basically can’t live without our cellphones, our digital devices,” adds Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself. So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”

Source: http://news.mit.edu/