Artificial Intelligence Tracks Everybody In The Crowd In Real Time

Imagine artificial intelligence that can pick you out in a crowd and then track your every move. Japanese firm Hitachi's new imaging system locks on to at least 100 different characteristics of an individual, including gender, age, hair style, clothes, and mannerisms. Hitachi says it provides real-time tracking and monitoring of crowded areas.


“Until now, we have needed a lot of security guards and people to review security camera footage. We developed this AI software in the hope it would help them do just that,” says Tomokazu Murakami, a Hitachi researcher.

The system can help spot a suspicious individual or find a missing child, the makers say. So, an eyewitness could provide a limited description, with the AI software quickly scanning its database for a match.
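That matching step can be pictured as a simple attribute comparison. The sketch below is hypothetical: the attribute names and the scoring rule are illustrative assumptions, not Hitachi's actual design.

```python
# Hypothetical sketch: match a partial eyewitness description against stored
# attribute profiles of tracked individuals. Attribute names and scoring are
# illustrative assumptions, not Hitachi's actual system.

def match_score(description, profile):
    """Count how many of the described attributes a tracked profile matches."""
    return sum(1 for key, value in description.items() if profile.get(key) == value)

profiles = [
    {"id": 1, "gender": "male", "age_band": "30s", "hair": "short", "top": "red"},
    {"id": 2, "gender": "female", "age_band": "20s", "hair": "long", "top": "blue"},
    {"id": 3, "gender": "male", "age_band": "30s", "hair": "short", "top": "blue"},
]

# A limited eyewitness description: only three attributes are known.
description = {"gender": "male", "age_band": "30s", "top": "blue"}

best = max(profiles, key=lambda p: match_score(description, p))
print(best["id"])  # profile 3 matches all three described attributes
```

A real system would score probabilistically over many more attributes, but the principle — ranking candidates by how well they fit a partial description — is the same.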
“In Japan, the demand for such technology is increasing because of the Tokyo 2020 Olympics, but we’re developing it in a way that allows it to be used in many different places, such as train stations, stadiums, and even shopping malls,” comments Murakami.

High-speed tracking of individuals such as this will undoubtedly have its critics. But as Japan prepares to host the 2020 Olympics, Hitachi insists its system can contribute to public safety and security.


A Brain-computer Interface To Combat The Rise of AI

Elon Musk is attempting to combat the rise of artificial intelligence (AI) with the launch of his latest venture, brain-computer interface company Neuralink. Little is known about the startup beyond what has been revealed in a Wall Street Journal report, but sources have described a “neural lace” technology that the company is engineering to let humans communicate seamlessly with technology, without the need for an actual, physical interface. The company has also been registered in California as a medical research entity, because Neuralink’s initial focus will be on using the interface to treat the symptoms of chronic conditions, from epilepsy to depression. This is said to be similar to how deep brain stimulation controlled by an implant helps Matt Eagles, who has Parkinson’s, manage his symptoms effectively.

This is far from the first time Musk has shown an interest in merging man and machine. At a Tesla launch in Dubai earlier this year, the billionaire spoke about the need for humans to become cyborgs if we are to survive the rise of artificial intelligence.


“Over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” CNBC reported him as saying at the time. “It’s mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output.” Transhumanism, the enhancement of humanity’s capabilities through science and technology, is already a living reality for many people, to varying degrees. Documentary-maker Rob Spence replaced one of his own eyes with a video camera in 2008; amputees are using prosthetics connected to their own nerves and controlled using electrical signals from the brain; and implants are helping tetraplegics regain independence through the BrainGate project.

Former director of the United States Defense Advanced Research Projects Agency (DARPA) Arati Prabhakar comments: “From my perspective, which embraces a wide swathe of research disciplines, it seems clear that we humans are on a path to a more symbiotic union with our machines.”


Artificial Intelligence Writes Code By Looting

Artificial intelligence (AI) has taught itself to create its own encryption and produced its own universal ‘language’. Now it is writing its own code, using techniques similar to those of human programmers. A neural network called DeepCoder, developed by Microsoft and University of Cambridge computer scientists, has learnt how to write programs without prior knowledge of code. DeepCoder solved basic challenges of the kind set by programming competitions. This kind of approach could make it much easier for people to build simple programs without knowing how to write code.


“All of a sudden people could be so much more productive,” says Armando Solar-Lezama at the Massachusetts Institute of Technology, who was not involved in the work. “They could build systems that it [would be] impossible to build before.”

Ultimately, the approach could allow non-coders to simply describe an idea for a program and let the system build it, says Marc Brockschmidt, one of DeepCoder’s creators at Microsoft Research in Cambridge, UK. DeepCoder uses a technique called program synthesis: creating new programs by piecing together lines of code taken from existing software – just as a programmer might. Given a list of inputs and outputs for each code fragment, DeepCoder learned which pieces of code were needed to achieve the desired result overall.
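A toy illustration of program synthesis by enumeration, loosely in DeepCoder's spirit: search over compositions of known code fragments until one is consistent with the given input-output examples. The four-function DSL below is an invented stand-in, not DeepCoder's actual instruction set.

```python
# Toy program synthesis by enumeration: find a sequence of code fragments
# whose composition maps every given input to its given output.
from itertools import product

# A tiny, invented DSL of reusable fragments operating on lists of integers.
DSL = {
    "sort": sorted,
    "reverse": lambda xs: list(reversed(xs)),
    "double": lambda xs: [2 * x for x in xs],
    "drop_first": lambda xs: xs[1:],
}

def synthesize(examples, max_len=3):
    """Return the shortest fragment sequence consistent with all I/O examples."""
    for length in range(1, max_len + 1):
        for names in product(DSL, repeat=length):
            def run(xs, names=names):
                for name in names:
                    xs = DSL[name](xs)
                return xs
            if all(run(i) == o for i, o in examples):
                return list(names)
    return None

# Examples implying "sort, then reverse" (i.e. sort in descending order).
examples = [([3, 1, 2], [3, 2, 1]), ([5, 4, 9], [9, 5, 4])]
print(synthesize(examples))  # ['sort', 'reverse']
```

Brute-force enumeration like this explodes combinatorially; DeepCoder's contribution is learning, from many examples, which fragments are likely to be needed, so the search explores promising compositions first.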


How To Produce Music Hits With The Help Of Artificial Intelligence

Sony is developing a new software system containing algorithms that create songs based on existing music and assist with their arrangement and performance.

It sounds like The Beatles... but it wasn’t written by the Fab Four. ‘Daddy’s Car’ was created by Sony’s artificial intelligence system Flow Machines, with the aim of sounding like Lennon and McCartney. It was written using algorithms at Sony’s Computer Science Lab in Paris.


“What the algorithm will do is always try to cope with your constraints – with what you are imposing on the system, on the score, the lead sheet – and the algorithm will always try to repair, if you want, or generate stuff that is at the same time compatible with what you imposed and in the same style as the training song set,” says computer scientist Pierre Roy.

Each song’s starting point is the machine’s database of sheet music from 13,000 existing tracks. Users choose a title whose sound or feel they like, and the machine does the rest. Professional musician Benoît Carré recorded ‘Daddy’s Car’, along with another track, ‘Mister Shadow’. He insists the music isn’t devoid of feeling, despite being artificially created.
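Flow Machines' models are proprietary, but the general idea of learning a style from a corpus of existing songs can be sketched with a simple Markov chain over note transitions. Everything below, the training melodies included, is an illustrative assumption.

```python
# Minimal sketch of style learning: record which note tends to follow which
# in the "training songs", then generate a new melody from those transitions.
# Flow Machines' actual models (and its constraint handling) are far more
# sophisticated; this only illustrates the learn-then-generate idea.
import random

def learn_transitions(melodies):
    table = {}
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, rng):
    melody = [start]
    while len(melody) < length and melody[-1] in table:
        melody.append(rng.choice(table[melody[-1]]))
    return melody

training_songs = [["C", "E", "G", "E", "C"], ["C", "E", "G", "C"]]
table = learn_transitions(training_songs)
print(generate(table, "C", 8, random.Random(0)))
```

Every transition in the generated melody also occurs somewhere in the training set, which is a crude version of staying "in the same style as the training song set".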

“We can find a soul in any type of music, including that generated by a computer. 1980s music was generated by a synthesiser. Music is what the person makes of it; it doesn’t exist alone. Each song is a sheet of music, with a lot of things around it,” comments Benoît Carré, composer and member of the band Lilicub.

After the song is created, musicians can write their own parts to broaden the sound. The British rock star Peter Hook doesn’t like the idea: “Nearly every song I’ve written, in New Order and outside of New Order, has been with somebody else, and that is the beauty of it. Writing with a machine – what feedback, what buzz, are you going to get from a machine? All machines do is drive you crazy. You’re forever turning them off and on. So not for me, mate. I’ll stick with people.”

Sony wants to launch albums with songs created entirely by algorithm – one of them based on Beatles music. It says the algorithms ensure songs are unique and avoid plagiarism, but admits the issue of songwriting credits could be tricky to determine.


Implanted Neural Nanocomputers To Boost Failing Human Brains

As neural implants become more and more advanced, researchers think humans may be able to overcome diseases and defects like strokes and dementia with the help of nanocomputers in our brains.

With the forecasted inevitable rise of the machines – be they robots or artificial intelligences – humans are beginning to realize that they should work to maintain superiority. There are a few ideas about how we should do it, but perhaps the most promising option is to go full cyborg. (What could possibly go wrong?) On Monday, a company called Kernel announced that it would be leading the charge.


The idea is something straight out of dorm room pot-smoking sessions. What if, the exhaling sophomore muses, we put computers inside our brains? Unfortunately for prospective stoner-scientists, the actual creation of such a device — a functioning, cognitive-enhancing neural implant — has long evaded bioengineers and neuroscientists alike.

Kernel thinks it’s past time to make real progress. Theodore Berger runs the University of Southern California’s Center for Neural Engineering, and he caught the eye of Bryan Johnson, a self-made multimillionaire who’s obsessed with augmenting human intelligence. With Johnson’s entrepreneurial money and Berger’s scientific brain, the two launched Kernel.
For now, Berger and Johnson are focusing on achievable goals with immediate impact. They are creating a human neural implant that can mitigate cognitive decline in those who suffer from Alzheimer’s and the after-effects of strokes, concussions, and other brain injuries or neurological diseases. If Kernel is able to replicate even the 10 percent cognitive improvement that Berger demonstrated in monkeys, those who suffer from these cognitive disorders will be that much more capable of forming memories and living out enjoyable lives.


Artificial Intelligence Mimics Biological Hierarchy

New research from the University of Wyoming and INRIA (France) explains why so many biological networks, including the human brain (a network of neurons), exhibit a hierarchical structure, and it may improve attempts to create artificial intelligence.

The evolution of hierarchy – a simple system of ranking – in biological networks may arise because of the costs associated with network connections.

Like large businesses, many biological networks – such as gene, protein, neural, and metabolic networks – are hierarchically organised. This means they have separate units that can each be repeatedly divided into smaller and smaller subunits. For example, the human brain has separate areas for motor control and tactile processing, and each of these areas consists of sub-regions that govern different parts of the body.

But why do so many biological networks evolve to be hierarchical? The results of the study suggest that hierarchy evolves not because it produces more efficient networks, but instead because hierarchically wired networks have fewer connections. This is because connections in biological networks are expensive – they have to be built, housed, maintained, etc. – and there is therefore an evolutionary pressure to reduce the number of connections.
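The paper's core argument can be sketched in a few lines: if fitness penalizes each connection, a sparser (hierarchically wirable) network beats an equally performant dense one. The numbers and the cost coefficient below are illustrative assumptions, not the paper's parameters.

```python
# Sketch of selection under a connection cost: fitness is task performance
# minus a price paid per wire. Values here are illustrative assumptions.

def fitness(performance, n_connections, cost_per_connection=0.01):
    return performance - cost_per_connection * n_connections

# Two networks that solve the task equally well...
dense = {"performance": 0.95, "connections": 120}   # fully connected wiring
sparse = {"performance": 0.95, "connections": 40}   # modular / hierarchical wiring

f_dense = fitness(dense["performance"], dense["connections"])
f_sparse = fitness(sparse["performance"], sparse["connections"])
print(f_sparse > f_dense)  # True: selection favors the cheaper wiring
```

With no connection cost (`cost_per_connection=0`) the two networks tie, which mirrors the study's finding that the cost term, not raw efficiency, is what drives the evolution of hierarchy.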
“The findings not only explain why biological networks are hierarchical; they might also give an explanation for why many man-made systems, such as the Internet and road systems, are also hierarchical,” comments Jeff Clune, an author of the paper.

The study has been published in PLOS Computational Biology.


Artificial Intelligence: The Rise Of The Machines

In a milestone for artificial intelligence, a computer has beaten a human champion at a strategy game that requires “intuition” rather than brute processing power to prevail, its makers said Wednesday. Dubbed AlphaGo, the system honed its own skills through a process of trial and error, playing millions of games against itself until it was battle-ready, and surprised even its creators with its prowess.


“AlphaGo won five-nil, and it was stronger than perhaps we were expecting,” said Demis Hassabis, the chief executive of Google DeepMind, a British artificial intelligence (AI) company.

A computer defeating a professional human player at the 3,000-year-old Chinese board game known as Go was thought to be about a decade off. The clean-sweep victory over three-time European Go champion Fan Hui “signifies a major step forward in one of the great challenges in the development of artificial intelligence – that of game-playing,” the British Go Association said in a statement.

The two-player game is described as perhaps the most complex ever designed, with more configurations possible than there are atoms in the Universe, Hassabis says. Players take turns placing stones on a board, trying to surround and capture the opponent’s stones, with the aim of controlling more than 50 percent of the board. There are hundreds of places where a player can place the first stone, black or white, with hundreds of ways in which the opponent can respond to each of these moves, and hundreds of possible responses to each of those in turn.
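The "more configurations than atoms" claim is easy to sanity-check: each of the 19x19 = 361 points on a Go board can be empty, black, or white, so 3^361 is an upper bound on board configurations, against a commonly cited estimate of roughly 10^80 atoms in the observable Universe.

```python
# Back-of-the-envelope check: an upper bound on Go board configurations
# (3 states per point, 361 points) versus ~10^80 atoms in the observable
# Universe.
upper_bound = 3 ** 361
atoms = 10 ** 80
print(upper_bound > atoms)    # True
print(len(str(upper_bound)))  # 173 -- i.e. roughly 10^172
```

Not all of those configurations are legal positions, but even the legal subset dwarfs 10^80, which is why Go cannot be cracked by brute-force search the way simpler games can.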


Nanocomputers That Imitate The Human Brain

Making a nanocomputer that learns and remembers like a human brain is a daunting challenge. The complex organ has 86 billion neurons and trillions of connections – or synapses – that can grow stronger or weaker over time. But now scientists from Tsinghua University (China) report in ACS’ journal Nano Letters the development of a first-of-its-kind synthetic synapse that mimics the plasticity of the real thing, bringing us one step closer to human-like artificial intelligence.


While the brain still holds many secrets, one thing we do know is that the flexibility, or plasticity, of neuronal synapses is a critical feature. In the synapse, many factors, including how many signaling molecules get released and the timing of release, can change. This mutability allows neurons to encode memories, learn and heal themselves. In recent years, researchers have been building artificial neurons and synapses with some success but without the flexibility needed for learning. Tian-Ling Ren and colleagues set out to address that challenge.

The researchers created an artificial synapse out of aluminum oxide and twisted bilayer graphene. By applying different electric voltages to the system, they found they could control the reaction intensity of the receiving “neuron.” The team says their novel dynamic system could aid in the development of biology-inspired electronics capable of learning and self-healing.
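As a loose software analogy (an assumption on my part, not the paper's device physics), that voltage-tunable response can be pictured as a bounded synaptic weight that strengthens under positive pulses and weakens under negative ones:

```python
# Illustrative plasticity model: a synaptic "weight" potentiated or depressed
# by voltage pulses, clipped to a physical range. This is an analogy for the
# kind of tunable response the graphene/aluminum-oxide synapse exhibits, not
# a model of the actual device.

def apply_pulse(weight, voltage, rate=0.1, w_min=0.0, w_max=1.0):
    """Positive pulses strengthen the synapse, negative ones weaken it."""
    weight += rate * voltage
    return min(max(weight, w_min), w_max)

w = 0.5
for _ in range(3):
    w = apply_pulse(w, +1.0)   # three potentiating pulses
print(round(w, 2))  # 0.8
w = apply_pulse(w, -1.0)       # one depressing pulse
print(round(w, 2))  # 0.7
```

The key property being mimicked is history dependence: the synapse's current strength encodes the pulses it has received, which is what lets networks of such devices store learned associations.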


Artificial Synapses Operate Image Classification

In what marks a significant step forward for artificial intelligence, researchers at UC Santa Barbara have demonstrated the functionality of a simple artificial neural circuit. For the first time, a circuit of about 100 artificial synapses was proved to perform a simple version of a typical human task: image classification.

“It’s a small, but important step,” said Dmitri Strukov, a professor of electrical and computer engineering. With time and further progress, the circuitry may eventually be expanded and scaled to approach something like the human brain’s, which has 10^15 (one quadrillion) synaptic connections.

For all its errors and potential for faultiness, the human brain remains a model of computational power and efficiency for engineers like Strukov and his colleagues Mirko Prezioso, Farnood Merrikh-Bayat, Brian Hoskins and Gina Adam. That’s because the brain can accomplish in a fraction of a second certain functions that computers would require far more time and energy to perform.

What are these functions? Well, you’re performing some of them right now. As you read this, your brain is making countless split-second decisions about the letters and symbols you see, classifying their shapes and relative positions to each other and deriving different levels of meaning through many channels of context, in as little time as it takes you to scan over this print. Change the font, or even the orientation of the letters, and it’s likely you would still be able to read this and derive the same meaning.


In the researchers’ demonstration, the circuit implementing the rudimentary artificial neural network was able to successfully classify three letters (“z”, “v” and “n”) by their images, each letter stylized in different ways or saturated with “noise”. In a process similar to how we humans pick our friends out from a crowd, or find the right key from a ring of similar keys, the simple neural circuitry was able to correctly classify the simple images.
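A minimal software caricature of that demonstration (the 3x3 pixel patterns and the scoring are my assumptions, not the paper's exact setup): score a noisy binary image against a stored weight pattern for each letter, the role the memristor crossbar plays in analog hardware, and take the best match.

```python
# Toy single-layer classifier for 3x3 binary images of "z", "v" and "n".
# Pixels are mapped to {-1, +1} and scored against stored patterns; the
# highest dot product wins, so one flipped "noise" pixel rarely changes
# the label. The patterns are illustrative assumptions.

TEMPLATES = {
    "z": [1, 1, 1,  0, 1, 0,  1, 1, 1],
    "v": [1, 0, 1,  1, 0, 1,  0, 1, 0],
    "n": [1, 0, 1,  1, 1, 1,  1, 0, 1],
}

def bipolar(img):
    return [2 * p - 1 for p in img]  # map {0, 1} -> {-1, +1}

def classify(img):
    x = bipolar(img)
    scores = {label: sum(w * v for w, v in zip(bipolar(t), x))
              for label, t in TEMPLATES.items()}
    return max(scores, key=scores.get)

noisy_z = [1, 1, 1,  0, 0, 0,  1, 1, 1]   # "z" with its centre pixel flipped
print(classify(noisy_z))  # z
```

In the actual chip the weights live in memristor conductances and are learned in situ rather than stored as fixed templates, but the classification principle — weighted sums followed by a winner-take-all decision — is the same.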

“While the circuit was very small compared to practical networks, it is big enough to prove the concept of practicality,” said Merrikh-Bayat. According to Gina Adam, as interest grows in the technology, so will research momentum.

“And, as more solutions to the technological challenges are proposed, the technology will be able to make it to the market sooner,” she said.

The researchers’ findings are published in the journal Nature.


Computers That Learn Just As The Brain Does

Scientists working towards mapping and modelling the human brain have taken a first step by implanting a simplified mouse brain inside a virtual body. This virtual mouse, they say, could one day replace live mice in lab testing, letting researchers perform mock experiments with the same degree of accuracy. When certain stimuli are applied to the virtual mouse’s whiskers and skin, for example, the corresponding parts of its brain are activated.



“That allows us, at least in a simplified way, to have muscles and senses distributed on the body – like touch is distributed across the entire body surface – and simple models of a peripheral nervous system that would allow us to control muscles, and then interface between the brain and these other parts, so that we get basically the whole animal reconstructed,” explains neurorobotics scientist Marc-Oliver Gewaltig (École Polytechnique Fédérale de Lausanne, EPFL), part of the Human Brain Project (HBP) in Switzerland.
Scientists around the world have mapped the position of the mouse brain’s 75 million neurons and the connections between different regions. The virtual brain currently consists of just 200,000 neurons, though this will increase along with computing power. Gewaltig says applying the same meticulous methods to the human brain could lead to computer processors that learn just as the brain does – in effect, artificial intelligence.
“If you look at the neurorobotics platform, if you want to control robots in a similar way as organisms control their bodies, that’s also a form of artificial intelligence, and this is probably where we’ll first produce visible outcomes and results,” he added. The EU-funded Human Brain Project is scheduled to run until 2023. Among its ambitions, the project’s scientists hope to map diseases of the brain, to help diagnose people objectively and develop new, truly personalised therapies.

Stephen Hawking: Highly Intelligent Machines, The “Worst Mistake In History”

Dismissing the implications of highly intelligent machines could be humankind’s “worst mistake in history”, write astrophysicist Stephen Hawking, computer scientist Stuart Russell, and physicists Max Tegmark and Frank Wilczek in the Independent. “Self-aware” machines have received the Hollywood treatment in the Johnny Depp film Transcendence, but the subject deserves serious consideration, they say.

Successfully creating artificial intelligence would be “the biggest event in human history“, they write, and the possible benefits for everyday human life are enormous. There could come a time, however, when machines outpace human achievement. If and when that day arrives, they wonder, will the best interest of humans still factor into their calculations?
“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” they write. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

And what are we humans doing to address these concerns, they ask. Nothing.

“All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks,” they conclude.

A while back, we wondered about the implications of machine journalists. But maybe we should just be thankful that at least something will be around to write long-form essays on the last days of humankind.


Internet Computer Teaching Itself Everything

Computer scientists from the University of Washington (UW) and the Allen Institute for Artificial Intelligence in Seattle have created the first fully automated computer program that teaches itself everything there is to know about any visual concept. Called Learning Everything about Anything, or LEVAN, the program searches millions of books and images on the Web to learn all possible variations of a concept, then displays the results to users as a comprehensive, browsable list of images, helping them explore and understand topics quickly and in great detail.

“It is all about discovering associations between textual and visual data,” said Ali Farhadi, a UW assistant professor of computer science and engineering. “The program learns to tightly couple rich sets of phrases with pixels in images. This means that it can recognize instances of specific concepts when it sees them.”

The research team will present the project and a related paper this month at the Computer Vision and Pattern Recognition annual conference in Columbus, Ohio.