Neuron

Wrap Your Mind Around Neural Networks

Artificial Intelligence is playing an ever-increasing role in the lives of civilized nations, though most citizens probably don’t realize it. It’s now commonplace to speak with a computer when calling a business. Facebook is becoming scarily accurate at recognizing faces in uploaded photos. Physical interaction with smartphones is becoming a thing of the past… with Apple’s Siri and Google Speech, it’s slowly but surely becoming easier to simply talk to your phone and tell it what to do than to type or touch an icon. Try this if you haven’t before: if you have an Android phone, say “OK Google”, followed by “Lumos”. It’s magic!

Advertisements for products we’re interested in pop up on our social media accounts as if something were reading our minds. Truth is, something is reading our minds… though it’s hard to pin down exactly what that something is. An advertisement might pop up for something we want, even though we never realized we wanted it until we saw it. This is not coincidental; it stems from an AI algorithm.

At the heart of many of these AI applications lies a process known as Deep Learning. There has been a lot of talk about Deep Learning lately, not only here on Hackaday, but all over the interwebs. And like most things related to AI, it can be a bit complicated and difficult to understand without a strong background in computer science.

If you’re familiar with my quantum theory articles, you’ll know that I like to take complicated subjects, strip away the complication as best I can, and explain them in a way that anyone can understand. The goal of this article is to apply a similar approach to Deep Learning. If neural networks make you cross-eyed and machine learning gives you nightmares, read on. You’ll see that “Deep Learning” sounds like a daunting subject, but is really just a $20 term for something whose underpinnings are relatively simple.

Machine Learning

When we program a machine to perform a task, we write the instructions and the machine performs them. For example: LED on… LED off. There is no need for the machine to know the expected outcome after it has completed the instructions, and no reason for it to know whether the LED is actually on or off; it just does what you told it to do. With machine learning, this process is flipped: we tell the machine the outcome we want, and the machine ‘learns’ the instructions to get there. There are several ways to do this, but let’s focus on an easy example:

Early neural network from MIT

If I were to ask you to make a little robot that can guide itself to a target, a simple way to do it would be to put the robot and the target on an XY Cartesian plane, and then program the robot to travel so many units on the X axis and so many units on the Y axis. This straightforward method has the robot simply carrying out instructions without ever knowing where the target is. It works only when you know the coordinates of both the starting point and the target; if either changes, the approach fails.
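
To make the contrast concrete, here is a minimal Python sketch of that dead-reckoning approach; the coordinates and the move_x/move_y helpers are hypothetical stand-ins for real motor commands:

```python
# A hypothetical sketch of dead reckoning: the displacement is computed
# once from known coordinates, and the robot executes it blindly.
START = (0.0, 0.0)
TARGET = (3.0, 4.0)

def move_x(units):
    print(f"move {units} units along X")

def move_y(units):
    print(f"move {units} units along Y")

# The "program" is just two fixed instructions.
move_x(TARGET[0] - START[0])
move_y(TARGET[1] - START[1])
# If START or TARGET ever changes, these instructions are simply wrong.
```

Those two move calls encode the answer itself rather than a way of finding it, which is exactly why they break the moment the coordinates change.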

Machine Learning allows us to deal with changing coordinates. We tell our robot to find the target, and let it figure out, or learn, its own instructions for getting there. One way to do this is to have the robot measure the distance to the target and then move in a random direction. It recalculates the distance, moves back to where it started, and records the measurement. Repeating this…
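
That loop can be sketched as a simple random search: probe a few random directions, keep the one that shrinks the distance the most, and repeat. This is only an illustration, assuming the robot can measure its distance to the target but cannot read the target’s coordinates; the step size and probe count are invented for the example:

```python
# A minimal sketch of the learning loop described above.
import math
import random

TARGET = (3.0, 4.0)
STEP = 0.5

def distance(pos):
    # The robot's only sensor: how far away is the target?
    return math.hypot(TARGET[0] - pos[0], TARGET[1] - pos[1])

pos = [0.0, 0.0]
while distance(pos) > STEP:
    best_angle, best_dist = None, distance(pos)
    for _ in range(16):  # probe a random direction...
        angle = random.uniform(0.0, 2.0 * math.pi)
        trial = (pos[0] + STEP * math.cos(angle),
                 pos[1] + STEP * math.sin(angle))
        d = distance(trial)  # ...measure, then "move back" to the start
        if d < best_dist:
            best_angle, best_dist = angle, d
    if best_angle is not None:  # commit to the best direction found
        pos[0] += STEP * math.cos(best_angle)
        pos[1] += STEP * math.sin(best_angle)

print("reached target, final distance:", round(distance(pos), 3))
```

This is a crude form of hill climbing. Real machine-learning systems replace the random probing with gradient information, but the spirit is the same: measure the outcome, adjust, and repeat until you get there.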

Tech firms searching for way to quickly spot video violence

Above: Facebook Live advertisement as shown in the Montgomery BART station in San Francisco, Calif.

(Reuters) — Companies from Singapore to Finland are racing to improve artificial intelligence so software can automatically spot and block videos of grisly murders and mayhem before they go viral on social media.

None, so far, claim to have cracked the problem completely.

A Thai man who broadcast himself killing his 11-month-old daughter in a live video on Facebook this week was the latest in a string of violent crimes shown live on the social network. The incidents have prompted questions about how Facebook’s reporting system works and how violent content can be flagged faster.

A dozen or more companies are wrestling with the problem, those in the industry say. Google – which faces similar problems with its YouTube service – and Facebook are working on their own solutions.

Most are focusing on deep learning: a type of artificial intelligence that makes use of computerized neural networks. It is an approach that David Lissmyr, founder of Paris-based image and video analysis company Sightengine, says goes back to efforts in the 1950s to mimic the way neurons work and interact in the brain.

Teaching computers to learn with deep layers of artificial neurons has really only taken off in the past few years, said Matt Zeiler, founder and CEO of New York-based Clarifai, another video analysis company.

Only relatively recently has there been enough computing power and available data to teach these systems, enabling “exponential leaps in the accuracy and efficacy of machine learning”, Zeiler said.

Feeding images

The teaching system begins with images fed through the computer’s neural layers, which then…
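
In broad strokes, that pipeline is a forward pass: pixel values enter the first layer, and each subsequent layer applies a set of weights and a nonlinearity to the previous layer’s output. Here is a minimal numpy sketch of the idea; the layer sizes, labels, and untrained random weights are all assumptions for illustration, not details from the article:

```python
# An illustrative forward pass: an image's pixel values enter the first
# layer, and each layer applies weights plus a nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random(28 * 28)  # a flattened 28x28 grayscale image

def layer(x, n_out):
    W = rng.standard_normal((n_out, x.size)) * 0.1  # weight matrix
    b = np.zeros(n_out)                             # bias vector
    return np.maximum(0.0, W @ x + b)               # ReLU activation

h1 = layer(image, 128)   # first neural layer
h2 = layer(h1, 64)       # a deeper layer
scores = layer(h2, 2)    # e.g. "violent" vs. "benign"
print("class scores:", scores)
```

In a real moderation classifier the weights would be learned from labeled examples; training nudges them until the final scores reliably separate the content the system is meant to flag.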

Scientists Grow Working Human Brain Circuits

Researchers at the Stanford University School of Medicine have successfully grown the first-ever working 3D brain circuits in a petri dish. Writing in the journal Nature, they say the network of living cells will allow us to study how the human brain develops.

Scientists have been culturing brain cells in the lab for some time now. But previous projects have produced only flat sheets of cells and tissue, which can’t really come close to recreating the three-dimensional conditions inside our heads. The Stanford researchers were especially interested in the way brain cells in a developing fetus can join up together to create networks.

“We’ve never been able to recapitulate these human-brain developmental events in a dish before,” senior author Sergiu Pasca, MD, said in a statement.

Studying real-life pregnant women and their fetuses can also be ethically and technically tricky, which means there’s still a…

New stem cell method produces millions of human brain and muscle cells in days

Scientists at the University of Cambridge and the Wellcome Trust Sanger Institute have created a new technique that simplifies the production of human brain and muscle cells – allowing millions of functional cells to be generated in just a few days.

Human pluripotent stem cells are ‘master cells’ that have the ability to develop into almost any type of tissue, including brain cells. They hold huge potential for studying human development and the impact of diseases, including cancer, Alzheimer’s, multiple sclerosis, and heart disease.

In a human, it takes nine to twelve months for a single brain cell to develop fully. Using current methods, it can take between three and twenty weeks to create human brain cells, including grey matter (neurons) and white matter (oligodendrocytes), from an induced pluripotent stem cell – that is, a stem cell generated by reprogramming a skin cell to its ‘master’ stage. However, these methods are complex and time-consuming, often producing a mixed population of cells.

OPTi-OX

The new platform technology, OPTi-OX, optimises the way genes are switched on in human stem cells. The scientists applied OPTi-OX to produce millions of nearly identical cells in a matter of days. In addition to the neurons, oligodendrocytes, and muscle cells created in the study, OPTi-OX holds the possibility of generating any cell type at unprecedented purity in this short timeframe.

Producing neurons, oligodendrocytes and muscle cells

To produce the neurons, oligodendrocytes, and muscle cells, the team altered the DNA in…