The brain continues to surprise us with its magnificent complexity. Groundbreaking research combining neuroscience with mathematics suggests that our brain creates neural structures with up to 11 dimensions when it processes information. Essentially, you will a multidimensional structure into and out of existence every time you think. The researchers “found a world that we had never imagined,” said Henry Markram, director of the Blue Brain Project, which made the discovery.
The goal of the Blue Brain Project, which is based in Switzerland, is to digitally create a “biologically detailed” simulation of the human brain. By creating digital brains with an “unprecedented” level of biological information, the scientists aim to advance our understanding of the incredibly intricate human brain, which has about 86 billion neurons.
To get a clearer picture of how such an immense network operates to form our thoughts and actions, the scientists employed supercomputers and a specialized branch of math. The team based its current research on the digital model of the neocortex that it finished in 2015, probing the way this digital neocortex responded using the mathematical system of algebraic topology. This allowed them to determine that our brain constantly creates very intricate multi-dimensional geometrical shapes and spaces that look like “sandcastles”.
Without algebraic topology, a branch of mathematics that describes systems with any number of dimensions, visualizing the multi-dimensional network would have been impossible.
Utilizing the novel mathematical approach, researchers were able to see…
Being able to do all this well, and in some cases better than humans, is a recent development. Creating photorealistic images is only a few months old. So how did all this come about?
Perceptrons: The 40s, 50s And 60s
We begin in the middle of the 20th century. One popular type of early neural network at the time attempted to mimic the neurons in biological brains using an artificial neuron called a perceptron. We’ve already covered perceptrons here in detail in a series of articles by Al Williams, but briefly, a simple one looks as shown in the diagram.
Given input values, weights, and a bias, it produces an output that’s either 0 or 1. Suitable values can be found for the weights and bias that make a NAND gate work. But for reasons detailed in Al’s article, for an XOR gate you need more layers of perceptrons.
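To make that concrete, here’s a minimal sketch of a perceptron in Python. The weight and bias values (−2, −2, and 3) are one standard choice that makes it behave as a NAND gate; they’re illustrative, not taken from Al’s article.

```python
def perceptron(inputs, weights, bias):
    """Fire (output 1) if the weighted sum plus bias is positive, else output 0."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Weights of -2, -2 with a bias of 3 implement a NAND gate.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron((a, b), (-2, -2), 3))
```

Running it prints the NAND truth table: every input pair outputs 1 except (1, 1), which outputs 0.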
In a famous 1969 paper called “Perceptrons”, Minsky and Papert pointed out the various conditions under which perceptrons couldn’t provide the desired solutions for certain problems. However, the conditions they pointed out applied only to the use of a single layer of perceptrons. It was known at the time, and even mentioned in the paper, that by adding more layers of perceptrons between the inputs and the output, called hidden layers, many of those problems, including XOR, could be solved.
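As a small illustration of how extra layers solve XOR (this particular construction, composing XOR from four NAND perceptrons, is a textbook example rather than one from the paper):

```python
def perceptron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

def nand(a, b):
    # A single perceptron acting as a NAND gate.
    return perceptron((a, b), (-2, -2), 3)

def xor(a, b):
    # Layers of NAND perceptrons compute what no single layer can.
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

The intermediate NAND outputs are exactly the “hidden layer” values the single-layer analysis in “Perceptrons” didn’t cover.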
Despite this way around the problem, their paper discouraged many researchers, and neural network research faded into the background for a decade.
Backpropagation And Sigmoid Neurons: The 80s
In 1986 neural networks were brought back to popularity by another famous paper called “Learning internal representations by error propagation” by David Rumelhart, Geoffrey Hinton and R.J. Williams. In that paper they published the results of many experiments that addressed the problems Minsky talked about regarding single-layer perceptron networks, spurring many researchers back into action.
Also, according to Hinton, still a key figure in the area of neural networks today, Rumelhart had reinvented an efficient algorithm for training neural networks. It involved propagating errors back from the outputs toward the inputs, adjusting all of those weights using something called the delta rule.
The set of calculations for setting the output to either 0 or 1 shown in the perceptron diagram above is called the neuron’s activation function. However, for Rumelhart’s algorithm the activation function had to be differentiable, and for that they chose the sigmoid function (see diagram).
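The sigmoid and its conveniently simple derivative can be sketched in a few lines of Python:

```python
import math

def sigmoid(z):
    """Smooth squashing function: maps any real z into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_prime(z):
    """The derivative works out neatly to sigmoid(z) * (1 - sigmoid(z))."""
    s = sigmoid(z)
    return s * (1.0 - s)

# sigmoid(0) is exactly 0.5, and the slope there is at its maximum of 0.25.
print(sigmoid(0), sigmoid_prime(0))
```

That closed-form derivative is exactly what backpropagation needs: the gradient at each neuron falls out of values already computed in the forward pass.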
And so, gone was the perceptron neuron with its hard 0-or-1 step output, replaced by the smooth, non-linear sigmoid neuron, still used in many networks today. However, the term Multilayer Perceptron (MLP) is often used today to refer not to a network of actual perceptrons as discussed above, but to the multilayer network we’re talking about in this section with its non-linear neurons, like the sigmoid. Groan, we know.
Also, to make programming easier, the bias was made a neuron of its own, typically with a value of one, and with its own weights. That way its weights, and hence indirectly its value, could be trained along with all the other weights.
And so by the late 80s, neural networks had taken on their now familiar shape and an efficient algorithm existed for training them.
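Putting those pieces together, here is a toy sketch of that training loop (my own minimal example, not code from the paper): a single sigmoid neuron, with the bias folded in as an always-on extra input as described above, learning the NAND gate via the delta rule.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# NAND training data; the constant 1.0 input plays the role of the bias neuron.
samples = [((0, 0, 1.0), 1), ((0, 1, 1.0), 1),
           ((1, 0, 1.0), 1), ((1, 1, 1.0), 0)]

weights = [0.0, 0.0, 0.0]  # two input weights plus the bias weight
lr = 2.0                   # learning rate

for _ in range(10000):
    for x, target in samples:
        y = sigmoid(sum(w * xi for w, xi in zip(weights, x)))
        # Delta rule: nudge each weight along the error gradient,
        # using the sigmoid's derivative y * (1 - y).
        delta = (target - y) * y * (1.0 - y)
        weights = [w + lr * delta * xi for w, xi in zip(weights, x)]

for x, target in samples:
    y = sigmoid(sum(w * xi for w, xi in zip(weights, x)))
    print(x[:2], "->", round(y))
```

NAND is linearly separable, so this single neuron converges; after training, the rounded outputs match the NAND truth table. The full backpropagation algorithm extends this same gradient step backward through hidden layers.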
Convoluting And Pooling
In 1979 a neural network called Neocognitron introduced the concept of convolutional layers, and in 1989, the backpropagation algorithm was adapted to train those convolutional layers.
What does a convolutional layer look like? In the networks we talked about above, each input neuron has a connection to every hidden neuron. Layers like that are called fully connected layers. But with a convolutional layer, each neuron in the convolutional layer connects to only a subset of the input neurons. And those subsets usually overlap both horizontally and vertically. In the diagram, each neuron in the convolutional layer is connected to a 3×3 matrix of input neurons, color-coded for clarity, and those matrices overlap by one.
This 2D arrangement helps a lot when trying to learn features in images, though their use isn’t limited to images. Features in images occupy pixels in a 2D space, like the various parts of the letter ‘A’ in the diagram. You can see that one of the convolutional neurons is connected to a 3×3 subset of input neurons that contain a white vertical feature down the middle, one leg of the ‘A’, as well as a shorter horizontal feature across the top on the right. When training on numerous images, that neuron may become trained to fire strongest when shown features like that.
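That connectivity can be sketched in plain Python. The kernel below is a made-up vertical-stripe detector, and the stride of 2 mirrors the diagram’s 3×3 windows overlapping by one row or column:

```python
def convolve(image, kernel, stride=2):
    """Slide a k x k kernel over a 2D image; each output value is one
    convolutional neuron's weighted sum over its small receptive field."""
    k = len(kernel)
    rows, cols = len(image), len(image[0])
    out = []
    for r in range(0, rows - k + 1, stride):
        row = []
        for c in range(0, cols - k + 1, stride):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(k) for j in range(k)))
        out.append(row)
    return out

# A 7x7 image with a bright vertical stripe down column 3, and a 3x3
# kernel whose weights respond strongly to exactly that feature.
image = [[1 if c == 3 else 0 for c in range(7)] for r in range(7)]
kernel = [[-1, 2, -1]] * 3
result = convolve(image, kernel)
print(result)  # only the windows covering the stripe activate
```

The output is a 3×3 grid of activations in which only the center column fires, which is the sense in which a trained convolutional neuron “fires strongest” on the feature its weights encode.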
Surprising study results show that with a little stimulation, a previously unremarkable part of the brain can trigger a feeding frenzy in mice. Researchers are linking the findings with a strange symptom arising from a similar circumstance in Parkinson’s disease: when electrodes are implanted into patients’ brains to ease their symptoms, some develop an incredible hunger.
Nerve cells in a poorly understood part of the brain have the power to prompt voracious eating in already well-fed mice.
Two to three seconds after blue light activated cells in the zona incerta, a patch of neurons just underneath the thalamus and above the hypothalamus, mice dropped everything and began shoveling food into their mouths. This dramatic response, described May 26 in Science, suggests a role in eating behavior for a part of the brain that hasn’t received much scrutiny.
Scientists have previously proposed a range of jobs for the zona incerta, linking it to attention, movement and even posture. The new study suggests another job — controlling eating behavior, perhaps even in humans. “Being able to include the zona incerta in models of feeding is going to help us understand it better,” says study coauthor Anthony van den Pol, a neuroscientist at Yale University.
The new results may also help explain why a small number of Parkinson’s disease patients develop binge-eating behavior when electrodes are implanted in their brains to ease their symptoms. Those electrodes may be stimulating zona incerta nerve cells, van den Pol suspects.
During intermittent stimulation of some zona incerta…
Most of us don’t remember infancy or toddlerhood. My sister swears she can remember being two years old. I can’t remember anything before three-and-a-half, when they brought her home from the hospital. I remember being so excited, not because of my new baby sister, but because I was getting Spiderman comic books for being so good during the ordeal. But why do we all have this hole in our memory? Why can’t we remember being a baby?
Sigmund Freud was the first to address this phenomenon, which he called infantile amnesia or childhood amnesia. He thought it had to do with being bombarded by an abundance of psychosexual stimuli which, were you to process them, might make your head explode. This theory is no longer considered valid. Since then, neuroscientists, psychologists, and linguists have each approached the question in different ways.
Certain breakthroughs in the study of memory are now offering insights. Neuroscientists today believe it’s because the areas of the brain where long-term memory is stored aren’t fully developed yet. Two areas are responsible for memory formation—the hippocampus and the medial temporal lobe. Besides long-term and short-term memory, there are two other aspects: semantic and episodic memory. Semantic memory is remembering necessary skills or where objects in the environment can be found, both of which help us navigate the world.
Model of memory formation for spoken words. By Matthew H. Davis and M. Gareth Gaskell [CC BY 3.0], Wikimedia Commons.
The parts of the brain necessary for semantic memory are fully matured by age one. Yet the hippocampus isn’t able to integrate the disparate networks it manages at that age. This isn’t achievable until somewhere between the ages of two and four.
Episodic memory strings individual plot points together to form the kind of linear structure we’re used to. Curiously, the prefrontal cortex, the area responsible for episodic memory, isn’t fully developed until we’re in our twenties. Memories from our twenties and beyond may have more texture and depth and include important details, such as the date and time at which an incident occurred. Interestingly, in the 1980s, researchers discovered that people remember what happened…
Artificial Intelligence is playing an ever-increasing role in the lives of civilized nations, though most citizens probably don’t realize it. It’s now commonplace to speak with a computer when calling a business. Facebook is becoming scarily accurate at recognizing faces in uploaded photos. Physical interaction with smartphones is becoming a thing of the past… with Apple’s Siri and Google Speech, it’s slowly but surely becoming easier to simply talk to your phone and tell it what to do than to type or touch an icon. Try this if you haven’t before — if you have an Android phone, say “OK Google”, followed by “Lumos”. It’s magic!
Advertisements for products we’re interested in pop up on our social media accounts as if something is reading our minds. Truth is, something is reading our minds… though it’s hard to pin down exactly what that something is. An advertisement might pop up for something that we want, even though we never realized we wanted it until we saw it. This is not coincidental, but stems from an AI algorithm.
At the heart of many of these AI applications lies a process known as Deep Learning. There has been a lot of talk about Deep Learning lately, not only here on Hackaday, but all over the interwebs. And like most things related to AI, it can be a bit complicated and difficult to understand without a strong background in computer science.
If you’re familiar with my quantum theory articles, you’ll know that I like to take complicated subjects, strip away the complication the best I can and explain it in a way that anyone can understand. It is the goal of this article to apply a similar approach to this idea of Deep Learning. If neural networks make you cross-eyed and machine learning gives you nightmares, read on. You’ll see that “Deep Learning” sounds like a daunting subject, but is really just a $20 term used to describe something whose underpinnings are relatively simple.
When we program a machine to perform a task, we write the instructions and the machine performs them. For example, LED on… LED off… there is no need for the machine to know the expected outcome after it has completed the instructions. There is no reason for the machine to know if the LED is on or off. It just does what you told it to do. With machine learning, this process is flipped. We tell the machine the outcome we want, and the machine ‘learns’ the instructions to get there. There are several ways to do this, but let us focus on an easy example:
If I were to ask you to make a little robot that can guide itself to a target, a simple way to do this would be to put the robot and target on an XY Cartesian plane, and then program the robot to go so many units on the X axis, and then so many units on the Y axis. This straightforward method has the robot simply carrying out instructions, without actually knowing where the target is. It works only when you know the coordinates for the starting point and target. If either changes, this approach would not work.
Machine Learning allows us to deal with changing coordinates. We tell our robot to find the target, and let it figure out, or learn, its own instructions to get there. One way to do this is have the robot find the distance to the target, and then move in a random direction. Recalculate the distance, move back to where it started and record the distance measurement. Repeating this…
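That measure-probe-and-retreat loop might be sketched like this (a toy illustration with made-up coordinates and step size; real implementations vary):

```python
import math
import random

random.seed(42)  # fixed seed so the run is repeatable

def distance(pos, target):
    """Straight-line distance from the robot to the target."""
    return math.hypot(target[0] - pos[0], target[1] - pos[1])

def find_target(start, target, step=1.0, tries=500):
    pos = list(start)
    for _ in range(tries):
        best = distance(pos, target)
        if best < step:
            break  # close enough to the target
        # Probe a random direction...
        angle = random.uniform(0, 2 * math.pi)
        candidate = [pos[0] + step * math.cos(angle),
                     pos[1] + step * math.sin(angle)]
        # ...keep the move if it got us closer; otherwise "move back"
        # by simply discarding the candidate position.
        if distance(candidate, target) < best:
            pos = candidate
    return pos

end = find_target((0, 0), (10, 7))
print(end)  # the robot should end up near (10, 7)
```

The robot never receives explicit directions to the target; it discovers a path by keeping whichever random moves reduce the measured distance, which is the essence of the “learn your own instructions” idea.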
(Reuters) — Companies from Singapore to Finland are racing to improve artificial intelligence so software can automatically spot and block videos of grisly murders and mayhem before they go viral on social media.
None, so far, claim to have cracked the problem completely.
A Thai man who broadcast himself killing his 11-month-old daughter in a live video on Facebook this week was the latest in a string of violent crimes shown live on the social media platform. The incidents have prompted questions about how Facebook’s reporting system works and how violent content can be flagged faster.
A dozen or more companies are wrestling with the problem, those in the industry say. Google – which faces similar problems with its YouTube service – and Facebook are working on their own solutions.
Most are focusing on deep learning: a type of artificial intelligence that makes use of computerized neural networks. It is an approach that David Lissmyr, founder of Paris-based image and video analysis company Sightengine, says goes back to efforts in the 1950s to mimic the way neurons work and interact in the brain.
Teaching computers to learn with deep layers of artificial neurons has really only taken off in the past few years, said Matt Zeiler, founder and CEO of New York-based Clarifai, another video analysis company.
It’s only been relatively recently that there has been enough computing power and data available for teaching these systems, enabling “exponential leaps in the accuracy and efficacy of machine learning”, Zeiler said.
The teaching system begins with images fed through the computer’s neural layers, which then…
Researchers at the Stanford University School of Medicine have successfully grown the first-ever working 3D brain circuits in a petri dish. Writing in the journal Nature, they say the network of living cells will allow us to study how the human brain develops.
Scientists have been culturing brain cells in the lab for some time now. But previous projects have produced only flat sheets of cells and tissue, which can’t really come close to recreating the three-dimensional conditions inside our heads. The Stanford researchers were especially interested in the way brain cells in a developing fetus can join up together to create networks.
“We’ve never been able to recapitulate these human-brain developmental events in a dish before,” senior author Sergiu Pasca, MD, said in a statement.
Studying real-life pregnant women and their fetuses can also be ethically and technically tricky, which means there’s still a…
Scientists at the University of Cambridge and the Wellcome Trust Sanger Institute have created a new technique that simplifies the production of human brain and muscle cells – allowing millions of functional cells to be generated in just a few days.
Human pluripotent stem cells are ‘master cells’ that have the ability to develop into almost any type of tissue, including brain cells. They hold huge potential for studying human development and the impact of diseases, including cancer, Alzheimer’s, multiple sclerosis, and heart disease.
In a human, it takes nine to twelve months for a single brain cell to develop fully. It can take between three and 20 weeks using current methods to create human brain cells, including grey matter (neurons) and white matter (oligodendrocytes) from an induced pluripotent stem cell – that is, a stem cell generated by reprogramming a skin cell to its ‘master’ stage. However, these methods are complex and time-consuming, often producing a mixed population of cells.
The new platform technology, OPTi-OX, optimises the way genes are switched on in human stem cells. The scientists applied OPTi-OX to produce millions of nearly identical cells in a matter of days. Beyond the neurons, oligodendrocytes, and muscle cells created in the study, OPTi-OX holds the possibility of generating any cell type at unprecedented purity in this short timeframe.
Producing neurons, oligodendrocytes and muscle cells
To produce the neurons, oligodendrocytes, and muscle cells, the team altered the DNA in…