Intelligent Systems, Artificial Intelligence, Smart Recommenders, Machine Learning: the list of fancy words popping up here and there across websites is endless, and they always seem to carry an air of mystery. Over the past few years, we have witnessed great advancements in computer systems. Computers can now take over tasks that we humans never thought a computer would be able to do, including tasks that no human brain can perform efficiently and quickly, such as looking through thousands of text files and drawing connections between them, or reading millions of medical papers and connecting genes to potential diseases. The latter is the job of IBM Watson’s Discovery Advisor, a tool for researchers.
It seems that many researchers around the world strive to build computers that can substitute for humans completely. The question that arises is: are we going to see computer brains that completely mimic human brains? In today's post, we cover some basics of the research in this direction and try to figure out an answer to the million-cell question.
The idea of teaching a computer to learn concepts from data without being explicitly programmed is not new. Did I say teach a computer? Yes! In this domain, we treat the computer as a kid who learns from other people by going through different experiences. Since the late 1950s, different mathematical models have been developed to add a sense of smartness to computers: regression models, artificial intelligence models, and neural networks. In this post, I chose to talk about artificial neural networks because they are the model most similar to how the human brain works.
Babies see adults moving around. In their third to fourth month, the knowledge accumulated in their brains starts to stimulate them to make mini push-ups. A few months later, they learn to sit. After a year, kids start to walk. Have you ever heard of a kids-walking teacher? No! Because they learn walking by observation. In the computer domain, our goal is to give the computer (our kid) some data (experiences) and somehow ask the computer to build up knowledge based on that data. Our hope is that, given a completely new piece of data (experience) that was not in our initial set, the computer (our kid) can take appropriate actions.
Human brains consist of neural cells. Each of these cells looks like the one shown below.
In the body of the neuron, there is the nucleus, which receives pulses of electricity from input wires (dendrites). Based on these signals, the neuron does some computation and sends a message (electrical impulses) to other neurons through output wires (axons). The human brain has billions of these neurons connected together. Different neurons in the brain are responsible for different senses, such as sight, smell, and touch. It has been scientifically observed that a neuron in the brain can learn to do other jobs. For example, experiments on animals show that if we disconnect the wires connecting an auditory neuron to the ears and connect it to the eyes instead, the neuron will learn to see. In a similar experiment, if we disconnect a somatosensory neuron's connection to the hand and connect it to the eyes, it will eventually learn to see as well. (Photo credit: the Machine Learning course on Coursera by Andrew Ng.)
Now, let’s switch context and talk about mimicking this neural network in computers. We create a similar model that has three major components:
- A cell body that contains the nucleus. The neuron is responsible for doing the computations.
- Input wires that carry the input signals to the neuron.
- Output wire(s) that transfer the output signal to other neurons.
The above figure is a simple artificial (computer) neural network that has only one neuron (the orange circle). x1, x2, and x3 are the inputs to the neuron, and they carry numerical values. The function h is called the hypothesis function. It computes its value by multiplying the input vector x by a weight vector w; the result is then passed through an activation function that computes the final scalar output. A more complex neural network is shown below.
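Before moving on to bigger networks, the single-neuron computation just described can be sketched in a few lines of Python. This is a minimal illustration, not a real library: the weights and inputs are arbitrary values, and the sigmoid activation is one common choice (introduced later in this post).

```python
import math

def sigmoid(z):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def neuron(x, w):
    # hypothesis h(x): multiply inputs by weights, sum, then activate
    z = sum(wi * xi for wi, xi in zip(w, x))
    return sigmoid(z)

x = [1.0, 2.0, 3.0]      # inputs x1, x2, x3
w = [0.5, -0.25, 0.1]    # example weights, chosen arbitrarily
print(neuron(x, w))      # a single scalar output between 0 and 1
```

Whatever the inputs are, the sigmoid keeps the final output strictly between 0 and 1, which is convenient when the output should represent something like a probability.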
Each vertical line is called a layer. In the above figure, layer 1 contains the neurons that represent inputs. Layer 2 is also called a hidden layer. It does the core computation. Layer 3 is called the output layer and does a computation on the data received from layer 2 and then outputs one final result. In real-world scenarios, computer neural networks have more neurons in each hidden layer.
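The layered computation can be sketched by chaining the single-neuron step layer by layer: the hidden layer's outputs become the output layer's inputs. This is a toy sketch with arbitrary weight values, just to show the data flow.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights):
    # each row of `weights` holds the incoming weights of one neuron
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

def feedforward(x, w_hidden, w_output):
    hidden = layer(x, w_hidden)      # layer 2 (hidden)
    return layer(hidden, w_output)   # layer 3 (output)

# toy weights: 3 inputs -> 3 hidden neurons -> 1 output neuron
w_hidden = [[0.1, 0.2, 0.3],
            [0.4, 0.5, 0.6],
            [0.7, 0.8, 0.9]]
w_output = [[0.5, -0.5, 0.25]]
out = feedforward([1.0, 0.5, -1.0], w_hidden, w_output)
print(out)  # one final result from the output layer
```

Adding more hidden neurons only means adding more rows to `w_hidden`; adding more layers means chaining more `layer` calls.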
Now, two pieces of information are still missing from the one-neuron figure:
- What is the weight vector to be multiplied by the input vector?
- After multiplying the two vectors, what is the activation function that will output the final result?
The answers to the above two questions are what define your artificial neural network. If you could solve a specific mathematical problem by assigning values to the weight vector and choosing an activation function yourself, most of the current research problems would already be solved. However, this is the most difficult part of designing and implementing a neural network. Therefore, we ask the computer to assign randomly chosen values to the weights at the beginning. We also choose the sigmoid function as the activation function (see why?). Then what?
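That random starting point can be sketched as follows. The small scale of 0.01 is an arbitrary choice for this illustration; the important part is simply that the weights start out random rather than hand-picked.

```python
import random

def init_weights(n_inputs, n_neurons, scale=0.01):
    # small random starting values; each row is one neuron's
    # incoming weights (`scale` is an arbitrary choice here)
    return [[random.uniform(-scale, scale) for _ in range(n_inputs)]
            for _ in range(n_neurons)]

w = init_weights(3, 4)  # e.g. 4 neurons, each with 3 input weights
print(w)
```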
Starting from the random values, we feed the network with input data and receive the result from the output layer (this process is called feedforwarding). We compare the result with a pre-known ground-truth value by computing a loss function. Then, we ask the computer to try to minimize this loss by changing the weight-vector values (see the gradient descent algorithm). We repeat the process (feedforward, then update) until the loss function reaches a minimum. That’s how we teach computers to learn!
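Putting the pieces together, here is a minimal sketch of this loop for a single sigmoid neuron with a squared loss. It is a deliberate simplification: real networks backpropagate the gradient through every layer, and the learning rate and epoch count below are arbitrary choices for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, n_inputs, lr=0.5, epochs=2000):
    # gradient descent on one sigmoid neuron with squared loss
    w = [0.0] * n_inputs
    for _ in range(epochs):
        for x, target in samples:
            out = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))  # feedforward
            err = out - target                  # derivative of (out-target)^2/2
            grad = err * out * (1.0 - out)      # chain rule through the sigmoid
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]  # update weights
    return w

# learn logical OR; the first input is fixed at 1.0 to act as a bias
data = [([1.0, 0.0, 0.0], 0.0), ([1.0, 0.0, 1.0], 1.0),
        ([1.0, 1.0, 0.0], 1.0), ([1.0, 1.0, 1.0], 1.0)]
w = train(data, 3)

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
```

After training, the neuron's output is pushed above 0.5 for the positive examples and below 0.5 for the negative one: the loss has been driven down by repeatedly nudging the weights against the gradient.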
What can be done by an artificial neural network?
When your phone interprets and understands your voice command, it’s likely that a neural network is helping to understand your speech. When you write on the touch screen and it converts your handwriting into typed letters, there is a neural network behind it. When you cash a check, the machines that automatically read the digits also use neural networks. In fact, with the advancements in computing resources, training a neural network to do a human’s job becomes more and more efficient. One of the hottest fields nowadays is Deep Learning.
Deep neural networks are now being trained to have cognitive abilities similar to those of humans. The question that remains open: are we going to witness an era when one can interact with a computer exactly as one interacts with other humans?
In my opinion, the answer will be yes, if we can solve the challenges mentioned by Chris in his 2007 blog post “10 Important Differences Between Brains and Computers”.
Want to join the race to develop a humanized computer? Start off with the Machine Learning course on Coursera by Andrew Ng.