Neural network

The nodes (the places where computation happens) are organized in layers. They combine input from the data with weights (or coefficients), as mentioned above, amplifying or dampening the input and thereby assigning significance to inputs for the task the algorithm is trying to learn. The input-weight products are then summed, and the result is passed through the node’s activation function to determine whether, and to what extent, the signal progresses further through the network. This pairing of adjustable weights with input features is how significance is assigned to those features with regard to how the network classifies and clusters input <ref name="”4”" />.
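
A single node of this kind can be sketched in a few lines of code. The example below only illustrates the weighted-sum-plus-activation idea described above; the sigmoid activation and the particular weight, bias and input values are illustrative assumptions, not taken from any specific network.

<syntaxhighlight lang="python">
import math

def neuron_output(inputs, weights, bias):
    """Weighted sum of the inputs passed through a sigmoid activation."""
    # Multiply each input by its weight and sum the products.
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    # The activation function decides how strongly the signal propagates onward.
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Example: three input features, with weights chosen purely for illustration.
print(neuron_output([0.5, 0.2, 0.9], [0.4, -0.6, 0.8], bias=0.1))
</syntaxhighlight>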


A node layer is a row of artificial neurons that turn on or off as the input passes through the net (figure 1). The output of each layer is the subsequent layer’s input. The process starts from an initial input layer that receives the data. The number of input and output nodes in an ANN depends on the problem to which the network is being applied. There are, however, no fixed rules as to how many nodes the [[hidden layer]] should have: if it has too few nodes, the network might have difficulty generalizing to problems it has never encountered before; if it has too many, the network may take a long time to learn anything of value <ref name="”4”" />.
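
To make the layer-by-layer flow concrete, the following sketch chains two such layers, so that the output of the hidden layer becomes the input of the output layer. The layer sizes, weight values and the sigmoid activation are illustrative assumptions only.

<syntaxhighlight lang="python">
import math

def layer_forward(inputs, weight_matrix, biases):
    """Compute one layer's outputs: each node takes a weighted sum and applies a sigmoid."""
    outputs = []
    for weights, bias in zip(weight_matrix, biases):
        weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
        outputs.append(1.0 / (1.0 + math.exp(-weighted_sum)))
    return outputs

# A toy network: 2 input features, one hidden layer with 3 nodes, 1 output node.
# All weights and biases are arbitrary illustrative values.
hidden_weights = [[0.2, -0.5], [0.7, 0.1], [-0.3, 0.8]]
hidden_biases = [0.0, 0.1, -0.1]
output_weights = [[0.6, -0.4, 0.9]]
output_biases = [0.05]

x = [1.0, 0.5]                                               # input layer: the raw data
hidden = layer_forward(x, hidden_weights, hidden_biases)     # hidden layer
y = layer_forward(hidden, output_weights, output_biases)     # output layer
print(y)
</syntaxhighlight>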


An efficient way to solve complex problems is to decompose the complex system into simpler elements in order to understand it. Conversely, simple elements can be assembled to produce a complex system. The network structure is one approach to achieving this. Even though there are a number of different types of networks, they can all be generalized as having the same components: a set of nodes and the connections between them. The nodes can be seen as computational units, receiving inputs and processing them to obtain an output. The complexity of this processing can vary: it can be as simple as summing the inputs, or more complex, in which case a node might itself contain another network, for example. The interactions of the nodes through the connections between them lead to a global behavior of the network that cannot be observed in the single elements that form it. This is called emergent behavior, in which the abilities of the network as a whole exceed those of its constituent elements <ref name="”6”" />.


==The backpropagation algorithm==
The [[backpropagation]] algorithm is used in layered feedforward ANNs and is one of the most popular training algorithms. The artificial neurons, organized in layers, send their signals “forward”, and the errors are propagated backwards. The algorithm uses supervised learning, in which examples of the inputs and of the outputs the network is intended to compute are provided. The error, which is the difference between the actual and the expected results, is then calculated. The goal of the backpropagation algorithm is to reduce this error until the ANN learns the training data. Training begins with random weights, and the objective is to adjust them until the error is minimal. In summary, the backpropagation algorithm can be broken down into four main steps: 1) feedforward computation; 2) backpropagation to the output layer; 3) backpropagation to the hidden layer; and 4) weight updates <ref name="”6”" /> <ref name="”8”" />.
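
These four steps can be illustrated with a small worked example. The sketch below trains a tiny 2-input, 2-hidden-node, 1-output network on a single example, using a sigmoid activation and a squared-error loss; the network size, starting weights, learning rate and data are illustrative assumptions, not taken from any cited reference.

<syntaxhighlight lang="python">
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(x, target, W1, b1, W2, b2, lr=0.5):
    """One backpropagation step for a 2-input, 2-hidden, 1-output network."""
    # 1) Feedforward computation.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    output = [sigmoid(sum(w * hi for w, hi in zip(row, hidden)) + b)
              for row, b in zip(W2, b2)]

    # 2) Backpropagation to the output layer (error times sigmoid derivative).
    delta_out = [(o - t) * o * (1 - o) for o, t in zip(output, target)]

    # 3) Backpropagation to the hidden layer.
    delta_hidden = [
        sum(d * W2[k][j] for k, d in enumerate(delta_out)) * hidden[j] * (1 - hidden[j])
        for j in range(len(hidden))
    ]

    # 4) Weight updates (gradient descent on the squared error).
    for k, d in enumerate(delta_out):
        W2[k] = [w - lr * d * h for w, h in zip(W2[k], hidden)]
        b2[k] -= lr * d
    for j, d in enumerate(delta_hidden):
        W1[j] = [w - lr * d * xi for w, xi in zip(W1[j], x)]
        b1[j] -= lr * d

    return output

# Illustrative starting weights (random in practice) and a single training example.
W1, b1 = [[0.15, 0.20], [0.25, 0.30]], [0.35, 0.35]
W2, b2 = [[0.40, 0.45]], [0.60]
for _ in range(1000):
    out = train_step([0.05, 0.10], [0.01], W1, b1, W2, b2)
print(out)  # the output gradually moves toward the target value 0.01
</syntaxhighlight>
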
==Explain Like I'm 5 (ELI5)==
Neural networks are computer programs designed to mimic the way the human brain works. Just as our brain contains billions of tiny [[neuron]]s connected by synapses that work together to produce thoughts and decisions, a neural network contains many small components called [[artificial neuron]]s that work together to solve problems.
 
When we learn something new, like how to ride a bike, our brain changes the connections between neurons in order to retain what was learned. In a similar fashion, an artificial neural network can alter its artificial neurons' connections in order to learn from data and make better decisions.
 
For instance, if we want the neural network to recognize photos of cats, we must show it a large collection of images and tell it which ones contain cats. After seeing enough examples, the neural network should be able to correctly recognize a picture of a cat, even if it has never seen that particular breed before.
 
In short, neural networks are like little computer brains that learn from examples and make decisions on their own!


==References==