An
Artificial Neural Network (ANN) is an information-processing paradigm that is
inspired by the way biological nervous systems, such as the brain, process information.
The key element of this paradigm is the novel structure of the information processing
system. It is composed of a large number of highly interconnected processing elements
(neurons) working in unison to solve specific problems. ANNs, like people, learn
by example. An ANN is configured for a specific application, such as pattern recognition
or data classification, through a learning process. Learning in biological systems
involves adjustments to the synaptic connections that exist between the neurons.
This is true of ANNs as well.

Neural network simulations appear to be a recent development. However, the field was established before the advent of computers and has survived several periods of waning interest. Many important advances have been boosted by the use of inexpensive computer emulations.
The first artificial neuron was produced in 1943 by the neurophysiologist Warren
McCulloch and the logician Walter Pitts. There
were some initial simulations using formal logic: McCulloch and Pitts (1943) developed models of neural networks based on their understanding of neurology. These models made several assumptions about how neurons worked. Their networks were built from simple neurons, treated as binary devices with a fixed threshold.
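As an illustration, the behaviour of such a binary threshold unit can be sketched in a few lines of modern code (a minimal sketch, not McCulloch and Pitts' original formalism; the weights and threshold below are hypothetical values chosen so that the unit computes logical AND):

```python
def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Binary threshold device: the unit fires (returns 1) only when
    the weighted sum of its inputs reaches the fixed threshold."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# With unit weights and a threshold of 2, a two-input unit computes AND.
print(mcculloch_pitts_neuron([1, 1], [1, 1], threshold=2))  # -> 1
print(mcculloch_pitts_neuron([1, 0], [1, 1], threshold=2))  # -> 0
```

Lowering the threshold to 1 turns the same unit into a logical OR, which is why the fixed threshold is the defining parameter of these devices.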
Not only neuroscientists, but also psychologists and engineers contributed to the
progress of neural network simulations. Rosenblatt (1958) stirred considerable
interest and activity in the field when he designed and developed the Perceptron.
The Perceptron had three layers, with the middle layer known as the association
layer. This system could learn to connect or associate a given input to a random
output unit.

Another system was
the ADALINE (Adaptive Linear Element), which was developed in 1960 by Widrow and Hoff of Stanford University. The ADALINE was an analogue electronic device made from simple components.
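The text does not describe how the ADALINE adapted, but it is classically associated with the Widrow-Hoff least-mean-squares (LMS, or "delta") rule, which nudges each weight in proportion to the error between the target and the raw linear output. The following is a minimal software sketch of that rule; the learning rate, epoch count, and the AND-style training data are illustrative assumptions, not details from the text:

```python
def train_adaline(samples, lr=0.1, epochs=50):
    """Widrow-Hoff (LMS) rule: adjust each weight in proportion to the
    error between the target and the *linear* output w.x + b."""
    n_inputs = len(samples[0][0])
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            output = sum(wi * xi for wi, xi in zip(w, x)) + b  # linear, no threshold
            error = target - output
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Illustrative data: logical AND with bipolar targets (-1 / +1).
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = train_adaline(data)
for x, target in data:
    linear = sum(wi * xi for wi, xi in zip(w, x)) + b
    print(x, target, 1 if linear >= 0 else -1)  # threshold only when classifying
```

Note the contrast with the Perceptron's update, which trains on the thresholded output: the LMS rule trains on the linear output itself, which is what makes the element "adaptive linear".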