Delta Learning Rule | Artificial Neural Networks
Also referred to as the Widrow-Hoff learning rule, it is a supervised learning rule and a special case of backpropagation, introduced by Bernard Widrow and Marcian "Ted" Hoff.
The algorithm uses a continuous activation function and, given an input vector, compares the actual output with the target output to compute the difference. If the difference is zero, no learning takes place; if the difference is non-zero, the weights are adjusted to minimize it.
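One update step of this comparison-and-adjustment can be sketched as follows; the values, learning rate, and the use of a linear activation are illustrative assumptions, not part of the original article.

```python
import numpy as np

eta = 0.1                       # learning rate (assumed value)
x = np.array([1.0, 0.5, -1.0])  # input vector
w = np.array([0.2, -0.4, 0.1])  # current input weights
t = 1.0                         # target output

y = np.dot(w, x)    # actual output (linear activation assumed)
diff = t - y        # if this is zero, no learning occurs
w = w + eta * diff * x  # adjust weights to shrink the difference
```

After the update, the neuron's output for the same input lies closer to the target, which is exactly the "adjust the weights to minimize the difference" behavior described above.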
The delta learning rule is used to update the weights of the inputs to artificial neurons in a single-layer neural network, and it minimizes, over all training patterns, the error between the actual output and the target output.
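A minimal single-layer training loop that applies the rule over all training patterns might look like this; the toy dataset (a linearly learnable target), the learning rate, and the epoch count are all assumptions for illustration.

```python
import numpy as np

# Toy patterns: target is x1 + x2, which a single linear neuron can learn.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
T = np.array([0.0, 1.0, 1.0, 2.0])

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)  # small random initial weights
eta = 0.05                         # learning rate (assumed value)

for epoch in range(200):
    for x, t in zip(X, T):
        y = np.dot(w, x)           # actual output (linear activation)
        w += eta * (t - y) * x     # delta update for this pattern

error = np.sum((T - X @ w) ** 2)   # summed squared error over all patterns
```

Repeated per-pattern updates drive the summed squared error over the whole training set toward zero, which is the "minimize the error over all training patterns" behavior the rule is used for.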