Delta Learning Rule | Artificial Neural Networks
Also referred to as the Widrow-Hoff learning rule, the delta rule is a supervised learning rule and a special case of backpropagation, introduced by Bernard Widrow and Marcian Hoff.
The algorithm uses a continuous activation function and, for a given input vector, compares the output vector against the target output to compute the difference (the error). If the difference is zero, no learning takes place; if the difference is non-zero, the weights are adjusted to minimize it. Each weight update is proportional to the product of the error and the corresponding input: Δwᵢ = η(t − y)xᵢ, where η is the learning rate, t the target output, and y the actual output.
The delta learning rule updates the weights of the inputs to artificial neurons in a single-layer neural network, minimizing the error, the difference between the actual output and the target output, over all training patterns.
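The update described above can be sketched in a few lines of Python. This is a minimal illustration for a single linear neuron (the classic Widrow-Hoff/ADALINE setting); the function name, learning rate, and training data are illustrative assumptions, not part of the original text.

```python
import numpy as np

def delta_rule_train(X, t, lr=0.1, epochs=50):
    """Train weights w so that X @ w approximates the targets t
    using the delta (Widrow-Hoff) rule."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])  # small random initial weights
    for _ in range(epochs):
        for x_i, t_i in zip(X, t):
            y_i = x_i @ w                  # actual output for this pattern
            # delta update: learning rate * (target - output) * input
            w += lr * (t_i - y_i) * x_i
    return w

# Illustrative example: targets generated by the true weights [2, -1],
# so training should recover approximately those weights.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
t = X @ np.array([2.0, -1.0])
w = delta_rule_train(X, t)
print(np.round(w, 2))
```

Note that when the error `t_i - y_i` is zero for a pattern, the weight vector is left unchanged, exactly the "no learning" case described above.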