Hebbian Learning Rule | Artificial Neural Networks
Also referred to as the Hebb learning rule, it is an unsupervised learning rule introduced by Donald Hebb.
At the start, the values of all weights connecting neurons are set to zero. The rule follows the principle that, if two neighbouring neurons are activated at the same time, then the weight connecting these two neurons should increase. Conversely, if the two neurons are activated at different times, the weight connecting them should decrease.
In this rule, the desired responses are not used in the learning process, only the actual responses, which is what makes it an unsupervised learning rule. Pairs of neurons that are both positive or both negative at the same time develop strong positive weights, while pairs with opposite signs, one positive and one negative, develop strong negative weights.
The Hebbian learning rule is used to determine how to update the weights between nodes in an artificial neural network.
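The update described above can be sketched in a few lines of NumPy. This is a minimal illustration, not taken from any particular library: the function name `hebbian_update` and the learning rate `eta` are assumptions, and the rule applied is the classic outer-product form, Δw = η · y · x.

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.1):
    """One Hebbian step: delta_w = eta * y * x (outer product).

    w : weight matrix, shape (n_outputs, n_inputs)
    x : presynaptic activations, shape (n_inputs,)
    y : postsynaptic activations, shape (n_outputs,)
    """
    # Same-sign activation pairs strengthen the connection;
    # opposite-sign pairs weaken it.
    return w + eta * np.outer(y, x)

# Weights start at zero, as the rule prescribes.
w = np.zeros((1, 3))
x = np.array([1.0, -1.0, 1.0])  # presynaptic activations
y = np.array([1.0])             # postsynaptic activation
w = hebbian_update(w, x, y)
# Weights to the same-sign inputs grew; the weight to the
# opposite-sign input decreased.
```

Repeating the step with the same pattern keeps reinforcing the same weights, which is why plain Hebbian learning is often paired with a normalisation or decay term in practice.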