Hebbian Learning Rule



Foundation :

Also referred to as the Hebb learning rule, it is an unsupervised learning rule introduced by Donald Hebb in 1949.


Operation :

At the start, the values of all weights connecting neurons are set to zero. The rule follows the principle that if two neighbouring neurons are activated simultaneously, at the same time, then the weight connecting them should increase. Conversely, if the two neurons are activated separately, at different times, the weight connecting them should decrease.
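The update principle above can be sketched in a few lines of Python. This is a minimal illustration, not code from any particular library; the names (`hebbian_update`, `learning_rate`) are illustrative assumptions:

```python
# Minimal sketch of the Hebbian weight update: delta_w = learning_rate * x * y.
# A weight grows when the pre- and post-synaptic activations share a sign
# (simultaneous activation) and shrinks when their signs differ.

def hebbian_update(weights, x, y, learning_rate=0.1):
    """Return the weight vector updated by the Hebb rule for one sample.

    weights -- current connection weights (one per input)
    x       -- input (pre-synaptic) activations
    y       -- output (post-synaptic) activation
    """
    return [w + learning_rate * xi * y for w, xi in zip(weights, x)]

# Weights start at zero, as described above.
weights = [0.0, 0.0]
weights = hebbian_update(weights, x=[1.0, -1.0], y=1.0)
print(weights)  # [0.1, -0.1]
```

Note that the rule is purely local: each weight changes based only on the two activations it connects.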


In this rule, the desired responses are not used; only the actual responses drive the learning process, making it an unsupervised learning rule. Neurons whose activations are both positive or both negative at the same time develop strong positive weights, while neurons with opposite activations, one positive and one negative, develop strong negative weights.


Usage :

The Hebbian learning rule is used to determine how to update the weights between nodes in an artificial neural network.
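As a concrete sketch of such a weight update in a tiny network, the Hebb rule can train a single neuron to compute the logical AND of two bipolar (+1/−1) inputs. This is a standard textbook exercise, not taken from the article; all names are illustrative:

```python
# Training a single neuron on logical AND with bipolar inputs/targets,
# using the plain Hebb rule: w_i += x_i * y, bias += y.

samples = [
    ([1, 1], 1),     # only (+1, +1) maps to +1
    ([1, -1], -1),
    ([-1, 1], -1),
    ([-1, -1], -1),
]

weights = [0, 0]  # weights start at zero
bias = 0

for x, y in samples:
    weights = [w + xi * y for w, xi in zip(weights, x)]
    bias += y

print(weights, bias)  # [2, 2] -2

# The trained neuron classifies every AND sample correctly:
for x, y in samples:
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    assert (1 if activation >= 0 else -1) == y
```

One pass over the four samples is enough here because each update pushes the weights in a direction consistent with the target correlations.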




Artificial Neural Networks | thetqweb