Perceptron Learning Rule | Artificial Neural Networks
The perceptron learning rule is a supervised learning algorithm introduced by Frank Rosenblatt.
Under this rule, the artificial neural network starts learning with a random value assigned to each weight. Training uses a set of records for which the target output is known: the network computes the actual output for each record, compares it with the target output, and evaluates the error function, which measures the difference between the target and the actual output. If the two differ, the connection weights are adjusted to reduce that error, and the process repeats until the outputs match the targets.
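The error-correction loop described above can be sketched in a few lines of NumPy. This is a minimal illustrative implementation, not taken from the original article: the learning rate, the step activation, and the AND-gate training data are all assumptions chosen to keep the example small.

```python
import numpy as np

def train_perceptron(X, targets, lr=0.1, epochs=50, seed=0):
    """Train a single perceptron with the error-correction rule sketched above."""
    rng = np.random.default_rng(seed)
    # Start with small random weights, plus one extra weight for the bias input.
    w = rng.uniform(-0.5, 0.5, X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append a constant bias input of 1
    for _ in range(epochs):
        for x, t in zip(Xb, targets):
            y = 1 if np.dot(w, x) >= 0 else 0      # step activation: actual output
            # Weights change only when the actual output differs from the target.
            w += lr * (t - y) * x
    return w

# Hypothetical example: learn the logical AND function (linearly separable).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([0, 0, 0, 1])
w = train_perceptron(X, t)
preds = [1 if np.dot(w, np.append(x, 1)) >= 0 else 0 for x in X]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the loop eventually stops making weight changes, and `preds` reproduces the target outputs.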
The perceptron learning rule is used for error correction in supervised learning, and perceptron-based models have also been applied in natural language processing to tasks such as part-of-speech tagging (also known as word-category disambiguation or grammatical tagging) and syntactic parsing.