Deep Learning and Recurrent Neural Networks (RNN)

[[ What is Deep Learning? ]]

Deep Learning is a branch of Machine Learning (ML) that uses artificial neural networks with multiple layers – usually three or more – to ‘learn’ from large data sets, essentially trying to mimic how the human brain functions.

There are several algorithms used in Deep Learning, including Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Below, RNNs are discussed in detail.


[[ Recurrent Neural Networks (RNN) ]]

Recurrent Neural Networks (RNNs) are a type of Artificial Neural Network (ANN) that trains and learns on sequential data, such as time-series data.
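As an illustrative sketch of the idea (the layer sizes and data here are arbitrary assumptions, not from the text), a single recurrent step can be written as a function that mixes the current input with the previous hidden state, which is how the network carries information across a sequence:

```python
import numpy as np

# Minimal Elman-style RNN cell; sizes are arbitrary for the sketch.
rng = np.random.default_rng(0)
input_size, hidden_size = 3, 4

W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the recurrence)
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One timestep: the new hidden state combines the current input
    with the previous hidden state, so the network 'remembers'."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Run a short sequence through the cell, one timestep at a time.
sequence = rng.normal(size=(5, input_size))  # 5 timesteps
h = np.zeros(hidden_size)
for x_t in sequence:
    h = rnn_step(x_t, h)

print(h.shape)  # (4,)
```

Note that the same `W_xh` and `W_hh` are applied at every timestep – this is the parameter sharing discussed below.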


[ How Recurrent Neural Networks work ]

RNNs use their multiple layers to train and fit their models such that parameters are shared across every step of the network – this sharing is what makes the network recurrent. These parameters are maintained within each layer of the network as “weights”, which are adjusted during training through the process of “backpropagation”.

RNNs train themselves using the Backpropagation Through Time (BPTT) algorithm, which calculates the error at the output layer with respect to the input at the input layer. These calculations are necessary for adjusting the weights/parameters of the model appropriately. Because the weights are shared across timesteps, the error contributions are summed over every step before the weights are updated.
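The steps above can be sketched in code. This is a minimal illustration, not a production implementation: the network sizes, the random input data, the scalar target, and the squared-error loss are all assumptions made for the sketch. Note how the gradients for the shared weights are summed over time, exactly as described above:

```python
import numpy as np

# Illustrative BPTT for a tiny RNN with one scalar output at the final step.
rng = np.random.default_rng(1)
I, H, T = 2, 3, 4  # input size, hidden size, number of timesteps (assumed)

W_xh = rng.normal(scale=0.5, size=(H, I))
W_hh = rng.normal(scale=0.5, size=(H, H))
W_hy = rng.normal(scale=0.5, size=(1, H))

xs = rng.normal(size=(T, I))  # made-up input sequence
target = 1.0                  # made-up target

# Forward pass, storing every hidden state for the backward pass.
hs = [np.zeros(H)]
for t in range(T):
    hs.append(np.tanh(W_xh @ xs[t] + W_hh @ hs[-1]))
y = float(W_hy @ hs[-1])
loss = 0.5 * (y - target) ** 2

# Backward pass (BPTT): the output error is propagated back through every
# timestep, and gradients for the *shared* weights are summed over time.
dW_xh = np.zeros_like(W_xh)
dW_hh = np.zeros_like(W_hh)
dW_hy = (y - target) * hs[-1][None, :]
dh = (y - target) * W_hy.ravel()      # error reaching the last hidden state
for t in reversed(range(T)):
    dz = dh * (1 - hs[t + 1] ** 2)    # back through the tanh nonlinearity
    dW_xh += np.outer(dz, xs[t])      # summed across timesteps
    dW_hh += np.outer(dz, hs[t])      # summed across timesteps
    dh = W_hh.T @ dz                  # pass the error one step back in time

print(loss, dW_hh.shape)
```

A gradient-descent update would then subtract a small multiple of each `dW_*` from the corresponding weight matrix.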


[ Problems of Recurrent Neural Networks ]

1. Vanishing gradients, and
2. Exploding gradients.

Both problems can be mitigated by reducing the number of hidden layers, and thus the complexity, within the neural network; in practice, gradient clipping and gated architectures such as LSTMs and GRUs are also common remedies.
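A small numerical sketch (the matrix sizes and weight scales are arbitrary assumptions) shows why both problems arise: during BPTT the error signal is multiplied by the recurrent weight matrix once per timestep, so over a long sequence its norm shrinks or grows roughly geometrically:

```python
import numpy as np

rng = np.random.default_rng(2)
H, T = 8, 50  # hidden size and sequence length, assumed for the sketch

def backprop_norm(scale):
    """Norm of an error signal after T backward steps through the recurrence."""
    W_hh = rng.normal(scale=scale, size=(H, H))
    grad = np.ones(H)
    for _ in range(T):          # T timesteps back in time
        grad = W_hh.T @ grad    # (tanh derivative omitted for clarity)
    return np.linalg.norm(grad)

print(backprop_norm(0.05))  # small recurrent weights -> gradient vanishes
print(backprop_norm(0.5))   # large recurrent weights -> gradient explodes
```

This is why fixes target either the depth of the unrolled computation (fewer layers, shorter sequences) or the multiplication itself (clipping, gating).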


[ Types of Recurrent Neural Networks ]

a. one to one,
b. one to many,
c. many to one, and
d. many to many.
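The four types differ only in how many inputs the network consumes and how many outputs it emits. A minimal sketch using one shared cell; the sizes and the example tasks named in the comments are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
I, H, O = 3, 4, 2  # input, hidden, output sizes (arbitrary for the sketch)

W_xh = rng.normal(scale=0.1, size=(H, I))
W_hh = rng.normal(scale=0.1, size=(H, H))
W_hy = rng.normal(scale=0.1, size=(O, H))

def run(xs, return_all):
    """Run the shared cell over a sequence; return every output or only the last."""
    h = np.zeros(H)
    ys = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h)
        ys.append(W_hy @ h)
    return np.array(ys) if return_all else ys[-1]

def one_to_many(x, steps):
    """One input seeds the state, which then unrolls to emit several outputs."""
    h = np.tanh(W_xh @ x)
    ys = [W_hy @ h]
    for _ in range(steps - 1):
        h = np.tanh(W_hh @ h)  # no new input; the state carries forward
        ys.append(W_hy @ h)
    return np.array(ys)

seq = rng.normal(size=(5, I))
x = rng.normal(size=I)

print(run([x], False).shape)    # one to one:   (2,)   e.g. plain classification
print(one_to_many(x, 3).shape)  # one to many:  (3, 2) e.g. captioning one image
print(run(seq, False).shape)    # many to one:  (2,)   e.g. sentiment of a sentence
print(run(seq, True).shape)     # many to many: (5, 2) e.g. per-word tagging
```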


[ Applications of Recurrent Neural Networks ]

RNNs are used in day-to-day activities to improve quality of life. Some of the uses of RNNs are:

1. Natural Language Processing (NLP) – For example, in IBM Watson, Apple Siri, Amazon Alexa, Microsoft Cortana and Google Assistant.
2. Machine Translation – For example, in Microsoft Translate and Google Translate.
3. Speech Recognition – For example, in IBM Watson, Apple Siri, Amazon Alexa, Microsoft Cortana and Google Assistant.


[ Are RNNs Accurate? ]

The RNNs applied in IBM Watson, Apple Siri, Amazon Alexa, Microsoft Cortana and Google Assistant each have their own flaws as well as strengths, for the simple reason that each is tailored to accomplish specific tasks.

For instance, Amazon Alexa is good at General Knowledge, Entertainment, Online Shopping and Smart Homes, but poor at Directions. Google Assistant is good at Music, Food Orders, Shopping, Directions and Voice Recognition, but poor at Communications and Security. On the other hand, Apple Siri is good at Music & Podcasts, Communications and Security, but poor at Voice Recognition and General Knowledge.


[ Areas of Improvement for Deep Learning and Recurrent Neural Networks ]

1. Natural Language Generation (NLG) – This entails processing natural language and responding via natural language as well, an area that needs improvement in all applications.
2. Mutable Natural Language Structure – For all applications, improvement is required mainly in understanding, rather than merely responding to, phrases and sentences that humans can use to mean different things.
3. Natural Language Understanding (NLU) – For all applications, much improvement is required to understand statements from different people who have different pronunciations of the same words due to accents.

