Overview
Deep learning is built on neural networks, models loosely inspired by how the human brain functions. Training a neural network is the procedure of teaching the model to predict an output from the data fed into it. That is all well when the model predicts the right output, but what happens when it goes wrong and produces an inaccurate result?
What’s Next!
In this article, we will explore the Neural Network concepts of Forward propagation and Backward propagation, and explain what a loss function is and when to use it to produce accurate output.
What is Forward Propagation?
Forward propagation is the process of passing data through a Neural Network's layers from left to right: the data flows from the input layer, through the hidden layer, to the output layer.
Let’s take an example to understand Forward Propagation in a Neural Network.
Consider a small network whose first layer, the input layer, holds two features, f1 and f2. Each feature is connected to each of the three neurons in the hidden layer, and each connection is assigned a weight (w1, w2, w3, and so on). These weighted connections carry the input data to the hidden layer, i.e., the second layer. Next, the output of each hidden neuron is passed to the output layer, which produces the predicted output.
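To make those connections concrete, here is a minimal NumPy sketch of this architecture: two input features, three hidden neurons, and one output neuron. All values here are hypothetical, chosen only to illustrate the shapes involved.

```python
import numpy as np

# Hypothetical wiring for the example above:
# 2 input features -> 3 hidden neurons -> 1 output neuron.
rng = np.random.default_rng(0)

features = np.array([0.5, 0.8])      # f1, f2 -> shape (2,)

W_hidden = rng.normal(size=(3, 2))   # one weight per (hidden neuron, feature)
b_hidden = np.zeros(3)               # one bias per hidden neuron

W_output = rng.normal(size=(1, 3))   # one weight per (output, hidden neuron)
b_output = np.zeros(1)               # one bias for the output neuron
```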
How to compute the Forward Propagation output?
After the raw data has been transferred from the input layer to the hidden layer, the hidden layer takes two steps.
Step 1:
The summation is found by multiplying each weight by its respective feature, adding the products together, and adding the bias at the end.
The formula of the summation: Z = W1*f1 + W2*f2 + … + Wn*fn + bias
Where,
f = Feature value, supplied at the input layer
W = Weight, assigned to each connection
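As a quick sketch of this arithmetic for a single hidden neuron, with hypothetical feature, weight, and bias values:

```python
import numpy as np

features = np.array([0.5, 0.8])   # f1, f2
weights  = np.array([0.4, 0.3])   # W1, W2 for one hidden neuron
bias     = 0.1

# Z = W1*f1 + W2*f2 + bias
Z = np.dot(weights, features) + bias
print(Z)  # 0.4*0.5 + 0.3*0.8 + 0.1 = 0.54
```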
Step 2:
Now, after computing the summation, the value Z is passed to the Activation Function, i.e., A = σ(Z), where σ (sigma) is the activation function and Z is the summation.
The output of the activation function, i.e., the output of the hidden layer, is then transferred to the output layer, which produces the final predicted output Y’.
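Putting the two steps together, a minimal forward pass for the two-feature example might look like the following sketch, assuming a sigmoid activation and hypothetical weights:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

features = np.array([0.5, 0.8])            # f1, f2

# Input -> hidden layer (3 neurons).
W_hidden = np.array([[0.4, 0.3],
                     [0.2, 0.7],
                     [0.6, 0.1]])
b_hidden = np.array([0.1, 0.1, 0.1])
Z_hidden = W_hidden @ features + b_hidden  # Step 1: weighted sums
A_hidden = sigmoid(Z_hidden)               # Step 2: activation

# Hidden -> output layer (1 neuron): the predicted output Y'.
W_output = np.array([[0.5, 0.4, 0.3]])
b_output = np.array([0.2])
Y_pred = sigmoid(W_output @ A_hidden + b_output)
print(Y_pred)
```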
What is Backward Propagation?
Backward propagation is the data-flow process opposite to Forward propagation: it starts from the output layer, moves back through the hidden layer, and ends at the input layer. It adjusts the weights and biases so as to reduce the loss function, or error rate, and thereby minimize the difference between the predicted and actual output.
In other words, Backward propagation is the procedure to improve the Neural Network model’s output by adjusting the weights and biases.
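Concretely, "adjusting the weights" means computing, through the chain rule, how much the loss changes when each weight changes, then nudging each weight in the opposite direction of that gradient. Here is a hedged one-neuron sketch, assuming a sigmoid activation, a squared-error loss, and hypothetical values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

f = np.array([0.5, 0.8])   # input features
w = np.array([0.4, 0.3])   # current weights
b = 0.1                    # current bias
y_true = 1.0               # actual output

# Forward pass.
z = np.dot(w, f) + b
y_pred = sigmoid(z)
loss = (y_pred - y_true) ** 2

# Backward pass: chain rule, dLoss/dw = dLoss/dy' * dy'/dz * dz/dw.
dloss_dy = 2 * (y_pred - y_true)
dy_dz = y_pred * (1 - y_pred)   # derivative of the sigmoid
grad_w = dloss_dy * dy_dz * f   # dz/dw = f
grad_b = dloss_dy * dy_dz       # dz/db = 1

# One gradient-descent update with a hypothetical learning rate.
lr = 0.1
w = w - lr * grad_w
b = b - lr * grad_b
```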
Let’s take an example to understand Backward Propagation in a Neural Network.
To follow the backward propagation procedure, we first need a predicted output, which we obtain by running Forward propagation.
Now that we have the predicted output from the Forward propagation, we compare it with the actual output.
The difference between the predicted and actual output is the error, and Backward propagation is the procedure we follow to reduce it.
Step 1: First of all, we apply a loss function. There are several loss functions available for a Neural Network, and the chosen one determines how we compute the new, adjusted value of each layer’s weights.
Step 2: Starting from the output layer, we update each layer in turn until we reach the input layer; this backward pass is what backward propagation means.
Step 3: When all the weights have been changed, we follow the Forward propagation procedure again from the input layer, using the new values, to find the new output.
Step 4: Now that you have the new predicted output, compare it with the actual output again. If the difference is small enough to ignore, we are done; otherwise, follow the backpropagation procedure again (Steps 1 to 3) to get a new output.
Note: We keep computing new predicted outputs until the Neural Network’s predictions match the actual output, or until the remaining difference is negligible. A minimal sketch of this training loop follows below.
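Steps 1 to 4 form a loop: run forward propagation, measure the loss, run backward propagation, update the weights, and repeat until the difference is negligible. Here is a minimal sketch of that loop for the one-neuron example above (learning rate, iteration cap, and stopping threshold are all hypothetical choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

f = np.array([0.5, 0.8])   # input features
y_true = 1.0               # actual output
w = np.array([0.4, 0.3])   # initial weights
b, lr = 0.1, 0.5           # initial bias, learning rate

for step in range(5000):
    # Forward propagation: new predicted output.
    y_pred = sigmoid(np.dot(w, f) + b)
    loss = (y_pred - y_true) ** 2

    # Stop once the difference is small enough to ignore.
    if loss < 1e-4:
        break

    # Backward propagation: gradients via the chain rule, then update.
    delta = 2 * (y_pred - y_true) * y_pred * (1 - y_pred)
    w -= lr * delta * f
    b -= lr * delta

print(step, loss, y_pred)
```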
What is Loss Function?
A loss function measures the performance of a Neural Network model: it reports how well the model’s predictions match the actual values. If the model’s results are poor, the loss is large, and the model is adjusted to reduce it. Training therefore aims to minimize the loss, i.e., the difference between the predicted and actual values.
For classification tasks, where the Neural Network predicts probabilities, the model uses the cross-entropy loss function.
For regression tasks, where the Neural Network predicts continuous numbers, the model uses the mean squared error (MSE) loss function.
For forecasting tasks, where the model’s performance is monitored as a percentage error, the model uses the mean absolute percentage error (MAPE) loss function.
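For reference, here are NumPy sketches of the three losses mentioned above, computed on hypothetical predicted and actual values:

```python
import numpy as np

# Classification: actual labels vs. predicted probabilities (hypothetical).
labels = np.array([1.0, 0.0, 1.0])
probs  = np.array([0.9, 0.2, 0.7])
eps = 1e-12   # guard against log(0)
cross_entropy = -np.mean(labels * np.log(probs + eps)
                         + (1 - labels) * np.log(1 - probs + eps))

# Regression / forecasting: actual vs. predicted continuous values.
actual    = np.array([100.0, 150.0, 80.0])
predicted = np.array([110.0, 140.0, 85.0])

mse  = np.mean((actual - predicted) ** 2)                     # mean squared error
mape = np.mean(np.abs((actual - predicted) / actual)) * 100   # percent error

print(cross_entropy, mse, mape)
```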
Conclusion
Forward propagation carries the raw data from the input layer, through the hidden layer, to the output layer, whereas backward propagation is the opposite procedure, moving from the output layer, back through the hidden layer, to the input layer. The loss function measures how far the predicted output lies from the actual output; if the two differ, backward propagation uses the loss to adjust the weights and biases, reducing the difference until the network predicts accurately.