Deep Learning: Impact of Gradient Descent Optimization Algorithm during Training

Rahul S
Apr 20, 2023

During training, the objective is to find the weights and biases (parameters) of the neural network that minimize the loss function. Gradient descent and its variants are the most commonly used optimization algorithms for this: at each step, the parameters are nudged in the direction that reduces the loss.
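As a rough sketch of the idea, here is plain gradient descent fitting a one-parameter-per-weight linear model to toy data. The data, learning rate, and variable names are illustrative choices, not something from the article:

```python
import numpy as np

# Toy example: fit y = w*x + b by gradient descent on a mean-squared-error loss.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=100)

w, b = 0.0, 0.0
lr = 0.1  # learning rate (illustrative value)

for step in range(200):
    y_hat = w * x + b
    # Gradients of the MSE loss with respect to each parameter
    grad_w = np.mean(2 * (y_hat - y) * x)
    grad_b = np.mean(2 * (y_hat - y))
    # Gradient descent update: move each parameter against its gradient
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # converges toward roughly (3.0, 1.0)
```

A neural network is trained the same way, except the gradients for every layer's weights are obtained by backpropagation.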

The activation function influences gradient descent through backpropagation: the gradients propagated backwards through the network pick up a factor of the activation function's derivative at every layer, and those gradients are what update the weights and biases in each layer to minimize the loss.
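A minimal sketch of where that derivative enters, using a single hidden layer with a sigmoid activation and a squared-error loss (the shapes and values are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([[0.5, -1.2]])               # (1, 2) input
W1 = np.array([[0.1, -0.3], [0.4, 0.2]])  # (2, 2) hidden weights
W2 = np.array([[0.7], [-0.5]])            # (2, 1) output weights
target = np.array([[1.0]])

# Forward pass
z1 = x @ W1
a1 = sigmoid(z1)
y_hat = a1 @ W2
loss = 0.5 * np.sum((y_hat - target) ** 2)

# Backward pass: the sigmoid derivative a1 * (1 - a1) multiplies the
# upstream gradient, so saturated units (a1 near 0 or 1) pass almost
# no gradient down to W1.
d_yhat = y_hat - target
grad_W2 = a1.T @ d_yhat
d_a1 = d_yhat @ W2.T
d_z1 = d_a1 * a1 * (1 - a1)   # activation derivative enters here
grad_W1 = x.T @ d_z1
```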

In particular, the choice of activation function affects the magnitude of these gradients. Because the chain rule multiplies one such factor per layer, repeatedly small or large factors shrink or grow the gradient exponentially with depth, producing the "vanishing" or "exploding" gradient problem.

Activation functions such as ReLU and its variants help alleviate the vanishing gradient problem: the ReLU derivative is exactly 1 for any positive input, so the gradient passes through active units without shrinking. By contrast, sigmoid and hyperbolic tangent saturate at large positive and negative inputs, where their derivatives are close to zero (the sigmoid derivative never exceeds 0.25), so stacking many such layers exacerbates vanishing gradients.
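A quick numerical illustration of that difference, multiplying one activation-derivative factor per layer as backpropagation does (the depth and pre-activation values are made up for the example):

```python
import numpy as np

def sigmoid_grad(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)          # never exceeds 0.25; tiny when |z| is large

def relu_grad(z):
    return (z > 0).astype(float)  # exactly 1 for any positive input

# Illustrative pre-activations at 20 successive layers, all moderately large,
# so the sigmoid units are saturated while the ReLU units stay active.
z_per_layer = np.full(20, 3.0)

# Backprop multiplies one activation-derivative factor per layer:
print(np.prod(sigmoid_grad(z_per_layer)))  # ~1e-27 -- the gradient has vanished
print(np.prod(relu_grad(z_per_layer)))     # 1.0    -- the magnitude is preserved
```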
