Levers in Neural Network Models
A neural network model is represented by a set of parameters and hyperparameters. The parameters are the weights and biases of all the nodes, while the hyperparameters are the levers we set ourselves: the number of layers, the number of nodes in each layer, the activation functions, the cost function, the learning rate, and the optimizer.
Training a neural network model means finding values for these parameters and hyperparameters that maximize the accuracy of its predictions for the use case at hand.
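To make the distinction concrete, here is a minimal sketch, using Keras purely as an illustration (the choices of layer sizes, activation, loss, and learning rate below are arbitrary examples, not values from this article). The hyperparameters are set by us before training; the parameters live inside the layers and are learned from data.

```python
from tensorflow import keras

# Hyperparameters: levers we choose, not learned from data.
n_layers = 2          # number of hidden layers
n_nodes = 16          # nodes per hidden layer
activation = "relu"   # activation function
learning_rate = 0.01  # step size used by the optimizer

model = keras.Sequential()
model.add(keras.layers.Dense(n_nodes, activation=activation, input_shape=(4,)))
for _ in range(n_layers - 1):
    model.add(keras.layers.Dense(n_nodes, activation=activation))
model.add(keras.layers.Dense(1))

# The cost function and optimizer are hyperparameters too.
model.compile(
    optimizer=keras.optimizers.SGD(learning_rate=learning_rate),
    loss="mean_squared_error",
)

# Parameters: the weights and biases held in each layer,
# initialized randomly and adjusted during training.
print(model.count_params())
```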
Usually, we start with a network architecture chosen by intuition, and the weights and biases are initialized to random values (or with some statistical method). We then iterate: apply the weights and biases to the inputs, compute the error, and adjust the weights and biases so that the error is reduced. This iterative back-and-forth process continues until the desired level of performance is achieved, as the sketch below shows.
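Here is a minimal NumPy sketch of that loop on a toy problem (a single-node "network" learning a straight line; the data, learning rate, and epoch count are illustrative assumptions, not part of the article): random initialization, forward pass, error computation, and a small adjustment to the weights and bias on every iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = 2x + 1.
X = rng.uniform(-1, 1, size=(100, 1))
y = 2 * X + 1

# Parameters initialized to random values.
w = rng.normal(size=(1, 1))
b = np.zeros((1,))

learning_rate = 0.1           # hyperparameter

for epoch in range(200):      # the iterative back-and-forth process
    y_hat = X @ w + b                  # apply weights and biases to the inputs
    error = y_hat - y
    loss = np.mean(error ** 2)         # compute the error (cost)

    # Adjust weights and biases so the error is reduced (gradient descent).
    grad_w = 2 * X.T @ error / len(X)
    grad_b = 2 * error.mean(axis=0)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w.ravel()[0]:.2f}, b={b[0]:.2f}, final loss={loss:.4f}")
```

After enough iterations the learned weight and bias approach 2 and 1, which is exactly the "adjust until the error stops shrinking" behavior described above; a real network does the same thing across many weights at once.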
You should read the following articles to build an intuitive understanding of how a neural network learns: