Basic Steps in Neural Network based Algorithms — A mini-Guide for absolute beginners

This article walks through the basic workflow of an artificial neural network (ANN), from input to backpropagation, for absolute beginners.

Rahul S

Here’s a summary of the main points covered in the article:

  1. Input Layer: The input to an ANN is a vector of numeric values. Raw data such as text, images, or speech must first be pre-processed and encoded into a suitable numeric representation.
  2. Hidden Layers: Hidden layers are where the network learns intermediate representations of the data. An ANN can have one or more hidden layers, each containing one or more nodes; the number of layers and of nodes per layer defines the architecture of the network.
  3. Weights and Biases: Weights and biases are the trainable parameters of a neural network. Each node has associated weights and a bias, which are adjusted during training to minimize prediction error.
  4. Activation Functions: Activation functions determine how a node transforms its input before propagating it to the next layer; they are what lets the network learn non-linear patterns in the data. Different activation functions have specific advantages and applications (see the first sketch after this list).
  5. Output Layer: The output layer is the final layer of the network, where predictions are produced. Its activation function depends on the type of problem, such as classification or regression.
  6. Setup and Initialization: Before training, the input data is pre-processed and split into training, validation, and test sets, and the hyperparameters are chosen. The weights and biases are then initialized, often with small random values (second sketch below).
  7. Forward Propagation: During forward propagation, input data is passed through the network, layer by layer, to generate predictions. The outputs are compared with the actual values to compute the error (final sketch below).
  8. Measuring Accuracy and Error: The error is measured with loss and cost functions: a loss function measures the error for an individual sample, while a cost function averages the loss over a set of samples (third sketch below).
  9. Backpropagation: Backpropagation adjusts the weights and biases based on the prediction error. Starting from the output layer, the error is propagated backward through the network, and the parameters are updated layer by layer (final sketch below).
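
To make points 4 and 5 concrete, here is a minimal NumPy sketch of three common activation functions. The function names and the tiny test values are my own illustration, not code from any particular library:

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into (0, 1); a common choice for
    # binary-classification output layers.
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # Keeps positive values, zeroes out negatives; a common default
    # for hidden layers.
    return np.maximum(0.0, z)

def softmax(z):
    # Turns a vector of raw scores into probabilities that sum to 1;
    # a common choice for multi-class output layers.
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

print(sigmoid(np.array([-2.0, 0.0, 2.0])))  # values in (0, 1)
print(relu(np.array([-1.0, 0.5])))          # [0.  0.5]
print(softmax(np.array([1.0, 2.0, 3.0])))   # sums to 1
```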
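For point 6, a minimal sketch of the setup stage: splitting the data and initializing the parameters. The array shapes, the 70/15/15 split, and the synthetic labels are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical pre-processed data: 1,000 samples with 3 numeric features.
X = rng.normal(size=(1000, 3))
y = (X.sum(axis=1) > 0).astype(float)  # synthetic binary labels

# Shuffle, then split roughly 70/15/15 into train / validation / test.
idx = rng.permutation(len(X))
train_idx, val_idx, test_idx = idx[:700], idx[700:850], idx[850:]
X_train, y_train = X[train_idx], y[train_idx]

# Initialize parameters for a tiny 3 -> 4 -> 1 network with small
# random weights and zero biases.
W1 = rng.normal(0, 0.1, size=(4, 3)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.1, size=(1, 4)); b2 = np.zeros(1)
```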
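Point 8 distinguishes a loss (error for one sample) from a cost (error averaged over many samples). A minimal sketch with binary cross-entropy, assuming the same binary-classification setting as above:

```python
import numpy as np

def bce_loss(y_true, y_pred, eps=1e-12):
    # Loss: binary cross-entropy for a single sample.
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return -(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))

def bce_cost(y_true, y_pred):
    # Cost: the loss averaged over a whole batch of samples.
    return float(np.mean(bce_loss(np.asarray(y_true), np.asarray(y_pred))))

print(bce_loss(1.0, 0.9))                # small error for a good prediction
print(bce_cost([1.0, 0.0], [0.9, 0.2]))  # average over two samples
```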
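Finally, points 7 and 9 come together in a single training step: a forward pass to produce a prediction, then a backward pass that pushes the error back through the layers and updates the parameters. This is a self-contained sketch of a hypothetical 3 -> 4 -> 1 network with a ReLU hidden layer and a sigmoid output, meant to illustrate the idea rather than serve as a production implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, size=(4, 3)); b1 = np.zeros(4)  # hidden layer
W2 = rng.normal(0, 0.1, size=(1, 4)); b2 = np.zeros(1)  # output layer

def train_step(x, y_true, lr=0.1):
    global W1, b1, W2, b2

    # Forward propagation, keeping intermediate values for backprop.
    z1 = W1 @ x + b1
    h = np.maximum(0.0, z1)                       # hidden layer (ReLU)
    y_hat = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))  # output (sigmoid)

    # Backpropagation. With a sigmoid output and cross-entropy loss,
    # the output-layer error simplifies to (y_hat - y_true).
    dz2 = y_hat - y_true            # error at the output layer
    dW2 = np.outer(dz2, h)
    db2 = dz2
    dh = W2.T @ dz2                 # error propagated to the hidden layer
    dz1 = dh * (z1 > 0)             # ReLU derivative: 1 where z1 > 0
    dW1 = np.outer(dz1, x)
    db1 = dz1

    # Gradient-descent update, layer by layer.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    return float(y_hat[0])

x = np.array([0.5, -1.2, 3.0])  # one pre-processed input vector
for _ in range(100):
    pred = train_step(x, 1.0)   # repeatedly fit a single (x, y=1) pair
print(pred)                     # moves toward 1.0 as training proceeds
```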
