# Machine Learning — Cost Function, An Introduction

The cost function measures the error in the model’s predictions and guides the optimization process towards minimizing that error.

The cost function is used to learn the parameters of a machine learning model such that the total error is as small as possible. It measures how wrong the model is in estimating the relationship between the dependent and independent variables.

It is a mathematical function that measures the difference between the predicted output of a model and the true output (also known as the target or label).

In fact, one can say that the very goal of training a machine learning model is to minimize the cost function, since it reflects the error in the model’s predictions.

There are different types of cost functions used in machine learning, depending on the problem and the type of model being used.

Mean Squared Error (MSE):

MSE is used in regression problems. It is defined as the average of the squared differences between the predicted and true values.

MSE = (1/N) * Σ(y_pred - y_true)²

where y_pred is the predicted value, y_true is the true value, and N is the number of samples in the dataset.
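The formula above translates directly into code. Here is a minimal sketch in Python using NumPy (the function name `mse` and the sample values are illustrative, not from the article):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean Squared Error: average of squared prediction errors."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    # Square each error so positive and negative errors both add up,
    # then average over the N samples.
    return np.mean((y_pred - y_true) ** 2)

# Example with made-up values:
# errors are -0.5, 0.0, 1.5 → squared: 0.25, 0.0, 2.25 → mean ≈ 0.833
print(mse([3.0, 5.0, 2.5], [2.5, 5.0, 4.0]))
```

Squaring the errors penalizes large deviations more heavily than small ones, which is one reason MSE is sensitive to outliers.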

Binary Cross-Entropy:

Binary cross-entropy is used for binary classification problems, where the goal is to predict one of two possible outcomes. It measures the difference between the predicted probability of the positive class and the true label. The formula for binary cross-entropy (for a single sample) is:

CE = -(y_true * log(y_pred) + (1 - y_true) * log(1 - y_pred))

where y_true is the true label (0 or 1), and y_pred is the predicted probability of the positive class (i.e., the probability of the predicted label being 1).
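A minimal sketch of this loss in Python, averaged over a batch of samples (the function name and the clipping constant `eps` are my own additions; clipping avoids taking log(0) when a predicted probability is exactly 0 or 1):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Average binary cross-entropy over a batch of samples."""
    y_true = np.asarray(y_true, dtype=float)
    # Clip predicted probabilities away from 0 and 1 so log() stays finite.
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    # When y_true is 1, only the log(y_pred) term contributes;
    # when y_true is 0, only the log(1 - y_pred) term contributes.
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Confident, correct predictions give a low loss (≈ 0.105 here):
print(binary_cross_entropy([1, 0], [0.9, 0.1]))
```

Note that the loss grows without bound as the model becomes confidently wrong (e.g., predicting 0.99 when the true label is 0), which strongly penalizes overconfident mistakes.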

Categorical Cross-Entropy: