
Cross Entropy and Cross-Entropy Loss are closely related concepts, but they serve different purposes in the realm of probability theory and machine learning.
Cross Entropy:
Cross Entropy, at its core, is a measure of dissimilarity between two probability distributions: it quantifies how different one probability distribution is from another. For discrete distributions it is expressed as:

H(y, p) = -Σ_i y_i log(p_i)
In this expression:
y_i represents the true probability distribution (often one-hot encoded labels).
p_i represents the predicted probability distribution generated by a model.
Cross Entropy is a general concept from information theory and probability theory. It’s not exclusive to machine learning but has various applications in areas like information retrieval and statistics.
Cross Entropy captures how costly it is to use one probability distribution (the predicted probabilities) to encode outcomes drawn from another (the true probabilities); the gap between that cost and the true distribution's own entropy is the information lost, also known as the KL divergence.
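As a concrete illustration, here is a minimal NumPy sketch of the formula above; the function name, the example distributions, and the clipping epsilon are illustrative choices rather than anything prescribed by the text.

```python
import numpy as np

def cross_entropy(y, p, eps=1e-12):
    """H(y, p) = -sum_i y_i * log(p_i), measured in nats."""
    p = np.clip(p, eps, 1.0)          # guard against log(0)
    return -np.sum(y * np.log(p))

# True distribution (a one-hot label) vs. a model's predicted distribution
y_true = np.array([0.0, 1.0, 0.0])
p_pred = np.array([0.1, 0.7, 0.2])

print(cross_entropy(y_true, p_pred))  # ≈ 0.357, i.e. -log(0.7)
```

The closer the predicted distribution is to the true one, the smaller this value becomes; it bottoms out at the entropy of the true distribution (zero for a one-hot label).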
Cross-Entropy Loss/Log Loss:
Cross-Entropy Loss, also known as “Log Loss,” is a specific application of Cross Entropy in machine learning.
It is used as a loss function in classification tasks to guide the training of models. For a single example, the Cross-Entropy Loss is calculated as:

L = -Σ_i y_i log(p_i)

and in practice it is averaged over all examples in a batch or dataset.
In this context:
y_i represents the true class labels (often one-hot encoded).
p_i represents the predicted probabilities assigned by the model to each class.
During model training, the goal is to minimize the Cross-Entropy Loss. By minimizing this loss, the model aims to produce predicted probabilities that closely resemble the true class labels.
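Below is a minimal, hypothetical PyTorch training step that puts this into practice; the tiny linear model, the random data, and the learning rate are stand-ins invented for this sketch. Note that PyTorch's nn.CrossEntropyLoss applies a softmax internally, so it takes raw logits and integer class indices rather than one-hot vectors.

```python
import torch
import torch.nn as nn

# Hypothetical setup: a tiny linear classifier mapping 4 features to 3 classes.
model = nn.Linear(4, 3)
loss_fn = nn.CrossEntropyLoss()       # combines log-softmax and negative log-likelihood
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 4)                 # batch of 8 samples
labels = torch.randint(0, 3, (8,))    # true class indices (not one-hot)

logits = model(x)                     # raw scores; softmax happens inside the loss
loss = loss_fn(logits, labels)        # average cross-entropy over the batch

optimizer.zero_grad()
loss.backward()                       # gradients push predicted probabilities toward the labels
optimizer.step()
```

Repeating this step over many batches drives the loss down, which is exactly the sense in which the model's predicted probabilities are pushed toward the true class labels.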
In Cross Entropy, we compare any two probability distributions, while in Cross-Entropy Loss, we compare a model's predicted probabilities with the true class labels; since a one-hot label is itself a degenerate probability distribution, the loss for each example reduces to -log(p_c), the negative log of the probability assigned to the correct class c.