Embeddings: A Simple Overview

THE GEN AI SERIES

Rahul S
4 min read · Feb 15, 2024

An embedding is a way of representing data as points in a vector space whose locations are semantically meaningful: texts with similar meanings end up close together, and the position of a piece of text in the space captures something about what it means.
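As a toy illustration, closeness in the space is usually measured with cosine similarity. The three-dimensional vectors below are made up for demonstration; real embeddings have hundreds or thousands of dimensions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Made-up embeddings: related words point in similar directions.
embeddings = {
    "cat":    np.array([0.9, 0.8, 0.1]),
    "kitten": np.array([0.85, 0.75, 0.2]),
    "car":    np.array([0.1, 0.2, 0.9]),
}

print(cosine_similarity(embeddings["cat"], embeddings["kitten"]))  # high (~0.99)
print(cosine_similarity(embeddings["cat"], embeddings["car"]))     # lower (~0.30)
```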

So, how are these embeddings computed?

One simple method is to embed each word in a sentence separately and then take the sum, or the mean, of the individual word embeddings. For a long time this was the dominant approach: take a list of the most commonly occurring English words and, for each word, separately train a set of parameters that produces an embedding for just that word. To embed a sentence, average the embeddings of all its words.
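Here is a minimal sketch of that classic approach, with a tiny hand-made lookup table standing in for a real pre-trained vocabulary such as word2vec or GloVe:

```python
import numpy as np

# Stand-in for a pre-trained word-embedding table.
word_embeddings = {
    "the": np.array([0.1, 0.0, 0.2]),
    "dog": np.array([0.8, 0.7, 0.1]),
    "bit": np.array([0.3, 0.9, 0.4]),
    "man": np.array([0.7, 0.2, 0.6]),
}

def sentence_embedding(sentence):
    """Mean of the individual word embeddings (ignores word order)."""
    vectors = [word_embeddings[w] for w in sentence.lower().split()]
    return np.mean(vectors, axis=0)

# Because averaging discards order, these two sentences collide:
print(sentence_embedding("the dog bit the man"))
print(sentence_embedding("the man bit the dog"))  # identical vector
```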

But such a sentence-level embedding doesn’t understand word ordering: as the sketch above shows, averaging gives “the dog bit the man” and “the man bit the dog” exactly the same embedding. So modern embeddings do something more sophisticated. We instead use a transformer neural network to compute a context-aware representation of each word, and then take the average of those context-aware embeddings.
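Here is a sketch of that modern pipeline using the Hugging Face transformers library (assumed installed along with PyTorch). The model name is just one common choice, and plain mean pooling is a simplification of what production embedding models do:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

def embed(sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)                   # one vector per token
    token_vectors = outputs.last_hidden_state       # shape: (1, tokens, 384)
    return token_vectors.mean(dim=1).squeeze()      # average into one vector

print(embed("the dog bit the man").shape)  # torch.Size([384])
```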

A transformer neural network computes an embedding for each word while taking into account the other words that appear in the sentence, so the same word can receive different embeddings in different contexts.
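To see that context-awareness concretely, the sketch below compares the vector assigned to the word “bank” in two different sentences. It assumes “bank” survives tokenization as a single token, which holds for this tokenizer:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

def token_vector(sentence, word):
    """Return the context-aware vector for `word` inside `sentence`.
    Assumes `word` maps to a single token in the tokenizer's vocabulary."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (tokens, 384)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

v1 = token_vector("i sat on the river bank", "bank")
v2 = token_vector("i deposited cash at the bank", "bank")
print(F.cosine_similarity(v1, v2, dim=0).item())  # noticeably below 1.0
```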
