Embeddings are a really neat trick that often come wrapped in a pile of intimidating jargon.
If you can make it through that jargon, they unlock powerful and exciting techniques that can be applied to all sorts of interesting problems.
I gave a talk about embeddings at PyBay 2023. This article represents an improved version of that talk, which should stand alone even without watching the video.
If you’re not yet familiar with embeddings I hope to give you everything you need to get started applying them to real-world problems.
The YouTube video near the beginning of the article is a great way to consume this content.
The basic idea is this: let’s assume you have a blog with thousands of posts.
If you were to take a blog post and run it through an embedding model, the model would turn that blog post into a list of gibberish floating point numbers. (Seriously, it’s gibberish… nobody knows what these numbers actually mean.)
As you run additional posts through the model, you’ll get additional lists of numbers, and each list will mean something. (Again, we don’t know what.)
The thing is, if you were to take these gibberish values and plot them on a graph with X, Y, and Z coordinates, you’d start to see clumps of values next to each other.
These clumps would represent blog posts that are somehow related to each other.
Again, nobody knows why this works… it just does.
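The clumping idea can be sketched in a few lines of Python. This is a toy illustration, not a real embedding model: the `toy_embed` function below is a hypothetical stand-in that just counts words from a tiny fixed vocabulary, whereas a real model produces dense, uninterpretable floats. The cosine-similarity math, though, is the same one you’d use on real embedding vectors to find which posts sit near each other.

```python
import math
from collections import Counter

def toy_embed(text):
    # Hypothetical stand-in for a real embedding model: a bag-of-words
    # vector over a tiny fixed vocabulary. Real models return dense
    # floating point vectors whose individual values are gibberish to us.
    vocab = ["python", "django", "sql", "recipe", "bread", "oven"]
    counts = Counter(text.lower().split())
    return [float(counts[word]) for word in vocab]

def cosine_similarity(a, b):
    # Measures how close two vectors point in the same direction:
    # 1.0 means identical direction, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Pretend these are blog posts (title -> body text).
posts = {
    "Optimizing Django SQL queries": "django sql python",
    "Python tips for Django devs": "python django",
    "My sourdough bread recipe": "bread recipe oven",
}

# Rank posts by similarity to a query: the two Python/Django posts
# clump together, while the bread recipe lands far away.
query_vec = toy_embed("django python")
ranked = sorted(
    posts,
    key=lambda title: cosine_similarity(query_vec, toy_embed(posts[title])),
    reverse=True,
)
print(ranked[0])  # → Python tips for Django devs
```

Swap `toy_embed` for a call to any real embedding model and the rest of the code stays the same: that’s the whole trick behind “related posts” features and semantic search.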
This principle underpins virtually all LLM development that’s taken place over the past ten years.
What’s mind-blowing is that, depending on the embedding model you use, you aren’t limited to a graph with 3 dimensions: some models produce vectors with hundreds or even thousands of dimensions.
If you are at all interested in working with large language models, you should take 38 minutes and read this post (or watch the video). Not only did it help me understand the concept better, it’s also filled with real-world use cases where embeddings can be applied.