From the series: words and their mappings.

words and their mappings - introduction

"Words, my colonel, do not reach certain meanings." - Dangerous Games, Oğuz Atay

Language is our primary tool for grasping and communicating both the abstract and the concrete. To capture and communicate the vastness of thought, emotion, and perception, we use an elegant notation system: words.

I am not sure whether Hikmet, the main character of Dangerous Games, is entirely right about that. But words are a notation for discretizing a continuum of meaning, and perhaps the most useful notation we have ever used.

Years after the boom of deep learning, with the advent of Large Language Models (LLMs), we have come to realize once again that language is more than just words. It is a medium for transferring and shaping concepts and information.

The map is not the territory, notation is not the music, and words are not the mathematical objects LLMs engage with.

This post is a brief introduction to a series exploring words, their mappings, and the applications that emerge from them in the context of LLMs. In the series, we will explore:

Words and Notations

Our attempts to organize, represent, or express concepts often lead us to create systems of notation, that is, structured languages such as mathematics, musical scores, or even emoji. These notations serve as tools that help us encode and manipulate abstract (and concrete) concepts. They shape how we think, how we reason, what we can imagine, and what we can do.

Words, the most common notation we use among ourselves, are also the notation we use to communicate with LLMs. But make no mistake: to LLMs, words are merely notation as well, representations of underlying mathematical objects.

Internally, LLMs use a notation that is different from words.

LLMs and Meanings

In LLMs, each word, phrase, or concept is mapped into an abstract, high-dimensional space known as the latent space, where the model identifies and captures complex patterns and relationships. Within this latent space, concepts are represented by vectors, enabling the model to organize and interpret language based on the relationships between these mathematical representations.

These vectors are called embeddings. The latent space is vast, allowing the model to place concepts that often appear together or share meaning in proximity to one another.
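
To make "proximity in latent space" concrete, here is a minimal sketch. The vectors below are made up purely for illustration (real embeddings have hundreds or thousands of dimensions), and cosine similarity is just one common way to measure how close two vectors are.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: values near 1.0 mean similar direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional embeddings; the numbers are invented for illustration.
embeddings = {
    "cat":   np.array([0.9, 0.8, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.9, 0.2, 0.1]),
    "piano": np.array([0.1, 0.0, 0.9, 0.8]),
}

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))    # high: related concepts
print(cosine_similarity(embeddings["cat"], embeddings["piano"]))  # low: unrelated concepts
```

In a real model, of course, nobody hand-writes these vectors; they are learned from data, and the geometry of the space ends up encoding the relationships between concepts.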

This representation enables a wide range of applications: content generation, semantic search, and exploratory and creative tools.
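
As a sketch of one of those applications, semantic search can be as simple as embedding a query and a set of documents into the same latent space and ranking by similarity. The library and model name below are assumptions, not requirements; any embedding model would do.

```python
# Minimal semantic-search sketch. Assumes the sentence-transformers package
# is installed and that the chosen model name is available locally or downloadable.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # model choice is an assumption

documents = [
    "How to tune a guitar by ear",
    "A beginner's guide to training neural networks",
    "Recipes for quick weeknight dinners",
]
query = "getting started with deep learning"

# Embed the query and the documents into the same space,
# then rank the documents by cosine similarity to the query.
doc_vectors = model.encode(documents, convert_to_tensor=True)
query_vector = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_vector, doc_vectors)[0]

best = int(scores.argmax())
print(documents[best])  # expected: the neural-networks guide
```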

In the next post, we will explore the power of embeddings and the latent space, and look at how LLMs can move beyond standard applications to fuel innovative, user-centered AI tools.

Stay tuned!