# Entropy + Algebra + Topology = ?

Today I'd like to share a bit of math involving ideas from information theory, algebra, and topology. It's all in a new paper I recently uploaded to the arXiv. The paper is short — just 11 pages! Even so, I thought it'd be nice to stroll through some of the surrounding mathematics here.

To introduce those ideas, let's start by thinking about the function $d\colon[0,1]\to\mathbb{R}$ defined by $d(x)=-x\log x$ when $x>0$ and $d(x)=0$ when $x=0$. Perhaps after getting out pencil and paper, it's easy to check that this function satisfies an equation that looks a lot like the product rule from calculus:

$$d(xy) = x\,d(y) + d(x)\,y \qquad \text{for all } x,y\in[0,1].$$
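If you'd rather let a computer hold the pencil, here is a minimal sanity check of that identity in Python (using natural logarithms, and sampling a handful of points in $[0,1]$):

```python
import math

def d(x: float) -> float:
    """The function d(x) = -x log x, with d(0) = 0 by convention."""
    return -x * math.log(x) if x > 0 else 0.0

# Check the Leibniz-style identity d(xy) = x*d(y) + d(x)*y on a grid of points.
for x in (0.1, 0.25, 0.5, 0.9):
    for y in (0.2, 0.5, 0.75, 1.0):
        assert math.isclose(d(x * y), x * d(y) + d(x) * y, abs_tol=1e-12)
print("d(xy) = x d(y) + d(x) y holds at all sampled points")
```

The identity is exact, of course: expanding $-xy\log(xy) = -xy\log x - xy\log y$ gives the two terms on the right.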

Functions that satisfy an equation reminiscent of the "Leibniz rule," like this one, are called derivations, which invokes the familiar idea of a derivative. The nonzero term $-x\log x$ above may also look familiar to some of you. It's an expression that appears in the Shannon entropy of a probability distribution. A probability distribution on a finite set $\{1,\ldots,n\}$ for $n\geq 1$ is a sequence $p=(p_1,\ldots,p_n)$ of nonnegative real numbers satisfying $\sum_{i=1}^np_i=1$, and the Shannon entropy of $p$ is defined to be

$$H(p) = \sum_{i=1}^n d(p_i) = -\sum_{i=1}^n p_i\log p_i.$$

Now it turns out that the function $d$ is nonlinear, which means we can't pull it out in front of the summation. In other words, $H(p)\neq d(\sum_ip_i)$ in general; indeed, $d(\sum_ip_i)=d(1)=0$, while entropy is usually positive. Even so, curiosity might cause us to wonder about settings in which Shannon entropy is itself a derivation. One such setting is described in the paper above, which shows a correspondence between Shannon entropy and derivations of (wait for it...) topological simplices!
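A tiny Python sketch makes the contrast concrete: entropy is the sum of $d$ over the coordinates, not $d$ of the sum (natural logs assumed, and the function name `shannon_entropy` is mine):

```python
import math

def d(x: float) -> float:
    # d(x) = -x log x, with d(0) = 0
    return -x * math.log(x) if x > 0 else 0.0

def shannon_entropy(p):
    """H(p) = sum_i d(p_i) = -sum_i p_i log p_i."""
    assert math.isclose(sum(p), 1.0), "p must be a probability distribution"
    return sum(d(pi) for pi in p)

p = (0.5, 0.25, 0.25)
print(shannon_entropy(p))  # about 1.0397: the sum of d over the coordinates
print(d(sum(p)))           # d(1) = 0: entropy is NOT d applied to the sum
```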

But what does that mean? To make sense of it, we need to bring in an algebraic tool that has origins in homotopy theory. That tool is called an operad, and it's something we've previously introduced here on the blog. Roughly speaking, an operad is an abstract way to encode the various "flavors" that algebras come in: associative algebras, commutative algebras, Lie algebras, and so on. Operads have been used extensively in algebraic topology (see this friendly article by Jim Stasheff in the AMS Notices) and even in physics, too. In fact, we saw an example of an operad on PBS Infinite Series way back in the day — the associahedra!

As it turns out, topological simplices are another nice example of an operad, as described in this old blog post and more recently in chapter 12 of Tom Leinster's new book, Entropy and Diversity. Formally, an $(n-1)$-simplex $\Delta^{n-1}$ is the set of all points $(p_1,\ldots,p_n)$ in $\mathbb{R}^n$ such that $0\leq p_i\leq 1$ for each $i$ and $\sum_i p_i=1$. So a point in a simplex is nothing more than a probability distribution! In this way, probabilities and topology go hand-in-hand.
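The simplex condition is simple enough to state as code. Here's a small membership test (the helper name `in_simplex` and the tolerance are my own choices; floating-point sums need a little slack):

```python
def in_simplex(p, tol=1e-9):
    """Check whether p = (p_1, ..., p_n) lies in the (n-1)-simplex:
    every coordinate in [0, 1] and the coordinates summing to 1.
    Equivalently: p is a probability distribution on n outcomes."""
    return all(0.0 <= pi <= 1.0 for pi in p) and abs(sum(p) - 1.0) <= tol

print(in_simplex((0.2, 0.3, 0.5)))  # True: a point of the 2-simplex
print(in_simplex((0.6, 0.6)))       # False: coordinates sum to 1.2
```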

But what does this have to do with entropy? Or algebra? Or derivations, for that matter? I'll explain. But first, let me tell you why I find the confluence of these ideas so intriguing.

## A Detour into (Co)homology

In recent years, it's become evident that the intersection of information theory and algebraic topology is fertile ground. Ideas from (co)homological algebra, in particular, have arisen in a few different places. Loosely speaking, homological tools enable the detection of "holes" in a topological space and are thus a helpful way to distinguish one space from another: just count the number of holes in each! Conceptually, a hole is like a closed loop of string through which you can poke your finger. Said differently, a hole is a closed string that is not the boundary of any 2-dimensional region of the space. And by the way, boundaries are often denoted by the letter $d$, so that when $R$ is a region, its boundary is written $dR$.

If $S$ is a closed string, then intuitively it has no boundary. (Imagine starting with the unit interval $[0,1]$ and gluing the endpoints $0$ and $1$ together to form a loop; the endpoints, which formed the boundary, are now gone.) This idea can be succinctly written as $dS=0$. If that closed string $S$ is also the boundary of some region, so that $S=dR$, then it follows that $dS=d(dR)=0$. This leads to the pithy saying, "the boundary of a boundary is zero," which translates concisely as $d^2=0$. This equation is strikingly fundamental in mathematics.
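You can watch $d^2=0$ happen in the smallest interesting example: a solid triangle. The boundary maps become matrices, and their product vanishes. (This is a standard simplicial-chain computation, written here in plain Python with my own sign conventions.)

```python
# Boundary matrices for a solid triangle with vertices v0, v1, v2,
# edges e01, e02, e12, and one 2-dimensional face f.
# d1 maps edges to vertices; its columns are
# d1(e01) = v1 - v0,  d1(e02) = v2 - v0,  d1(e12) = v2 - v1.
d1 = [
    [-1, -1,  0],   # coefficient of v0
    [ 1,  0, -1],   # coefficient of v1
    [ 0,  1,  1],   # coefficient of v2
]
# d2 maps the face to its boundary edges: d2(f) = e01 - e02 + e12.
d2 = [[1], [-1], [1]]

# The matrix product d1 * d2 -- "the boundary of a boundary".
product = [
    [sum(d1[i][k] * d2[k][j] for k in range(3)) for j in range(1)]
    for i in range(3)
]
print(product)  # [[0], [0], [0]]: d squared is zero
```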

"If I could only understand the beautiful consequence following from the concise proposition $d^2=0$."
- Henri Cartan

Quanta Magazine recently published a great article explaining these ideas, so I won't go into detail. The main thing to know is that "holes" are more formally called cycles, or better yet, 1-cycles, since the concept can be abstracted to higher dimensions. In any case, the ability to detect boundaries is important. A one-dimensional hole is precisely a 1-cycle that isn't a boundary! What's more, this story about homology has a dual version called cohomology. There, the dual notion of a hole is called a cocycle, and in both cases the totality of all (co)cycles, (co)boundaries (and higher dimensional analogues), and the (co)boundary detector $d$ can be organized into something called a (co)chain complex.

Lingo aside, here's the relevant point: Although I've drawn amoeba-like shapes above, we can also make sense of "holes" and "shapes" and "(co)homology" in a purely algebraic, rather than topological, setting. For example, you can compute the homology of your favorite associative algebra! In this algebraic context, there are some scenarios in which the boundary operator $d$ may also satisfy the Leibniz rule (or some version of it), i.e. the boundary operator $d$ may also be a derivation.

I'm being admittedly vague here, but I hope to pique your curiosity.

After all, what does any of this have to do with entropy?

## Connecting the Dots

As I mentioned above, the nexus of information theory and algebraic topology is a tantalizing place. In 2015, Pierre Baudot and Daniel Bennequin published a paper called "The Homological Nature of Entropy," where they introduced tools of "information cohomology" and constructed a certain cochain complex for which entropy represents the unique 1-cocycle. Around the same time, Philippe Elbaz-Vincent and Herbert Gangl defined so-called "information functions of degree 1," which are functions that look like entropy, and proved that these functions behave "a lot like certain derivations."

A few years earlier, in 2011, John Baez, Tobias Fritz, and Tom Leinster gave a category-theoretic characterization of entropy. While preparing that paper, Baez wrote an informal article on the nLab where he observed that entropy seems to behave like a derivation when viewed from the vantage point of operads. (Verifying this observation and making it precise is the content of my paper.) And, as if that weren't enough, in 2019 Tom Mainiero explored cohomological ideas in the context of mutual information and entropy in a paper called "Homological Tools for the Quantum Mechanic" and found that entropy appears in the Euler characteristic of a particular cochain complex associated to a quantum state.

Phew!

Taking inventory of these ideas, one gets the feeling that these results all point in the same direction: entropy behaves a little like "$d$ of something" for some suitable (co)boundary-like operator $d$.

The result I've shared on the arXiv is in a similar vein.

## Entropy + Algebra + Topology = Derivations

Rather than looking at derivations $d$ on a (co)chain complex associated to a topological space, or derivations of an associative algebra, we instead look at derivations of the operad of topological simplices. Defining that concept is a key part of the paper, so you'll have to read the article to know what it means!

Inspiration for this came from a few observations made by John Baez in this 2011 blog post together with a nice characterization of Shannon entropy given by Dmitry Faddeev in 1956 and an enlightening variation of it given recently by Tom Leinster in chapter 12 of this book. And I first learned about the operad of simplices in this excellent talk by Tom at CIRM in 2017 on "The Categorical Origins of Entropy."

The math that ties all this together is explained in the preprint, which I've titled "Entropy as a Topological Operad Derivation." I hope you'll take a look! Perhaps unsurprisingly, there are a few pictures, too. Here's my favorite one, which you can find on page 9:

And with that, I'll leave you with the punchline of the paper:

### Theorem.

Shannon entropy defines a derivation of the operad of topological simplices, and for every derivation of this operad there exists a point at which it is given by a constant multiple of Shannon entropy.
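The derivation statement is phrased operadically in the paper, but its numerical shadow is a classical fact: composing a distribution $p$ with distributions $q^1,\ldots,q^n$ (substituting $q^i$ into the $i$-th slot, giving the joint distribution with entries $p_i q^i_j$) satisfies the chain rule $H(p\circ(q^1,\ldots,q^n)) = H(p) + \sum_i p_i H(q^i)$. Here's a quick check of that rule in Python (the helper names are mine):

```python
import math

def H(p):
    """Shannon entropy (natural log) of a probability distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def compose(p, qs):
    """Operadic composition of simplices: substitute the distribution
    qs[i] into the i-th slot of p, producing the joint distribution
    with entries p[i] * qs[i][j]."""
    return [p[i] * q for i in range(len(p)) for q in qs[i]]

p  = [0.5, 0.5]
qs = [[0.2, 0.8], [0.4, 0.3, 0.3]]

lhs = H(compose(p, qs))
rhs = H(p) + sum(p[i] * H(qs[i]) for i in range(len(p)))
print(math.isclose(lhs, rhs))  # True: the chain rule for entropy
```

It's this chain rule, suitably reinterpreted, that makes entropy look like a Leibniz-style derivation of the operad of simplices.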
