# Comparing Topologies

It's possible that a set $X$ can be endowed with two or more topologies that are comparable. Over the years, mathematicians have used various words to describe the comparison: a topology $\tau_1$ is said to be coarser than another topology $\tau_2$, and we write $\tau_1\subseteq\tau_2$, if every open set in $\tau_1$ is also an open set in $\tau_2$. In this scenario, we also say $\tau_2$ is finer than $\tau_1$. But other folks like to replace "coarser" by "smaller" and "finer" by "larger." Still others prefer to use "weaker" and "stronger." But how can we keep track of all of this? Personally, I like to think in terms of (and while sipping a cup of) coffee!

(Now does it make sense why the indiscrete topology on a set $X$ is the coarsest/smallest/weakest topology, while the discrete topology is the finest/largest/strongest topology? )
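The coarser/finer relation is literally just set containment between the two collections of open sets, which we can sketch in a few lines of Python (a hypothetical toy example on a three-point set; the names `is_coarser`, `indiscrete`, etc. are my own):

```python
# Comparing two topologies on X = {1, 2, 3}: tau1 is coarser than tau2
# exactly when every open set of tau1 is also open in tau2.

def is_coarser(tau1, tau2):
    """tau1 ⊆ tau2 as collections of open sets."""
    return tau1 <= tau2  # subset check between sets of frozensets

X = frozenset({1, 2, 3})
indiscrete = {frozenset(), X}                       # coarsest possible
some_topology = {frozenset(), frozenset({1}), X}    # something in between
discrete = {frozenset(s) for s in
            [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]}

print(is_coarser(indiscrete, some_topology))  # True
print(is_coarser(some_topology, discrete))    # True
print(is_coarser(discrete, indiscrete))       # False: discrete is finest
```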

# English is Not Commutative

Here's another unspoken rule of mathematics: English doesn't always commute!

Word order is important:


# Necessary versus Sufficient?

In sum, the sufficient condition (a.k.a. the "if" direction) allows you to get what you want. That is, if you assume the sufficient condition, you'll obtain your desired conclusion. It's enough. It's sufficient.

On the other hand, the necessary condition (a.k.a. the "only if" direction) is the one you must assume in order to get what you want. In other words, if you don't have the necessary condition then you can't reach your desired conclusion. It is necessary.
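A quick sanity check of both directions, using a hypothetical example of my own: "$n$ is divisible by 4" is *sufficient* for "$n$ is even," while "$n$ is even" is *necessary* for "$n$ is divisible by 4."

```python
# Sufficient vs. necessary, checked on a sample of integers.

def divisible_by_4(n):
    return n % 4 == 0

def even(n):
    return n % 2 == 0

# Sufficient: whenever the condition holds, the conclusion follows.
assert all(even(n) for n in range(100) if divisible_by_4(n))

# Necessary: whenever the conclusion fails, the condition fails too.
assert all(not divisible_by_4(n) for n in range(100) if not even(n))

print("both checks pass")
```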

Here's a little graphic which summarizes this:


# "Up to Isomorphism"?

“Up to isomorphism” is a phrase that seems to get thrown around a lot without ever being explained. Simply put, we say two groups (or any other algebraic structures) are the same “up to isomorphism” if they’re isomorphic! In other words, they share the exact same structure and therefore they are essentially indistinguishable. Hence we consider them to be one and the same! But, you see, we mathematicians are very precise, and so we really don't like to use the word “same." Instead we prefer to say “same up to isomorphism.” Voila!
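For a concrete (toy) instance: $\mathbb{Z}/6$ and $\mathbb{Z}/2\times\mathbb{Z}/3$ look different on paper but are the same up to isomorphism. A short sketch verifying that $\varphi(x)=(x \bmod 2, x \bmod 3)$ is a bijective homomorphism:

```python
# Z/6 ≅ Z/2 × Z/3 via phi(x) = (x mod 2, x mod 3).

def phi(x):
    return (x % 2, x % 3)

Z6 = range(6)

# Bijective: the 6 elements have 6 distinct images.
assert len({phi(x) for x in Z6}) == 6

# Homomorphism: phi(x + y) = phi(x) + phi(y), componentwise mod 2 and mod 3.
for x in Z6:
    for y in Z6:
        fx, fy = phi(x), phi(y)
        assert phi((x + y) % 6) == ((fx[0] + fy[0]) % 2, (fx[1] + fy[1]) % 3)

print("Z/6 and Z/2 x Z/3 are the same up to isomorphism")
```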


# Four Flavors of Continuity

Here's a chart to help keep track of some of the different "flavors" of continuity in real analysis. Notice that the flavors vary according to $\delta$'s dependence on $\epsilon$, the point $x$, or the function $f$.

Explicitly, the definitions are given below. Let $X$ and $Y$ be metric spaces with metrics $d_X$ and $d_Y$, respectively.
• Suppose $f:X\to Y$ is a function and fix $x_0\in X$. Then $f$ is continuous at $x_0$ if for each $\epsilon >0$ there is a $\delta>0$ such that for each $x\in X$, $d_X(x,x_0)< \delta$ implies $d_Y(f(x),f(x_0))<\epsilon$.

• Suppose $f:X\to Y$ is a function. Then $f$ is uniformly continuous if for each $\epsilon>0$ there is a $\delta>0$ such that $d_X(x_1,x_2)< \delta$ implies $d_Y(f(x_1),f(x_2))< \epsilon$ for all $x_1,x_2\in X$.

• Let $\mathscr{F}$ be a collection of continuous functions $f:X\to Y$ and fix $x_0\in X$. Then $\mathscr{F}$ is equicontinuous at $x_0$ if for each $\epsilon>0$ there is a $\delta>0$ so that for each $x\in X$, $d_X(x,x_0)< \delta$ implies $d_Y(f(x),f(x_0))<\epsilon$ for all $f\in\mathscr{F}$.

• Let $\mathscr{F}$ be a collection of continuous functions $f:X\to Y$. Then $\mathscr{F}$ is uniformly equicontinuous if for each $\epsilon>0$ there is a $\delta>0$ so that $d_X(x_1,x_2)< \delta$ implies $d_Y(f(x_1),f(x_2))<\epsilon$ for all $x_1,x_2\in X$ and for all $f\in\mathscr{F}$.
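To see $\delta$'s dependence on the point $x$ numerically, here's a sketch (my own example, not from the chart) with $f(x)=x^2$ on $\mathbb{R}$: at each point $x_0$ the largest workable $\delta$ is $\sqrt{x_0^2+\epsilon}-|x_0|$, which shrinks to $0$ as $|x_0|$ grows, so $f$ is continuous at every point but not uniformly continuous on $\mathbb{R}$.

```python
import math

# For f(x) = x^2, solving (|x0| + delta)^2 - x0^2 = eps gives the largest
# delta that works for a given eps at the point x0.

def best_delta(x0, eps):
    return math.sqrt(x0 * x0 + eps) - abs(x0)

eps = 0.1
for x0 in [0, 1, 10, 100]:
    print(x0, best_delta(x0, eps))

# The printed deltas shrink toward 0 as |x0| grows: no single delta
# works for every point, so f is not uniformly continuous on R.
```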

# Why are Noetherian Rings Special?

In short, "Noetherian-ness" is a property which generalizes "PID-ness." As Keith Conrad so nicely puts it,

"The property of all ideals being singly generated is often not preserved under common ring-theoretic constructions (e.g. $\mathbb{Z}$ is a PID but $\mathbb{Z}[x]$ is not), but the property of all ideals being finitely generated does remain valid under many constructions of new rings from old rings. For example... every quadratic ring $\mathbb{Z}[\sqrt{d}]$ is Noetherian, even though many of these rings are not PIDs." (italics added)

So you see? We like rings with finitely generated ideals because it keeps the math (relatively) nice. For example, you could ask, "Given a Noetherian ring $R$, can I build a new ring such that it, too, is Noetherian?" Yep. You can construct the polynomial ring $R[x]$ and it will be Noetherian whenever $R$ is. For more on the Noetherian property, see here.

# Motivation for the Tensor Product

In general, if $F$ is a field and $V$ is a vector space over $F$, the tensor product answers the question "How can I define scalar multiplication on $V$ by some larger field which contains $F$?" (Of course this holds if we replace the word "field" by "ring" and consider the same scenario with modules.)

Concrete example: Suppose $V$ is the set of all $2\times 2$ matrices with entries in $F=\mathbb{R}$. In this case we know what "$F$-scalar multiplication" means: if $M\in V$ is a matrix and $c\in \mathbb{R}$, then the new matrix $cM$ makes perfect sense. But what if we want to multiply $M$ by complex scalars too? How can we make sense of something like $(3+4i)M$? That's precisely what the tensor product is for! We need to create a set of elements of the form $$\text{(complex number) "times" (matrix)}$$ so that the mathematics still makes sense. With a little massaging, $\text{"times"}$ will become $\otimes$ and this set of elements will turn out to be $\mathbb{C}\otimes_{\mathbb{R}}V$.
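The example above can be sketched numerically. Under the (standard) identification of $\mathbb{C}\otimes_{\mathbb{R}}M_2(\mathbb{R})$ with the $2\times 2$ complex matrices, the element $(3+4i)\otimes M$ just becomes the ordinary scalar product $(3+4i)M$:

```python
import numpy as np

# A real 2x2 matrix M in V = M_2(R), and a complex scalar z not in R.
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
z = 3 + 4j

# "(3 + 4i) ⊗ M" corresponds to the complex matrix z * M.
zM = z * M
print(zM)

# Bilinearity of the tensor product shows up as ordinary distributivity:
N = np.eye(2)
assert np.allclose(z * (M + N), z * M + z * N)
```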

Extending the idea further, we can also construct the tensor product between two vector spaces (and more generally, between two modules). And it's this construction which answers the more general question, "How can I define multiplication between two vectors?"


# One Unspoken Rule of Algebra

Here's an algebra tip! Whenever you're asked to prove $$A/B\cong C$$ where $A,B,C$ are groups, rings, fields, modules, etc., most likely the First Isomorphism Theorem is involved! See if you can define a homomorphism $\varphi$ from $A$ to $C$ such that $\ker\varphi=B$. If the map is onto, then by the First Isomorphism Theorem, you can conclude $A/\ker\varphi=A/B\cong C$. (And even if the map is not onto, you can still conclude $A/B\cong \varphi(A)$.) Voila!
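A toy check of the recipe (my own example): to see $\mathbb{Z}/6\mathbb{Z}\cong \mathbb{Z}_6$, take $\varphi:\mathbb{Z}\to\mathbb{Z}_6$, $\varphi(n)=n \bmod 6$, verify it's a surjective homomorphism with kernel $6\mathbb{Z}$, and invoke the theorem:

```python
# phi: Z -> Z/6, phi(n) = n mod 6.

def phi(n):
    return n % 6

sample = range(-30, 30)

# Homomorphism of additive groups:
assert all(phi(a + b) == (phi(a) + phi(b)) % 6 for a in sample for b in sample)

# Kernel is exactly the multiples of 6:
assert all((phi(n) == 0) == (n % 6 == 0) for n in sample)

# Onto: every residue class 0..5 is hit.
assert {phi(n) for n in sample} == set(range(6))

print("so Z/6Z ≅ Z_6 by the First Isomorphism Theorem")
```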

# What do Polygons and Galois Theory Have in Common?

Galois Theory is all about symmetry. So, perhaps not surprisingly, symmetries found among the roots of polynomials (via Galois theory) are closely related to symmetries of polygons in the plane (via geometry). In fact, the two are highly analogous! The book Galois Theory (2ed, p. 59) by Joseph Rotman unfolds this analogy for us, as the picture below illustrates:

(Not sure what Galois Theory is? Find out here!)


# Borel-Cantelli Lemma (Pictorially)

The Borel-Cantelli Lemma says that if $(X,\Sigma,\mu)$ is a measure space with $\mu(X)<\infty$ and if $\{E_n\}_{n=1}^\infty$ is a sequence of measurable sets such that $\sum_n\mu(E_n)< \infty$, then $$\mu\left(\bigcap_{n=1}^\infty \bigcup_{k=n}^\infty E_k\right)=\mu\left(\limsup_{n\to\infty} E_n \right)=0.$$ (For the record, I didn't understand this when I first saw it (or for a long time afterwards). My only thought was, "But what does that mean? In English??") To help our intuition, notice the conclusion is the same as saying $$\mu(\{x\in X: x\in E_n \text{ for infinitely many } n\})=0.$$ And this is another way of saying

almost every $x\in X$ lives in at most finitely many $E_n.$

So for ("almost") every fixed $x\in X$ we have a picture like this:

Note! For each of the "almost every" $x$'s, we get a different finite collection of $E_n$. For an example of this picture in action, see this post.
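Here's a small simulation (a hypothetical setup of my own): take $X=[0,1]$ with Lebesgue measure and $E_n=(0,1/n^2)$, so $\sum_n \mu(E_n)=\sum_n 1/n^2<\infty$. Borel-Cantelli predicts almost every $x$ lies in only finitely many $E_n$, and indeed any $x>0$ lies in $E_n$ only for $n<1/\sqrt{x}$, a finite set:

```python
import random

random.seed(0)
N = 10_000  # check membership in E_1, ..., E_N, where E_n = (0, 1/n^2)

for _ in range(5):
    x = random.random()  # a typical ("almost every") point of (0, 1)
    hits = [n for n in range(1, N + 1) if x < 1 / n**2]
    print(f"x = {x:.4f} lies in {len(hits)} of the first {N} sets E_n")

# Each sampled x lands in only a handful of the E_n -- and the finite
# collection of "hit" sets differs from point to point.
```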

# Operator Norm, Intuitively

If $X$ and $Y$ are normed vector spaces, a linear map $T:X\to Y$ is said to be bounded if $\|T\|< \infty$ where $$\|T\|=\sup_{\underset{x\neq 0}{x\in X}}\left\{\frac{|T(x)|}{|x|}\right\}.$$ (Note that $|T(x)|$ is the norm in $Y$ whereas $|x|$ is the norm in $X$.) One can show that this is equivalent to $$\|T\|=\sup_{x\in X}\{|T(x)|:|x|=1\}.$$ So intuitively (at least in two dimensions), we can think of $\|T\|$ this way:
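For a finite-dimensional sketch (my own example): when $T$ is given by a matrix and both norms are Euclidean, $\|T\|$ is the largest singular value, so we can compare NumPy's answer against a brute-force sup of $|T(x)|$ over sampled unit vectors:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# Operator norm = largest singular value (ord=2 for matrices in NumPy).
svd_norm = np.linalg.norm(A, 2)

# Brute force: sup of |A x| over many unit vectors x on the circle.
thetas = np.linspace(0, 2 * np.pi, 100_000)
unit_vectors = np.stack([np.cos(thetas), np.sin(thetas)])  # each |x| = 1
sampled_norm = np.linalg.norm(A @ unit_vectors, axis=0).max()

print(svd_norm, sampled_norm)  # the two agree to several decimal places
```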

# Need to Show a Map is Bijective?

Of course to prove a map $\phi$ is a bijection, you can show it's one-to-one and onto. But don't forget it also suffices to produce the inverse map $\phi^{-1}$! (This holds for general functions $f$ as well as, say, homomorphisms in algebra.) In some cases - I'm thinking group theory in particular now - it's easier to define $\phi^{-1}$ (and prove that it's a homomorphism) than to prove the injectivity and surjectivity of $\phi$. Just something to keep in mind.
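A tiny illustration (hypothetical example of my own): to see $\phi(n)=n+1$ is a bijection on $\mathbb{Z}$, just exhibit $\psi(n)=n-1$ and check both compositions are the identity:

```python
def phi(n):
    return n + 1

def psi(n):
    return n - 1

sample = range(-100, 100)
assert all(psi(phi(n)) == n for n in sample)  # psi ∘ phi = id
assert all(phi(psi(n)) == n for n in sample)  # phi ∘ psi = id
print("phi is a bijection: we exhibited its inverse")
```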

# Need to Prove Your Ring is NOT a UFD?

You're given a ring $R$ and are asked to show it's not a UFD. Where do you begin? One standard trick is to apply the Rational Roots Theorem. In its most general statement, one of the theorem's hypotheses is that your ring is a UFD. So, by way of contradiction, apply the theorem and see what you get!
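Another standard route (different from the Rational Roots trick above) is to exhibit two genuinely distinct factorizations directly. A numerical sketch in $\mathbb{Z}[\sqrt{-5}]$, where $6 = 2\cdot 3 = (1+\sqrt{-5})(1-\sqrt{-5})$ and the norm $N(a+b\sqrt{-5})=a^2+5b^2$ shows each factor is irreducible (no element has norm 2 or 3):

```python
# Elements of Z[sqrt(-5)] are pairs (a, b) representing a + b*sqrt(-5).

def norm(a, b):
    return a * a + 5 * b * b

# (1 + sqrt(-5)) * (1 - sqrt(-5)) = 1 - (-5) = 6:
u, v = (1, 1), (1, -1)
prod = (u[0] * v[0] - 5 * u[1] * v[1], u[0] * v[1] + u[1] * v[0])
assert prod == (6, 0)

# No element has norm 2 or 3 (only |a| <= 1, b = 0 could be small enough),
# so 2, 3, and 1 ± sqrt(-5) are all irreducible.
small_norms = {norm(a, b) for a in range(-4, 5) for b in range(-2, 3)}
assert 2 not in small_norms and 3 not in small_norms

print("6 = 2*3 = (1+√-5)(1-√-5): two distinct factorizations, so not a UFD")
```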

# Two Ways to be Small

In real analysis, there are two ways a measurable set $E$ can be small. Either

• the measure of $E$ is 0, OR
• $E$ is nowhere dense.

Intuitively, to say the measure of $E$ is $0$ means that the total "length" of the "stuff" in $E$ is zero (measure = a generalization of length). To say $E$ is nowhere dense means that $E$ exists, but there's not much to it. Much like a spider web, or an atom which is mostly empty space. (We've discussed nowhere density before.) So here's a question: Can a set be small in one sense but not the other? How about this:

Is it possible for a set to be nowhere dense and yet have POSITIVE measure?

The answer is YES! The Fat Cantor Set is a prime example. It is nowhere dense (for reasons we have mentioned before), and yet it has positive measure.
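The positive measure is easy to compute. In one standard construction (a sketch, one of several variants): from $[0,1]$, at step $n$ remove a middle open interval of length $1/4^n$ from each of the $2^{n-1}$ remaining pieces. The total length removed is $\sum_{n\geq 1} 2^{n-1}/4^n = 1/2$, so what's left has measure $1/2 > 0$:

```python
# Total length removed when building a Fat Cantor set:
# at step n, remove 2^(n-1) intervals, each of length 1/4^n.

removed = sum(2 ** (n - 1) / 4 ** n for n in range(1, 60))
remaining = 1 - removed

print(removed, remaining)  # each approaches 1/2

# The leftover set has measure 1/2 yet contains no interval at all:
# small in the topological sense, big in the measure-theoretic sense.
```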


# One Unspoken Rule of Measure Theory

Here's a measure theory trick: when asked to prove that a set of points in $\mathbb{R}$ (or some measure space $X$) has a certain property, try to show that the set of points which does NOT have that property has measure 0! This technique is used quite often.

# What's a Transitive Group Action?

Let a group $G$ act on a set $X$. The action is said to be transitive if for any two $x,y\in X$ there is a $g\in G$ such that $g\cdot x=y$. This is equivalent to saying there is an $x\in X$ such that $\text{orb}(x)=X$, i.e. there is exactly one orbit. And all this is just the fancy way of saying that $G$ shuffles all the elements of $X$ among themselves. In other words,

"What happens in $X$ stays in $X$."

Illustration: Imagine that $X$ is a little box of marbles resting on the floor of a room $Y$, and let $G$ act on $Y$ so that all the elements inside the room (including the marbles in the box $X$!) get tossed around. (So $G$ is like an earthquake, or something....) If the action of $G$ is transitive, then none of the marbles in the little box fell outside the box and onto the floor ($Y$). Even though the elements of $X$ were shuffled, they remained within $X$.
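A small computational check (a toy example of my own): the cyclic group $G=\langle r\rangle$ generated by the rotation $r(x)=x+1 \pmod 5$ acts on $X=\{0,1,2,3,4\}$. The orbit of any point is all of $X$, so there's exactly one orbit and the action is transitive:

```python
X = set(range(5))

def r(x):
    """The generator of G: rotate by one step mod 5."""
    return (x + 1) % 5

def orbit(x):
    """Repeatedly apply r until we cycle back; collect everything we visit."""
    seen = {x}
    y = r(x)
    while y not in seen:
        seen.add(y)
        y = r(y)
    return seen

assert all(orbit(x) == X for x in X)  # one orbit => the action is transitive
print("every orbit is all of X: the marbles stay in the box")
```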

# Completing a Metric Space, Intuitively

An incomplete metric space is very much like a golf course: it has a lot of missing points! (A golf course has exactly eighteen "missing points" a.k.a. holes.) The process of completing a metric space is akin to filling the holes ( = limits of Cauchy sequences) in a golf course ( = the metric space) with dirt ( = equivalence classes of Cauchy sequences).

An $R$-module $M$ is like the big brother to a group action! The two notions are analogous. In group theory, we have a group $G$ acting on a set $A$; in module theory, we have a ring $R$ acting on an abelian group $M$ (as evidenced by the module axioms).