When you do a lot of algebra you find yourself using kernels all the time, sometimes without even realising it. So it makes sense to understand them early on in any study of abstract algebra. Let’s talk about kernels.
The easiest way to think about a kernel is to imagine that it is a subset of some set that vanishes when you try to move it. By vanishes I mean it and all its elements turn into zeroes. They’re like the ghosts of sets: now you see them, now you don’t.
The second thing to realise is that in order to have a kernel you need to associate it with the way in which you move the subset. You can move the elements in different ways, each way possibly having a different kernel; in other words, different elements turn into zeroes depending on the way in which they’re moved.
Let’s put some authority on what we’ve just said. To properly define a kernel you need 1) a set and 2) a way to move its elements, or in other words, you need a map $f$. Once you have these two you just apply the map to each element in the set, giving you $f(a)$, and see where it goes. Let’s say that the map takes elements from $A$ and maps them to elements of $B$, thus $f: A \to B$. Pick an element $a$ inside $A$ and see where it gets mapped to, so you are looking for $f(a)$. Let’s suppose $a$ gets mapped to zero (we write $f(a) = 0$). Then we can say that $a$ belongs to the kernel of the map $f$. See how we’ve said that $a$ belongs to the kernel of the map, thus reinforcing the concept that the kernel, although a subset of $A$, is a property of the map.
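To make this concrete, here is a minimal sketch in Python (the helper name `kernel`, the toy map and the choice of domain are all hypothetical, purely for illustration): the kernel is nothing more than the collection of elements that the map sends to the zero element.

```python
def kernel(f, domain, zero=0):
    """Collect every element of the domain that the map f sends to the zero element."""
    return {a for a in domain if f(a) == zero}

# A toy map on a small set of integers: f(a) = a * (a - 3).
# Inside {0, 1, 2, 3, 4} only 0 and 3 get sent to 0, so the kernel is {0, 3}.
print(kernel(lambda a: a * (a - 3), {0, 1, 2, 3, 4}))  # {0, 3}
```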
Say we keep doing this and we work out that $a_1, a_2, a_3, a_4$ and $a_5$ are the only five elements of $A$ that get mapped to $0$ by the map $f$. We then say that the subset, call it $K$, consists of $a_1, a_2, a_3, a_4$ and $a_5$. In other words, $K$ is a subset of $A$ (so we write $K \subseteq A$) and it contains $a_1, a_2, a_3, a_4, a_5$. Our little set of five elements here is doomed because they all get mapped to $0$ as soon as they come in contact with $f$.
An important question here is: what exactly is this “zero” that the elements of the kernel get mapped to? By “zero” we mean that particular element of the set which leaves every other element unchanged when you combine the two under the set’s operation (we’re touching on the definition of the identity element of a group here). Consider the set of integers, which you should know as $\mathbb{Z}$. In this special case the zero element happens to be called ‘zero’, but this doesn’t have to be the case. For example, the zero element of a vector space is the zero vector. A map $T$ between two vector spaces, say $V$ and $W$, may potentially map a subset of vectors to the zero vector and not the zero scalar “0”. For instance, the vector $v$ could get mapped to $\mathbf{0}$ (note the bold zero to indicate the fact that this is a zero vector and not a zero scalar). We then collect up all the vectors that do this and call that collection $\ker T$; it consists of all vectors in $V$ such that $T(v)$ equals $\mathbf{0}$. In symbols this statement looks like this:

$$\ker T = \{\, v \in V : T(v) = \mathbf{0} \,\}.$$
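For a concrete (entirely hypothetical) linear map you can compute this set directly. Here is a small sketch using sympy, where the map $T: \mathbb{R}^3 \to \mathbb{R}^2$ is given by the matrix below; `nullspace()` returns a basis for exactly the set $\{ v \in V : T(v) = \mathbf{0} \}$.

```python
import sympy as sp

# A hypothetical linear map T: R^3 -> R^2, represented as a 2x3 matrix.
T = sp.Matrix([[1, 2, 3],
               [2, 4, 6]])

# The kernel (null space) of T: every vector v with T*v = 0.
basis = T.nullspace()
print(basis)          # two basis vectors, so ker T is a plane inside R^3

# Check that a basis vector really is sent to the zero *vector*.
print(T * basis[0])   # Matrix([[0], [0]])
```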
Formally speaking, we say that the kernel of a map from a set $A$ to another set $B$ is a subset of $A$: it is the preimage of the zero element of $B$. Let’s look at a simple example. Suppose we start with the set of integers $\mathbb{Z}$ and a map $f$. The map $f$ takes an integer and subtracts 1, that’s all it does. What is the kernel of $f$? Well, this is easy as pie. First ask yourself (and this is often overlooked): what is the zero element of the integers $\mathbb{Z}$? Luckily for us the zero element is 0. Now, which integers get mapped to 0 by $f$? In other words, which integers turn into zero when you subtract 1 from them? There is only one and it is 1. Thus $f(1) = 1 - 1 = 0$ and the kernel of the map is the one-element subset $\{1\}$.
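A quick sanity check of this example in Python (the finite search window is an arbitrary stand-in for $\mathbb{Z}$, which we obviously can’t scan in full):

```python
# The map f(n) = n - 1 on the integers; its kernel is whatever gets sent to 0.
f = lambda n: n - 1

# Scan a finite window of integers as an illustration.
kernel_of_f = {n for n in range(-100, 101) if f(n) == 0}
print(kernel_of_f)  # {1}
```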
Kernels of a Differential Operator
Do you remember solving homogeneous differential equations? You know, the ones where you’ve got a bunch of derivatives of $x$ on the left and a zero on the right, something like this:

$$a_n \frac{d^n x}{dt^n} + \dots + a_1 \frac{dx}{dt} + a_0 x = 0.$$
When you try to solve such a differential equation you are trying to find the function $x(t)$ that satisfies the equation, in other words the function that the left-hand side turns into zero. This is the same problem! The only differences are trivial: the map is the differential operator in disguise, $L = a_n \frac{d^n}{dt^n} + \dots + a_1 \frac{d}{dt} + a_0$, and the zero is the function that is zero everywhere (which looks just like the ordinary scalar 0 on the right-hand side). Pretty cool, huh? Who knew that solving a homogeneous differential equation is precisely the same thing as computing the kernel of some differential operator. In fact, a solution of a homogeneous differential equation (as there can be many) is always an element of the kernel of the corresponding differential operator.
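As a small illustration (a sketch using sympy, with an arbitrarily chosen operator $L = \frac{d^2}{dt^2} - 3\frac{d}{dt} + 2$), asking a computer algebra system to solve $Lx = 0$ is literally asking it for the kernel of $L$:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')

# The homogeneous equation L x = 0 for L = d^2/dt^2 - 3 d/dt + 2.
ode = sp.Eq(x(t).diff(t, 2) - 3*x(t).diff(t) + 2*x(t), 0)

# The general solution C1*exp(t) + C2*exp(2*t) describes every element of ker L.
print(sp.dsolve(ode, x(t)))
```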
Kernels of Group Homomorphisms
Can we apply the same line of thinking to groups? When we talk about groups we no longer talk of the zero element; instead it becomes the identity element $e$ under the group operation $\star$ (the star could be addition, multiplication or some other binary operation). Similarly, there are group homomorphisms $h$ between groups, and these replace the maps that we have been talking about up to this point. The kernel of a group homomorphism $h$ between groups $G$ and $H$ is simply the preimage of the subset $\{e_H\}$ consisting of the identity element of $H$; that is, the subgroup of $G$ consisting of all those elements of $G$ that get mapped by $h$ to the identity element $e_H$. Furthermore, since a group homomorphism (by its very definition) preserves the group structure, in particular the identity element, the identity $e_G$ of $G$ must always belong to the kernel of $h$. And from this, if the preimage of $e_H$ is exactly $\{e_G\}$ then the group homomorphism is also injective; in other words, a group homomorphism $h$ from $G$ to $H$ is injective if and only if the only element of $G$ that gets mapped to the identity in $H$ is the identity.
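Here is a tiny sketch of this with groups small enough to enumerate (the choice of $\mathbb{Z}_6$ and $\mathbb{Z}_3$ under addition is just my example): the homomorphism $h(n) = n \bmod 3$ has kernel $\{0, 3\}$, which is bigger than just the identity, so $h$ cannot be injective.

```python
# Z_6 and Z_3 under addition mod 6 and mod 3; h(n) = n mod 3 is a homomorphism.
G = range(6)
h = lambda n: n % 3

# The kernel: everything in Z_6 that lands on the identity 0 of Z_3.
ker_h = {g for g in G if h(g) == 0}
print(ker_h)          # {0, 3}

# The kernel is bigger than {0}, so h is not injective; indeed h(1) == h(4).
print(h(1) == h(4))   # True
```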
One last thing about groups. It turns out that when you have a group homomorphism between groups, the kernel of the homomorphism is not just a subgroup but it’s always a normal subgroup! This means that if you take any element of the group, say $g$, and conjugate any element of the kernel $K$, say $k$, by it, then, and this is the amazing bit, $g \star k \star g^{-1}$ always lands back inside $K$! Being closed under conjugation like this is a special property, and it’s usually very hard to pick out by hand the elements of a group that form a subgroup closed under conjugation. But this way, all we did was define a group homomorphism and immediately the kernel picks them out for you!
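You can verify the normality claim by brute force on a small group. Here is a sketch using $S_3$ (my choice of example) and the sign homomorphism $\mathrm{sgn}: S_3 \to \{+1, -1\}$, whose kernel is the alternating group $A_3$:

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p(q(i)), for permutations stored as tuples of images."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

def sign(p):
    """The sign homomorphism S_n -> {+1, -1}, computed by counting inversions."""
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return 1 if inversions % 2 == 0 else -1

S3 = list(permutations(range(3)))

# The kernel of sgn: the even permutations, i.e. the alternating group A_3.
K = {p for p in S3 if sign(p) == 1}
print(K)

# Normality: conjugating any kernel element by any group element stays inside K.
print(all(compose(compose(g, k), inverse(g)) in K for g in S3 for k in K))  # True
```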