Today I went to a talk by David Nadler titled “Introduction to Stacks”. The idea was to motivate the definition of a stack and give some examples, without actually defining everything rigorously. Despite having read a fair bit about stacks and gone to a couple of talks, I found his presentation quite enlightening for the following reasons:
- It struck a good balance between not drowning the audience in a pool of functorial abstraction, and providing enough intuition to enable one to work out the details if sufficiently motivated.
- It focused on the bare bones essence of the subject, which made it clear that stacks are a general framework that can be applied to many parts of mathematics.
Philosophy
The overarching pedagogical metaphor was based on a typical introduction to manifolds. When introducing manifolds, it is necessary to first explain the local picture, which of course is $\mathbb{R}^n$. In addition to being the atom from which all other manifolds are constructed, it is a manifold worthy of study in its own right. After getting familiar with this example, the next step is to explain how these local pieces can be stitched together to form the atlas of a manifold.
Finally, we can step back and notice that one big advantage of manifolds is that they allow us to talk about spaces with a specific local structure, without having to actually provide a collection of local objects and the manual for gluing them together. This has obvious practical advantages, but the conceptual benefits are arguably greater still. Take for example the sphere. There are many ways to build it from a collection of linear spaces, but intuitively, these constructions should be different facets of the same object. That is, the notion of a sphere should not depend on the specific way in which we choose to construct it. Manifolds such as the sphere have the important property that they exist as topological spaces independently of their construction, and thus allow us to rigorously compare the various ways in which they can be assembled.
Each of these steps formally requires many layers of details, but it’s possible to attain an intuitive understanding, and even a working knowledge, with a more parsimonious exposition. For example, it’s easy to get a feeling for $\mathbb{R}^n$ without rigorously defining the real line and products of sets. Similarly, it’s possible to explain the gluing of local data and the ambient topological space without formally defining topological spaces, homeomorphisms and diffeomorphisms.
We’ll take a similar approach in our introduction to stacks. This time, the local picture comes in the form of a groupoid, so we’ll spend some time getting an intuitive feel for these objects and explicitly working out a couple of key examples. A stack can be constructed from a collection of groupoids, but exists abstractly as a certain type of functor, which plays the role of the topological space in the manifold analogy.
Groupoids
Enough with the philosophy – let’s start by understanding what groupoids are and what they’re used for. In order to describe a groupoid, we first have to decide which category we’re working in. Let’s denote this category by $\mathcal{C}$. A groupoid is a pair of objects $X_0$ and $X_1$ in $\mathcal{C}$, and a pair of morphisms:

$$s, t : X_1 \longrightarrow X_0 .$$

We think of the object $X_0$ as parameterizing a collection of elements, and of the object $X_1$ as parameterizing a collection of morphisms between these elements. The map $s$ intuitively takes a morphism to the element that is its source, and the map $t$ takes a morphism to its target.
We would of course like this data to actually behave like a collection of elements and morphisms, so we introduce a map $m : X_1 \times_{X_0} X_1 \to X_1$ that tells us how to take two morphisms and produce a third. The fiber product is taken with the left map being $t$ and the right map being $s$. This ensures that the target of the first morphism is equal to the source of the second morphism, which is a natural assumption to make when composing maps.
In addition, we would like all of our morphisms to have inverses, which we formally enforce by introducing a third map $i : X_1 \to X_1$ which intuitively sends each morphism to its inverse. Finally, we have a map $e : X_0 \to X_1$ which is supposed to send each element of $X_0$ to an identity map. Of course, we must formally write down conditions involving these maps that guarantee that they behave as we expect, but this is an easy and boring exercise in drawing commutative diagrams.
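For the curious, here is one standard way these conditions come out. With the notation above, and writing $m(f,g)$ for the composite “first $f$, then $g$”, they read

$$s(m(f,g)) = s(f), \qquad t(m(f,g)) = t(g), \qquad s(e(x)) = t(e(x)) = x,$$

$$m(m(f,g),h) = m(f,m(g,h)), \qquad m(e(s(f)),f) = f = m(f,e(t(f))),$$

$$s(i(f)) = t(f), \qquad t(i(f)) = s(f), \qquad m(f,i(f)) = e(s(f)), \qquad m(i(f),f) = e(t(f)).$$

The equations are written element-wise for readability; in a general category $\mathcal{C}$ each one should be read as the assertion that the corresponding diagram commutes.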
Following the analogy to atlases of manifolds, we would like to think of elements in $X_0$ as being components of an atlas, and of $X_1$ as the gluing data of the atlas.
Let’s see some examples of this definition in action.
- Groups: The simplest category is $\mathrm{Set}$, so this is the category that we’ll use in the first example. Let $G$ be a group, which is in particular an object in $\mathrm{Set}$. To represent $G$ as a groupoid, we can set $X_0 = \mathrm{pt}$ (a single point) and $X_1 = G$. In this case, the morphisms $s$ and $t$ are the unique maps from $G$ to a point, so they are completely uninteresting. On the other hand, the composition map $m : G \times G \to G$ is the group multiplication, the inverse map $i$ sends an element of $G$ to its inverse, and the map $e$ sends the point to the identity element of $G$.
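Written out explicitly, and noting that $X_1 \times_{X_0} X_1 = G \times G$ because everything maps to the single point, the structure maps are

$$s(g) = t(g) = \mathrm{pt}, \qquad m(g,h) = gh, \qquad i(g) = g^{-1}, \qquad e(\mathrm{pt}) = 1_G,$$

where the choice between $gh$ and $hg$ in the composition is just the usual order-of-composition convention.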
- Group Actions: We now move on to a similar, but less trivial, example which will hopefully be more illuminating as well. As before, we’ll stay in the category of sets. Let $G$ be a group and $X$ a set that it acts on. We can use this group action to construct a groupoid in the following manner. First of all, we set $X_0 = X$ and $X_1 = G \times X$. Unlike the previous example, the morphisms $s$ and $t$ now play an important role. We define $s$ to be the projection onto $X$ and $t$ to be the map sending $(g, x)$ to $g \cdot x$. It isn’t hard to work out the correct definitions of $m$, $i$, and $e$.
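For those who want to check their answer, here is one consistent choice, writing the action as $g \cdot x$. A composable pair of morphisms looks like $\big( (g,x), (h, g\cdot x) \big)$, and we can take

$$m\big((g,x),(h, g\cdot x)\big) = (hg,\, x), \qquad i(g,x) = (g^{-1},\, g\cdot x), \qquad e(x) = (1_G,\, x).$$

The resulting groupoid is often called the action groupoid (or transformation groupoid) of $G$ acting on $X$.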
- The Projective Line: One reason that we like groupoids is that they provide a framework that formalizes what it means to glue together elements of a category along identifying maps. In this example, we’ll work in whichever category you like to think of the complex projective line $\mathbb{P}^1$ in. For example, we could work in the category of schemes. As we know, one way to define the projective line is to start with two copies of the affine line $\mathbb{A}^1$, and glue them together. Let $U_0$ be the copy containing $0$, and let $U_\infty$ be the one containing infinity. Furthermore, we define $U_0^\times = U_0 \setminus \{0\}$ and $U_\infty^\times = U_\infty \setminus \{\infty\}$. Following the usual construction, we glue $U_0$ and $U_\infty$ by identifying $U_0^\times$ with $U_\infty^\times$ via the map $z \mapsto 1/z$. In other words, we have a map $\varphi : U_0^\times \to U_\infty^\times$ and its inverse $\varphi^{-1} : U_\infty^\times \to U_0^\times$.
We will now define a groupoid which captures the data in this gluing process. For starters, the elements that we would like to map to one another are the points of $U_0$ and $U_\infty$, so we define the scheme $X_0$ to be the disjoint union of these two spaces:

$$X_0 = U_0 \sqcup U_\infty .$$

We now need to find a scheme $X_1$ that parameterizes our gluing maps. We claim that the following scheme does the job:

$$X_1 = U_0 \sqcup U_\infty \sqcup U_0^\times \sqcup U_\infty^\times .$$

Indeed, the $U_0$ component represents the identity maps for the points in $U_0$, as we would clearly like to identify points with themselves. $U_\infty$ plays the same role. The points of $U_0^\times$ represent the maps from the points in $U_0^\times \subset U_0$ to the corresponding points in $U_\infty^\times \subset U_\infty$. In other words, this part of $X_1$ tells us how to identify $U_0$ with $U_\infty$ along their overlap. The component $U_\infty^\times$ plays the same role in the other direction. It is easy to formalize this by defining the source and target maps. For example, $s$ will send a point in $U_0^\times$ to the corresponding point in $U_0 \subset X_0$, and $t$ will send a point $z \in U_0^\times$ to $1/z \in U_\infty \subset X_0$.
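Spelling this out on each of the four components of $X_1$ (with $z$ a coordinate on $U_0$ and $w$ the corresponding coordinate on $U_\infty$), one possible choice of source and target maps is

$$s\big|_{U_0} = t\big|_{U_0} = \big(U_0 \hookrightarrow X_0\big), \qquad s\big|_{U_\infty} = t\big|_{U_\infty} = \big(U_\infty \hookrightarrow X_0\big),$$

$$s\big|_{U_0^\times}(z) = z \in U_0, \quad t\big|_{U_0^\times}(z) = 1/z \in U_\infty, \qquad s\big|_{U_\infty^\times}(w) = w \in U_\infty, \quad t\big|_{U_\infty^\times}(w) = 1/w \in U_0.$$

The composition, inverse and identity maps are worked out in the same spirit.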
While reading this example, you may have been bothered by the following objection. Instead of fooling around with affine lines and gluing maps, why couldn’t we have simply taken both $X_0$ and $X_1$ to be $\mathbb{P}^1$ itself, which is surely a perfectly legitimate scheme, and set the source and target maps to be the identity morphisms? The answer is that this indeed would have been a perfectly good groupoid, and in general, there is typically more than one groupoid that intuitively corresponds to an object that we are interested in. The quest to tie these various groupoids into a slick coordinate-free package is what will lead us to the notion of a stack.
Furthermore, this example showcases the analogy between groupoids and atlases of manifolds.
Stacks
Paraphrasing David Nadler, a groupoid is a stack together with a choice of an atlas.
As in the example of the projective line, there may be multiple ways of constructing a groupoid that encodes a structure of interest. How can we unify these constructions into a single object? In other words, what are the abstract properties of groupoids which allow us to declare when two of them are the “same”?
We turn to the land of manifolds for inspiration. As Grothendieck realized, instead of studying a manifold directly, it is sometimes convenient to study the functor of points that it induces on the category of topological spaces. In other words, we may associate to an atlas $\mathcal{A}$ of a manifold $M$ the functor $h_{\mathcal{A}} : \mathrm{Top}^{\mathrm{op}} \to \mathrm{Set}$ which sends a topological space $T$ to $h_{\mathcal{A}}(T)$. This last set is the set of maps from $T$ to $M$ that factor through the atlas. Explicitly, to get such a map we cover $T$ by a collection of open subsets and map them to the components of the atlas such that the obvious compatibility conditions are satisfied. We thus end up with a map of topological spaces from $T$ to the manifold.
The advantage is that different choices of an atlas on $M$ will still induce the same functor of points. In other words, they will allow the same maps from a given topological space. This extra level of abstraction provides us with an intrinsic object that is independent of our particular construction of the manifold.
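Said symbolically, and modulo the sheafification issue discussed in the remark below, for any atlas $\mathcal{A}$ of $M$ we simply recover the usual functor of points:

$$h_{\mathcal{A}}(T) \;\cong\; \mathrm{Hom}_{\mathrm{Top}}(T, M) \quad \text{for every topological space } T,$$

so that any two atlases $\mathcal{A}$ and $\mathcal{A}'$ of the same manifold give isomorphic functors $h_{\mathcal{A}} \cong h_{\mathcal{A}'}$.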
In fact, we could reverse this process: isolate a list of properties shared by functors of the form $h_{\mathcal{A}}$ for an atlas $\mathcal{A}$, and define a manifold to be a functor from $\mathrm{Top}^{\mathrm{op}}$ to $\mathrm{Set}$ which satisfies these properties. Of course, isolating these properties may be difficult, but the payoff would be a completely coordinate-free definition of a manifold in terms of the ways in which topological spaces can map to it.
The idea in the case of groupoids is similar. As before, we first must fix a category $\mathcal{C}$. Given a groupoid $\mathcal{X} = (X_0, X_1)$, we can define a functor $h_{\mathcal{X}} : \mathcal{C}^{\mathrm{op}} \to \mathrm{Grpd}$ where, somewhat confusingly, $\mathrm{Grpd}$ is the category of groupoids taken in the category of sets (otherwise just known as “groupoids”), not in the category $\mathcal{C}$.
The functor $h_{\mathcal{X}}$ associates to an object $T$ the groupoid with elements $\mathrm{Hom}_{\mathcal{C}}(T, X_0)$ and morphisms $\mathrm{Hom}_{\mathcal{C}}(T, X_1)$, and where the maps $s$, $t$, etc. are the naturally induced maps. Notice the similarity with the manifold case. In order to make the comparison more evident, consider the groupoid in the example of the projective line. In that case, $X_0 = U_0 \sqcup U_\infty$, and so maps to $X_0$ can be viewed as maps to our chosen cover. The other groupoid conditions tell us about the compatibility required for this to induce a map to $\mathbb{P}^1$.
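Concretely, and speaking a bit loosely, a map $T \to X_0 = U_0 \sqcup U_\infty$ amounts to a decomposition of $T$ into two disjoint open pieces together with a map from each piece to the corresponding chart,

$$T = T_0 \sqcup T_\infty, \qquad f_0 : T_0 \to U_0, \qquad f_\infty : T_\infty \to U_\infty,$$

and a morphism in the groupoid $h_{\mathcal{X}}(T)$ between two such objects identifies them piece by piece, either directly or through the gluing map $z \mapsto 1/z$.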
Remark:
In this construction, we’ve swept one detail under the rug. Notice that if we take, for example, our object $T$ to be $\mathbb{P}^1$ itself, then the identity map from $\mathbb{P}^1$ to $\mathbb{P}^1$ does not factor through $X_0 = U_0 \sqcup U_\infty$. So to get all of the maps we need, we have to “sheafify” the functor $h_{\mathcal{X}}$. I.e., we have to add elements that appear if we cover our object $T$ by a sufficiently fine cover. This is entirely analogous to what happens in the manifold case.
Even our first (and somewhat trivial) example of a groupoid gives us an interesting functor. Recall that in that example, we had $X_0 = \mathrm{pt}$ and $X_1 = G$. In that situation, $\mathrm{Hom}(T, X_0)$ isn’t interesting, but $\mathrm{Hom}(T, X_1) = \mathrm{Hom}(T, G)$, together with the composition, inverse and identity maps, gives us the data of sections of a $G$-bundle on $T$.
As with manifolds, we would now like to define a stack to be a functor from $\mathcal{C}^{\mathrm{op}}$ to $\mathrm{Grpd}$ which is “similar” to a functor of the form $h_{\mathcal{X}}$ for some groupoid $\mathcal{X}$. However, given the nature of this exposition, we won’t go into the details of the meaning of the word “similar”. On the other hand, I do think that it is instructive to try to get a feel for what such a functor should look like. For starters, the functor of points of an actual object in $\mathcal{C}$ is of this form. More generally, given an equivalence relation on an object, it is easy to see that we can use it to construct a functor that is of the form $h_{\mathcal{X}}$ for a groupoid $\mathcal{X}$, even if the quotient may not exist as an object in our category. So whatever “similar” means, stacks will at the very least allow us to describe quotients of objects by various relations. This is extremely useful in categories such as schemes, where constructing quotients is typically hard, or even impossible.
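To make the last point concrete, here is a minimal sketch, assuming we work in a category where the relevant products exist ($\mathrm{Set}$, say, or schemes). If $R \subset Y \times Y$ is an equivalence relation on an object $Y$, we obtain a groupoid by taking

$$X_0 = Y, \qquad X_1 = R, \qquad s = \mathrm{pr}_1, \qquad t = \mathrm{pr}_2,$$

$$m\big((x,y),(y,z)\big) = (x,z), \qquad i(x,y) = (y,x), \qquad e(x) = (x,x).$$

Reflexivity, symmetry and transitivity of $R$ are exactly what make $e$, $i$ and $m$ well defined, and the associated functor plays the role of the quotient $Y/R$, whether or not that quotient actually exists as an object of our category.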