Introduction to Stacks

Today I went to a talk by David Nadler which was titled “Introduction to Stacks”. The idea was to motivate the definition of a stack and give some examples, without actually defining everything rigorously. Despite having read a fair bit about stacks and going to a couple of talks, I found his presentation quite enlightening for the following reasons:

  • It struck a good balance between not drowning the audience in a pool of functorial abstraction, and providing enough intuition to enable one to work out the details if sufficiently motivated.
  • It focused on the bare bones essence of the subject, which made it clear that stacks are a general framework that can be applied to many parts of mathematics.


The overarching pedagogical metaphor was based on a typical introduction to manifolds. When introducing manifolds, it is necessary to first explain the local picture, which of course is \mathbb{R}^n. In addition to being the atom from which all other manifolds are constructed, it is a manifold worthy of study in its own right. After getting familiar with this example, the next step is to explain how these local pieces can be stitched together to form the atlas of a manifold.

Finally, we can step back and notice that one big advantage of manifolds is that they allow us to talk about spaces with a specific local structure, without having to actually provide a collection of local objects and the manual for gluing them together. This has obvious practical advantages, but the conceptual benefits are arguably greater still. Take for example the sphere. There are many ways to build it from a collection of linear spaces, but intuitively, these constructions should be different facets of the same object. That is, the notion of a sphere should not depend on the specific way in which we choose to construct it. Manifolds such as the sphere have the important property that they exist as topological spaces independently of their construction, and thus allow us to rigorously compare the various ways in which they can be assembled.

Each of these steps formally requires many layers of details, but it’s possible to attain an intuitive understanding, and even a working knowledge, with a more parsimonious exposition. For example, it’s easy to get a feeling for \mathbb{R}^2 without rigorously defining the real line and products of sets. Similarly, it’s possible to explain the gluing of local data and the ambient topological space without formally defining topological spaces, homeomorphisms and diffeomorphisms.

We’ll take a similar approach in our introduction to stacks. This time, the local picture comes in the form of a groupoid, so we’ll spend some time getting an intuitive feel for these objects and explicitly working out a couple of key examples. A stack can be constructed from a collection of groupoids, but exists abstractly as a certain type of functor which plays the role of the topological space in the manifold analogy.


Enough with the philosophy – let’s start by understanding what groupoids are and what they’re used for. In order to describe a groupoid, we first have to decide which category we’re working in. Let’s denote this category by \mathcal{C}. A groupoid is a pair of objects X_0 and X_1 in ob(\mathcal{C}), and a pair of morphisms:

X_0\xrightarrow{s}X_1,\qquad X_0\xrightarrow{t}X_1

We think of the object X_1 as parameterizing a collection of elements, and of the object X_0 as parameterizing a collection of morphisms between these elements. The map s intuitively takes a morphism to the element that is its source, and the map t takes a morphism to its target.

We would of course like this data to actually behave like a collection of elements and morphisms, so we introduce a map

X_0\times_{X_1}X_0\xrightarrow{c}X_0

that tells us how to take two morphisms and produce a third. The fiber product X_0\times_{X_1}X_0 is taken with the left map being X_0\xrightarrow{s}X_1 and the right map being X_0\xrightarrow{t}X_1. This ensures that the source of the first morphism is equal to the target of the second, which is the natural condition to impose when composing maps.

In addition, we would like all of our morphisms to have inverses, which we formally enforce by introducing a third map

X_0\xrightarrow{i}X_0

which intuitively sends each morphism to its inverse. Finally, we have a map

X_1\xrightarrow{e}X_0

which is supposed to send each element in X_1 to an identity map. Of course, we must formally write down conditions involving these maps that guarantee that they behave as we expect, but this is an easy and boring exercise in drawing commutative diagrams.
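To make the diagrams concrete, here is a minimal sketch in Python of a finite groupoid in \textbf{Set}, following the post’s conventions (X_1 parameterizes elements, X_0 morphisms). The names Groupoid and check_axioms are my own, not standard; the demo instantiates the first example below, a group as a groupoid with X_1=\{pt\}:

```python
from itertools import product

class Groupoid:
    """A finite groupoid: X1 = elements, X0 = morphisms, with maps s, t, c, i, e."""
    def __init__(self, X1, X0, s, t, c, i, e):
        self.X1, self.X0 = X1, X0
        self.s, self.t, self.c, self.i, self.e = s, t, c, i, e

    def check_axioms(self):
        # Composition is only defined on the fiber product X0 x_{X1} X0,
        # i.e. on pairs (f, g) with s(f) = t(g).
        composable = [(f, g) for f, g in product(self.X0, repeat=2)
                      if self.s[f] == self.t[g]]
        for f, g in composable:
            fg = self.c[(f, g)]
            # c must respect sources and targets.
            assert self.s[fg] == self.s[g] and self.t[fg] == self.t[f]
        for x in self.X1:
            # e(x) must act as an identity under composition.
            for f in self.X0:
                if self.s[f] == x:
                    assert self.c[(f, self.e[x])] == f
                if self.t[f] == x:
                    assert self.c[(self.e[x], f)] == f
        for f in self.X0:
            # i(f) must be a two-sided inverse of f.
            assert self.c[(f, self.i[f])] == self.e[self.t[f]]
            assert self.c[(self.i[f], f)] == self.e[self.s[f]]
        return True

# A group as a groupoid: X0 = G = Z/3, X1 = a single point.
G = [0, 1, 2]
pt = "pt"
BZ3 = Groupoid(
    X1=[pt], X0=G,
    s={g: pt for g in G}, t={g: pt for g in G},
    c={(g, h): (g + h) % 3 for g in G for h in G},  # group multiplication
    i={g: (-g) % 3 for g in G},                     # group inverse
    e={pt: 0},                                      # identity element
)
assert BZ3.check_axioms()
```

The asserts in check_axioms are exactly the “easy and boring” commutative diagrams, spelled out pointwise.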

Following the analogy to atlases of manifolds, we would like to think of elements in X_1 as being components of an atlas, and of X_0 as the gluing data of the atlas.

Let’s see some examples of this definition in action.

  • Groups The simplest category is \textbf{Set}, so this is the category that we’ll use in the first example. Let G be a group, which is in particular an object in \textbf{Set}. To represent G as a groupoid, we can set X_0=G and X_1=\{pt\}. In this case, the morphisms s and t are the unique maps from G to a point so they are completely uninteresting. On the other hand, the composition map G\times G\xrightarrow{c}G is the group multiplication, the inverse map G\xrightarrow{i}G sends an element of G to its inverse, and the map \{pt\}\xrightarrow{e}G sends the point to the identity element of G.
  • Group Actions We now move on to a similar, but less trivial example which will hopefully be more illuminating as well. As before, we’ll stay in the category of sets. Let G be a group and X a set that it acts on. We can use this group action to construct a groupoid in the following manner. First of all, we set X_0=X\times G and X_1=X. Unlike the previous example, the morphisms s and t now play an important role. We define X\times G\xrightarrow{s}X to be the projection onto X and X\times G\xrightarrow{t}X to be the map sending (x,g) to g\cdot x. It isn’t hard to work out the correct definitions of c, i, and e.
  • The Projective Line One reason that we like groupoids is that they provide a framework that formalizes what it means to glue together elements of a category along identifying maps. In this example, we’ll work in whichever category you like to think of the complex projective line \mathbb{CP}^1 in. For example, we could work in the category of schemes. As we know, one way to define the projective line is to start with two copies of the affine line \mathbb{A}^1_{\mathbb{C}}, and glue them together. Let A_0 be the copy containing 0, and let A_\infty be the one containing infinity. Furthermore, we define G_0=A_0\setminus\{0\} and G_\infty=A_\infty\setminus\{\infty\}. Following the usual construction, we glue A_0 and A_\infty by identifying G_0 with G_\infty via the map 1/z. In other words, we have a map G_0\xrightarrow{\alpha}G_\infty and its inverse \beta.

    We will now define a groupoid which captures the data in this gluing process. For starters, the elements that we would like to map to one another are the points of A_0 and A_\infty so we define the scheme X_1 to be the disjoint union of these two spaces:

    X_1=A_0\coprod A_\infty

    We now need to find a scheme X_0 that parameterizes our gluing maps. We claim that the following scheme does the job:

    X_0=A_0\coprod A_\infty \coprod G_0 \coprod G_\infty

    Indeed, the A_0 component represents the identity maps for the points in A_0 as we would clearly like to identify points to themselves. A_\infty plays the same role. The points of G_0 represent the maps from the points in G_0 to the points in G_\infty. In other words, this part of X_0 tells us how to identify G_0\subset A_0 with G_\infty\subset A_\infty. The component G_\infty plays the same role in the other direction. It is easy to formalize this by defining the source and target maps. For example, s will send a point in G_0\subset X_0 to the corresponding point in G_0\subset A_0\subset X_1 and t will send a point a\in G_0\subset X_0 to \alpha(a)\in G_\infty\subset A_\infty\subset X_1.

    While reading this example you may have been bothered by the following objection. Instead of fooling around with affine lines and gluing maps, why couldn’t we have simply taken both X_0 and X_1 to be \mathbb{P}^1 itself, which is surely a perfectly legitimate scheme, and set the source and target maps to be the identity morphisms? The answer is that this indeed would have been a perfectly good groupoid, and in general, there is typically more than one groupoid that intuitively corresponds to an object that we are interested in. The quest to tie these various groupoids into a slick coordinate-free package is what will lead us to the notion of a stack.

    Furthermore, this example showcases the analogy between groupoids and atlases of manifolds.


    Paraphrasing David Nadler, a groupoid is a stack together with a choice of an atlas.

    As in the example of the projective line, there may be multiple ways of constructing a groupoid that encodes a structure of interest. How can we tie these constructions together into a single object? In other words, what are the abstract properties of groupoids which allow us to declare when two of them are the “same”?

    We turn to the land of manifolds for inspiration. As Grothendieck realized, instead of studying a manifold directly, it is sometimes convenient to study the functor of points that it induces on the category of topological spaces. In other words, we may associate to an atlas \mathcal{A} of a manifold M the functor

    h_{\mathcal{A}}:\textbf{Top}\to\textbf{Set}

    which sends a topological space X to \textrm{Maps}(X,\mathcal{A}). This last set is the set of maps from X to M that factor through the atlas. Explicitly, to get such a map we cover X by a collection of open subsets and map them to the components of the atlas such that the obvious compatibility conditions are satisfied. We thus end up with a map of topological spaces from X to the manifold.

    The advantage is that different choices of an atlas on M will still induce the same functor of points. In other words, they will allow the same maps from a given topological space. This extra level of abstraction provides us with an intrinsic object that is independent of our particular construction of the manifold.

    In fact, we could reverse this process: isolate a list of properties shared by functors of the form h_\mathcal{A} for an atlas \mathcal{A}, and define a manifold to be a functor from \textbf{Top} to \textbf{Set} which satisfies these properties. Of course, isolating these properties may be difficult, but the payoff would be a completely coordinate-free definition of a manifold in terms of the ways in which topological spaces can map to it.

    The idea in the case of groupoids is similar. As before, we first must fix a category \mathcal{C}. Given a groupoid G=(X_0,X_1,s,t,c,e,i), we can define a functor

    h_G:\mathcal{C}\to\textbf{Groupoid}

    where, somewhat confusingly, \textbf{Groupoid} is the category of groupoids taken in the category of sets (otherwise just known as “groupoids”), not in the category \mathcal{C}.

    The functor h_G associates to an object C\in\mathcal{C} the groupoid with

    X'_0=\textrm{Maps}(C,X_0),\qquad X'_1=\textrm{Maps}(C,X_1)

    and where the maps s', t' etc… are the naturally induced maps. Notice the similarity with the manifold case. In order to make the comparison more evident, consider the groupoid in the example of the projective line. In that case, X_1=A_0\coprod A_\infty and so \textrm{Maps}(C,X_1) can be viewed as maps to our chosen cover. The other groupoid conditions tell us about the compatibility required for this to induce a map to \mathbb{P}^1.

    In this construction, we’ve swept one detail under the rug. Notice that if we take, for example, our object to be \mathbb{P}^1 itself, then the identity map from \mathbb{P}^1 to \mathbb{P}^1 does not factor through X_1=A_0\coprod A_\infty. So to get all of the maps we need, we have to “sheafify” the functor h_G. That is, we have to add elements that appear once we cover our object C\in\mathcal{C} by a sufficiently fine cover. This is entirely analogous to what happens in the manifold case.

    Even our first (and somewhat trivial) example of a groupoid gives us an interesting functor. Recall that in that example, we had X_0=G and X_1=\{pt\}. In that situation, X'_1=\textrm{Maps}(C,\{pt\}) isn’t interesting, but X'_0=\textrm{Maps}(C,G), together with the composition, inverse and identity maps, gives us the data of a section of a G-bundle on C.

    As with manifolds, we would now like to define a stack to be a functor from \mathcal{C} to \textbf{Groupoid} which is “similar” to a functor of the form h_G for some groupoid G. However, given the nature of this exposition, we won’t go into the details of the meaning of the word “similar”. On the other hand, I do think that it is instructive to try to get a feel for what such a functor should look like. For starters, the functor of points of an actual object in \mathcal{C} is of this form. More generally, given an equivalence relation on an object, it is easy to see that we can use it to construct a functor of the form h_G for a groupoid G, even if the quotient may not exist as an object in our category. So whatever “similar” means, such functors will at the very least allow us to describe quotients of objects by various relations. This is extremely useful in categories such as that of schemes, where constructing quotients is typically hard, or even impossible.
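    Since everything in sight can be made finite, we can also sketch the functor of points concretely. The following Python snippet (the encoding of \textrm{Maps}(C,X) as tuples and the names h_G and maps are my own choices, not anything from the post) applies h_G to a test object C for the action groupoid of \mathbb{Z}/2 acting on a two-element set by swapping:

```python
from itertools import product

def maps(C, X):
    """All functions C -> X, encoded as tuples of values indexed by C's ordering."""
    return list(product(X, repeat=len(C)))

def h_G(C, X0, X1, s, t):
    """Evaluate the functor of points of the groupoid (X0 => X1) at the set C."""
    X0_prime = maps(C, X0)  # Maps(C, X0): families of morphisms
    X1_prime = maps(C, X1)  # Maps(C, X1): families of elements
    # The induced maps s', t' act by postcomposition with s and t.
    s_prime = {f: tuple(s[v] for v in f) for f in X0_prime}
    t_prime = {f: tuple(t[v] for v in f) for f in X0_prime}
    return X0_prime, X1_prime, s_prime, t_prime

# The action groupoid for Z/2 acting on X = {0, 1} by swapping:
# X0 = X x G, X1 = X, s = projection, t(x, g) = g.x
X = [0, 1]
G = [0, 1]  # Z/2, written additively
X0 = [(x, g) for x in X for g in G]
X1 = X
s = {(x, g): x for (x, g) in X0}
t = {(x, g): (x + g) % 2 for (x, g) in X0}

C = ["a", "b"]  # a two-point test object
X0p, X1p, sp, tp = h_G(C, X0, X1, s, t)
print(len(X0p), len(X1p))  # 16 maps C -> X0, 4 maps C -> X1
```

    Note that, as discussed above, for more interesting objects C this raw functor would still need to be sheafified.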

This entry was posted in Exposition.

1 Response to Introduction to Stacks

  1. alexyoucis says:

    First, some small typos:

    “It is a manifold worth of study”—should be–>”It is a manifold worthy of study”

    “ambiant”—should be–>”ambient”

    “homeomrphisms “—should be—>”homeomorphisms”

    Some comments:

    -When you say “we’d like to think of X_1 as being the components of our atlas”, I think you mean “components of our chart”, and same with the next usage of “atlas” in that sentence. I know you know this, but mixing-up chart and atlas is exactly the opposite of what you want to do. You want your groupoid to be like a chart (one way of trivializing the manifold), and the stack to be like the atlas (the abstract way of identifying all trivializations). It might be nice to draw out precisely this analogy more.

    Also, I’m not precisely sure what you mean by “the components of our atlas”. Do you mean

    \displaystyle X_1=\bigsqcup_\alpha U_\alpha

    where \{(U_\alpha,\varphi_\alpha)\} is your set of charts covering the manifold? Also, what in this context is X_0? I think it’s just

    \displaystyle \bigcup_{\alpha,\beta}U_\alpha\cap U_\beta

    with the two arrows X_0\to X_1 being the obvious inclusions on each open: U_\alpha\cap U_\beta\to U_\alpha and U_\alpha\cap U_\beta\to U_\beta.

    Is that correct?

    – I think you can certainly naturally extend your first example. In particular, if you choose X_1 to be the terminal object of \mathcal{C} (assuming it has one!), then you’re really just defining a group object–something people are probably very aware of.
    – The same comment works for your second example. People are probably used to the idea of a group object G of \mathcal{C} acting on an object X of \mathcal{C} (e.g. a Lie group acting on a manifold), and your example easily extends to this case.
    – In your example with \mathbb{P}^1, I think it’s worth pointing out that while you certainly could have taken \mathbb{P}^1=X_1=X_0, it’s not entirely in line with your notion of charts. It seems like a better analogy would be to think of your first groupoid as being a groupoid in \mathbf{AffSch}/\mathbb{C}. So, what you’ve done is present a ‘covering by affine charts’ of the non-affine scheme \mathbb{P}^1. You then want to take something like X_1=X_0=\mathbb{P}^1, but this is no longer a groupoid in \mathbf{AffSch}/\mathbb{C}. So, you move into the larger category \mathbf{Sch}/\mathbb{C}, or even all the way to stacks!, to find such an object.
    -“Paraphrasing David Nadler, a groupoid is a stack together with a choice of an atlas” More of a typo, but I think you mean chart here again, right?
    – In fact, I think this atlas/chart thing pervades the rest of the post. I’m not trying to be annoying–I just want to make sure I’m not thinking about this wrong?
    – Maybe I’m being crazy, but your definition of h_{\mathcal{A}} seems weird to me. Namely, for a covering chart C=\{(U_\alpha,\varphi_\alpha)\} on a manifold M, it seems more natural to consider h^C. Namely, morphisms M\to X are morphisms U_\alpha\to X which glue. I guess it makes no formal difference (Yoneda works both ways) but do you feel that yours is more natural? I know it’s wrong from the sheaf perspective though.
    – Your definition of h_G is a little unclear to me. I think what you’re saying is take the groupoid X_0\xrightarrow{s,t}X_1 to the groupoid h_G(X_0)\xrightarrow{h_G(s),h_G(t)}h_G(X_1) (with the obvious induced maps), correct?
    -This is with regards to the first paragraph after remarks:

    So, this is a little unsettling for me. I’m used to most topologies being subcanonical. So, if you hand me a presheaf=functor of the form h_X, I expect it to already be a sheaf. Can you explain, in the case of schemes, how h_X for a scheme X, relates to h_G for a groupoid G? For example, I believe you can just embed Schemes into Groupoids in Schemes by X\mapsto G_X:=(X\xrightarrow{\text{id},\text{id}}X). If so, then I’d imagine that h_{G_X} is a sheaf since h_X is! Can you explain what goes wrong here?

    More generally, when is h_G already a sheaf? What does it being a sheaf tell you about the Groupoid G? That it’s somehow a ‘fine enough’ covering? For example, I somehow think the above confusion is cleared up, once again, if you don’t work in \mathbf{Sch}/\mathbb{C} for that example, but in \mathbf{AffSch}/\mathbb{C}. Yeah?

