Algebra IV

doc. Lukáš Vokřínek, PhD.

February 13, 2024

Contents

Introduction iii
Syllabus iii
1. Noetherian rings 1
2. Invariant theory 10
3. Localization 12
4. Primary decomposition 16
5. Chain complexes 19
6. Abelian categories 28
7. Derived functors 30
8. Balancing Tor and Ext 34
9. Ext and extensions 39
10. Homological dimension 42
11. Group cohomology 44
12. Flatness is stalkwise 51
13. Simplicial resolutions 53
14. Representation theory 56
15. Characters of groups 60
16. Representations of symmetric groups S_n 62
17. Integrally closed rings, valuation rings, Dedekind domains 62
18. Some interesting exercises 68
19. Possible essay topics 69

Introduction

Introduction will be here at some point (or not).

Lukáš Vokřínek

Syllabus

Syllabus will be here at some point (or not).

1. Noetherian rings

Definition 1.1. Let A be a ring. We say that an A-module M is noetherian if it satisfies the ascending chain condition for submodules, i.e. if there exists no strictly increasing chain M_0 ⊂ M_1 ⊂ ⋯ ⊂ M_n ⊂ ⋯ of submodules of M. As a special case, we say that A is a noetherian ring if it is noetherian as an A-module, i.e. if it satisfies the ascending chain condition for ideals of A.

Theorem 1.2. An A-module M is noetherian iff every submodule of M is finitely generated.

Proof. Assume that M is noetherian and let L ⊆ M be a submodule that is not finitely generated. We construct inductively a strictly increasing sequence of finitely generated submodules L_n ⊆ L in the following way: we start with L_0 = 0 and then, inductively, L_n ⊊ L (for otherwise L would be finitely generated) and we set L_{n+1} = L_n + Ax_{n+1} where x_{n+1} ∈ L ∖ L_n. This strictly increasing chain contradicts M being noetherian.

For the opposite implication, assume that every submodule of M is finitely generated and that M_0 ⊆ M_1 ⊆ ⋯ is a sequence of submodules of M. Then M_∞ = ⋃_n M_n is a submodule. It is finitely generated by assumption, M_∞ = A{x_1, …, x_k}, and since each x_i lies in some M_j, there exists n such that x_1, …, x_k ∈ M_n. Then M_n = M_{n+1} = ⋯. □

Theorem 1.3. Let 0 → M′ → M → M″ → 0 be a short exact sequence of A-modules, with maps α: M′ → M and β: M → M″. Then M is noetherian iff both M′ and M″ are noetherian.

Proof. If M is noetherian, then the lattices of submodules of both M′ and M″ ≅ M/M′ are sublattices of the lattice of submodules of M and as such do not contain an infinite strictly increasing chain.

Assume conversely that both M′, M″ are noetherian and let M_0 ⊆ M_1 ⊆ ⋯ be a sequence of submodules of M. Then M′_n = α^{-1}(M_n) is constant for n ≫ 0 and so is M″_n = β(M_n). But then so must be M_n: for if x ∈ M_{n+1}, then β(x) ∈ M″_{n+1} = M″_n and so β(x) = β(y) for some y ∈ M_n. Then x − y lies in ker β = im α, so x − y = α(z) for some z ∈ M′_{n+1} = M′_n, and thus x = y + α(z) ∈ M_n. (Alternatively: the inclusion M_n → M_{n+1} is an extension of the inclusions M′_n → M′_{n+1} and M″_n → M″_{n+1}, which are isomorphisms for n ≫ 0, and the 5-lemma gives the result.) □

Alternative proof. Assume conversely that M′, M″ are noetherian and let L ⊆ M be a submodule. Then for L′ = α^{-1}(L), L″ = β(L) we get a short exact sequence 0 → L′ → L → L″ → 0. Since both L′ ⊆ M′ and L″ ⊆ M″ are finitely generated, so is L. □

Corollary 1.4. If A is a noetherian ring, then every finitely generated A-module M is noetherian.

Proof. The direct sum of two modules can be expressed via a (split) short exact sequence 0 → M′ → M′ ⊕ M″ → M″ → 0; the previous theorem thus shows that every finitely generated free module A^n is noetherian, and so is every quotient of it, i.e. any finitely generated module. □

In what follows, the commutativity assumption is crucial.

Definition 1.5. An A-algebra is a homomorphism of rings ρ: A → B.
Mostly, it will be a mono and we will thus think of B as a supring of A. 1 1. Noetherian rings Example 1.6. A[x±,... ,xn] is an A-algebra. Since B is canonically a i3-module, by restricting scalars along p we may also treat it as an A-module. An alternative definition of an A-algebra is as an A-module B together with an A-bilinear mapping B x B —» B (multiplication) that, together with the addition, makes B into a ring (this means that B is a monoid object in A—Mod). Definition 1.7. We say that an A-algebra B is finitely generated, when there exist b±,..., bn G B that generate B as an A-algebra, i.e. via addition, multiplication and multiplication by scalars from A. We write B = A[b±,..., bn]. We say that an A-algebra B is finite, when B is a finitely generated A-module (i.e. there exist b±,..., bn G B that generate B via addition and multiplication by scalars from B). We write B = A{b±,..., bn}. We remark that finite generation is equivalent to the existence of a surjective homomor-phism of A-algebras A[x±,..., xn] —> B (sending Xj to the generators 6j of B; this is so because ..., xn] is a free A-algebra on generators x±,..., xn). For any finite A-algebra there exists a surjective homomorphism of A-modules ..., xn} —> B. Theorem 1.8. Let A be a noetherian ring and B a finite A-algebra. Then B is also a noetherian ring. Proof. By the corollary, B is a noetherian A-module, so every A-submodule of B is finitely generated as an A-module. This implies easily that every ideal of B (i.e. i3-submodule A-submodule) is a finite generated as an ideal (i.e. i3-submodule). □ Example 1.9. The ring Z is noetherian. Then also Z[i] = {a + bi \ a, b G Z} is noetherian. Theorem 1.10. Let A be a noetherian ring, D C A a multiplicative subset. The also the localization D~1A is a noetherian ring. Proof. Again, the lattice of ideals of D~1A is a sublattice of the one for A. □ Theorem 1.11 (Hilbert basis theorem). If A is noetherian, then so is A[x]. Proof. Let / C A[x] be an ideal. We define an ideal J = {a G A\3p G / : p = ax1" + lot}, i.e. the ideal of leading coefficients of the polynomials from /. Let J = (a±,..., a^) and pick polynomials pi G / with leading coefficients a^; we may assume that they all have the same degree r. The set A 1 we have ai = fii,..., = /3j_i, a« > /3j. Since this is a linear order, we may speak of the leading term of a polynomial / £ k[a?i,..., xn]: when / = aaxa + apx13 = aaxa + lot /3 1 reduces to y just using fy) and the reduced Grobner basis is ^3). The quotiend k[x,y]/I or perhaps rather }s\x,y\/^/l has a close connection to the solution set of fi = 0, J2 = 0. It consists of three points [0,0], [—1,1], [1,1] and therefore dimk[x, y]/Vl = 3. At the same time dimk[x, y]/I = 4, since the point [0, 0] should be taken "twice", concretely x(y — 1) ^ I, but (x(y — l))2 G /, so that x(y — 1) G V/x / (the function x{y — 1) vanishes on the three points, but not up to a sufficiently high order). o Lemma 1.18. If~LM.(f), LM(g) are coprime, then S(f,g) can be reduced to zero, using only f, 9- Proof. For simplicity, we may assume /, g monic. By assumption S(f, g) = LM(g)/ —LM(/)g and in each step we will subtract a multiple of the from tf where t is a term of g or adding a multiple of the form sg where s is a term of /, in such a way that in the end the S-polynomial will reduce to gf — fg = 0 (the point is that every term st turns up once with a plus sign and once with a minus sign and it can only be a leading term when s is a leading term of / or t is a leading term of g). □ HW 1. 
Solve the following system of polynomial equations using Gröbner bases:

x² + y + z = 1,
x + y² + z = 1,
x + y + z² = 1.

1.2. The confluence approach

In the case of one variable, understanding polynomials modulo g is quite simple computationally. One can always simplify any polynomial f to its remainder modulo g, and two polynomials are congruent iff they give the same remainder. We will now try to outline a theory for more variables that still allows one to associate remainders (more precisely, canonical forms) that are in bijection with congruence classes. The congruence relation will be expressed via a simpler relation of reduction.

Consider a monic polynomial g = x^β − r with x^β > LM r. First, simply subtracting g from x^β yields r, and we represent this as a reduction x^β →_g r (replacement of x^β by r, like in a substitution in systems of equations). More generally, we get a similar rule by subtracting an ax^α-multiple of g,

ax^α · x^β →_g ax^α · r,

and we may think of this again as replacing the copy of x^β in the product on the left by r. Yet more generally, if a polynomial f contains a term ax^α · x^β, we write f →_g f′ = f − ax^α · g,

⋯ + ax^α · x^β + ⋯ →_g ⋯ + ax^α · r + ⋯,

and in effect this yet again replaces this particular appearance of x^β by r. More precisely, we say that f →_g f′ at x^α · x^β. Of course, if a = 0 we get f′ = f. We may describe the result of f →_g f′ at x^α · x^β equivalently in the following way: f′ is the unique polynomial that differs from f by a scalar multiple of x^α · g and has zero coefficient at x^α · x^β. Using this, if f and f̃ differ by a scalar multiple of x^α · g, then reducing both polynomials at x^α · x^β clearly yields the same f″:

f →_g f″ ←_g f̃.

This implies easily that two polynomials are congruent modulo (g) iff they can be joined by an (arbitrarily long) zig-zag of reductions →_g.

More generally, for a set of polynomials G we denote by →_G the union of the above reductions →_g with g ranging over G. In fact, we will denote by →_G the closure under iterations, so that f →_G f′ if f can be reduced to f′ by a finite sequence of reductions →_g with g ranging over G. If we want to emphasize that exactly one reduction is used, we will speak of a one-step reduction. Generalizing the above, we see that f and f′ are congruent modulo (G) iff f and f′ can be joined by a zig-zag of reductions →_G.

Our goal for this section is to understand when zig-zags of reductions can be replaced by reductions; this will then give the canonical forms and the decision procedure for congruence, as promised. There are two important properties that a reduction relation can satisfy: termination and confluence.

The termination property asserts that any sequence f_0 →_G f_1 →_G ⋯ is eventually constant. Termination always holds, as follows from the well-foundedness of the monomial order (assume that the sequence does not stabilize and consider, for each n, the biggest monomial in f_n that admits a G-reduction; it must exist, for otherwise f_n would be reduced and the sequence would stabilize; this sequence of monomials is non-increasing, since a reduction only introduces terms strictly smaller than the monomial being reduced, and hence it stabilizes; next, consider the sequence of the second biggest such monomials, etc.; the stabilized monomials then form an infinite decreasing sequence, giving a contradiction). Termination implies that, starting from any given f and applying reductions, at some point we arrive at a reduced polynomial h, i.e. one that does not allow any nontrivial reduction. We will say that h is a normal form of f.
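As a small illustration of the reduction relation (an elementary example added for concreteness): in k[x, y] with the lexicographic order x > y, take G = {g} with g = x² − y, so that x^β = x² and r = y. Starting from f = x³ + x²y² we may reduce

f = x·x² + x²y² →_g xy + x²y² →_g xy + y³,

first at x · x² and then at y² · x². No term of xy + y³ is divisible by x², so this is a normal form of f. Indeed, f − (xy + y³) = (x + y²)(x² − y) ∈ (g), in agreement with the fact that a polynomial and any of its normal forms are congruent modulo (g).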
Heading towards uniqueness of normal forms, the confluence property asserts that given any /, any two reductions of it admit a common reduction, i.e. we can complete the diagram: f- l l 4- Clearly, if this is the case and both /{ and f2 are reduced then they must be equal to /' and thus equal to each other. Consequently, a normal form of a polynomial is unique Proposition 1.19. Suppose that -^q is confluent. Then two polynomials are congruent modulo (G) iff they have the same normal form. In other words, the canonical map {normal forms} —> k[x]/(G) is a bijection. Proof. We have seen that f\ and /2 are congruent modulo {G) iff they can be joined by a zig-zag of G-reductions. Confluence implies that we may then replace this by the bottom span in •+ f *r and thus the normal form of f\ equals that of /' and symmetrically for /2 □ In the presence of termination, we will now show that the full strength of confluence is implied by a weaker version with both reductions of / being one-step reductions. Assuming this weaker version true, we temporarily call / bad if it admits two non-confluent reductions and we split each of them into its first step and the rest, as in the solid part of the diagram below. If none of f2, /" was bad then we could complete the picture starting from the top left square and obtain that also the two given reductions of / were confluent: l I l l 4- f'l--+f"- I I 4- --)■' I I 4- l I I 4- 8 1. Noetherian rings Thus, at least one of /{', f2, /" must be bad and we can then proceed inductively and construct in this way an infinite sequence of nontrivial reductions, yielding a contradiction. The bad news is that confluence does not always hold for —>q, but we will see that it does hold if (and only if) G is a Grobner basis. We will need a useful observation that uses the additive structure of polynomials: Lemma 1.20. Two reductions of f as in l l 4- are confluent if f2 — f[ —>g 0. Proof. Decompose the reduction as f " fi f" 0 with the first step happening at xa and apply corresponding reductions fl fl at xa ■, f fi at x°"■ It is then easy to see that /" = f2 — /{'. Proceeding in this way, we produce /{ —>q hi and f2 —>g h2 with 0 = h,2 — hi, so that hi = h2 is the required /'. □ Theorem 1.21. The reduction -^g i-s confluent iff G is Grobner. Proof. Assuming G Grobner, any h £ {G) may be reduced at its leading monomial to obtain h -^g h' with h' £ (G) smaller (in terms of its LM) so that we obtain h -^g 0 by well foundedness. Now for any two reductions f- l l 4- we have f2 — fi £ (G) and thus f2 — fi —>g 0, implying confluence through the previous lemma. In the opposite direction, if h £ (G) then h is congruent to 0 modulo (G) and thus they have the same normal form. Since 0 is reduced (having no term), this means in effect that h -^g 0. Finally, since this reduction must eliminate the leading monomial of h at some point, that leading monomial must be divisible by one of the LM gi and G is indeed Grobner. □ We have just seen that confluence is equivalent to V/ £ (G): / —>g 0 . We will now show that it is enough to check this condition for some very special elements: Consider gi,g2 £ G and denote by x13 the least common multiple of LM gi, LM g2. We define the S-polynomial S (91,92) -9i -92- LMgi LMg2' Clearly, the S-polynomial belongs to (G), showing the necessity in the next theorem. 9 2. Invariant theory Theorem 1.22. The reduction —»(; i-s confluent iff S(gi, 52) —>g 0 for each 51,52 £ G. Proof. 
To prove sufficiency, consider two one-step reductions / —>^ /{ and / —>g2 f2 as in the lemma. If they happen at different monomials then /{ — f2 is a linear combination of 51 and 52 with non-cancelling leading terms, so that we can easily reduce f2 — f[ —>g 0 using only gi, 52 (first use the one with the bigger leading monomial). If the reduction happens at the same monomial then this monomial must be divisible by x13 from the definition of the S-polynomial. Writing the corresponding term of / as axa ■ x13, it is easy to see that fx13 x13 \ fi - /i = ax°~ " —9i - Tin—92 = axa ■ S(gi,g2). Since this reduces to zero, the previous lemma gives confluence. □ This gives correctness of the Buchberger algorithm. Every time we add anything to G, the monomial ideal generated by the leading monomials of G increases, so the algortihm must terminate by Hilbert basis theorem. In that case, S(g±,g2) —0 for all 51,52 £ G, so the reduction —>q is confluent and thus G is Groobner. We will now give a few applications. • membership test: Given / and G, decide whether / £ (G). The algorithm first enlarges G to a Grobner basis and then reduces / to its normal form / —>q h. Now / £ (G) iff h = 0. • equality test: Given G and H, decide whether (G) = (H). The algorithm first enlarges both G and H to Grobner bases and then tests whether V5 G G: 5 £ (H) and also the symmetric version. • elimination ideal: For the lexicographical order of monomials with x > y, let G be a Grobner basis. Then k[y] n (G) = (k[y] n G), since any / £ k[y] n (G) reduces to zero, / —>g 0 and in this process we may only use elements of G lying in k[y] by our assumptions on the monomial order. • systems of polynomial equations: Let G be a system of polynomials over an algebraically closed field. Then G = 0 implies / = 0 iff some power of / lies in (G). Thus, the system implies some equation / = 0 with / 6 k[y] iff (G) contains some such / and by the above, this is equivalent to the Grobner basis containing some such /. This allows one to compute solutions of systems of polynomial equations to some extent: Find such an /, find its roots, plug in one after another into the remaining polynomials and thus continue with fewer variables. • intersection of ideals: One checks that (G) n (H) = k[x] n ((1 — t)G,tH), where the right hand side takes place inside k[t,x]. Thus, one may compute the intersection using the elimination ideal method, this time with t > x. 2. Invariant theory This is a nice application of the Hilbert basis theorem. We assume here that k is a field of characteristic coprime to the order of a finite group G (this condition will also be important for the representation theory later in the course). We will consider an action of G on the polynomial ring k[x]. The invariants (the collection of invariant polynomials in this case) is 10 2. Invariant theory the subset k[x] = {/ G k[x] | Va G G: a • / = /}. As an example, the symmetry group Sn acts on the variables and thus on the polynomials, e.g. (1 2) -x\x2 = x\x\. The main theorem in this respect is that for the elementary symmetric polynomials S\ — X\ -)-••• -\- Xn, S2 — X\X2 ~\~ ' ' ' ~\~ X^Xj -\- ' ' ' -\- Xn — \Xn^ . . . , Sn — X\ • • • Xn the canonical map k[y] ->• k[x], yi ^ Si is an isomorphism onto the invariants kfx]5™. The action of a G Sn on k[x] is obtained from the action on variables in two steps: k{x} k{x} k[x] k[x] The action on the set of variables induces a linear action on the vector space of linear forms (i.e. 
essentially on kn) and that induces an algebra action on the polynomial ring, i.e. it satisfies a • (/ + 9) = a • / + a • g, a • 1 = 1, a • (fg) = (a • /)(a • g). It follows that the G-invariants form a k-subalgebra. Theorem 2.1 (Hilbert's on finite generation of invariants). The k-algebra k[x]G is finitely generated, i.e. there exists a surjection k[y] -» k[x]G. Proof. Denote by i: the inclusion. In this way, we can think of k[x] as an algebra over k[x]G. We will now construct a retraction p in the category of k[x]G-modules k[x] by the formula p(f) = ' SaeG a ' f average °f the elements in the orbit of /). The compatibility of the action with the algebra structure gives for / £ k[x]G: a ■ (fg) = (a ■ f)(a ■ g) = f(a-g) so that the action is indeed by k[x]G-linear maps and consequently so is p. Now imp C k[x]G 11 3. Localization since 1 1 aeG 1 1 aeG 1 1 aeG 1 1 aeG = P(f) since as a runs over elements of G, ba runs over the same set of elements (in a different order, i.e. a i—> ba is a bijection). Finally pi = 1 since, for / £ k[x]G, we have 1 1 aeG ' ' aeG Now let / C k[x] be the ideal generated by the homogeneous elements of k[x]G of positive degree, so that k(>0)[x]G C / C k[x]. By Hilbert basis theorem, we get / = (/i,..., fk) with fi some of the above generators, i.e. homogeneous elements of k[x]G of positive degree. We claim that k[x]G is generated as a k-algebra by the same set of elements fi, ■ ■ ■ , fk- Since the G-action respects the degrees of polynomials, a polynomial is G-invariant iff all its homogeneous components are G-invariant that leaves us to prove / 6 k[x]G homogeneous =4> / G ...,We prove this by induction on degf. If degf = 0, there is nothing to prove, so assume that degf > 0. Now / G / = (/i,..., fk) so we have an expression / = fi9i H-----1- fk9k and we may assume that all the gi are homogeneous (replace the gi by their homogeneous components of the appropriate degrees and the equality will remain valid). Now apply the retraction p to get / = fip(gi) h-----1- fkp(gk) (both the left hand side and the fi are G-invariant and p is k[x]G-linear). By induction, we may assume that all p(gi) already lie in k[/i,..., fk]. Thus, the same is true for /. □ 3. Localization Definition 3.1. A local ring is a ring (commutative with 1) with a unique maximal ideal. Theorem 3.2. A ring A is local iff its non-units form an ideal. In that case this ideal is the unique maximal ideal of A. Proof. The implication follows from (a) C M for every non-unit a, the opposite implication is obvious. □ 12 3. Localization Definition 3.3. Let A be a ring and D C A a multiplicative subset, i.e. a subset satisfying 1 £ D and x,y ^ D =^ xy ^ D. The decomposition of A x D with respect to the equivalence relation (ai, B such that p = pX. A-r-^B / D~XA x Proof. Since ^ = A(a)A( D~1A we study the relationship between the ideals of A and those of D~1A. We have maps between these sets that clearly preserve the ordering A* : {ideals of A} {ideals of D~1A} : A* with A*(J) = = {a G A | f G J} that clearly preserves primeness (e.g. A/A_1(J) —> B/J is clearly injective and a subring of a domain is itself a domain) and with A„(J) = D-XA ■ A(/) = {§ G D-XA | a e /} D-1! (i.e. the ideal generated by the image A(7). Clearly A*(A*(J)) = J and in the opposite direction A*(A*(/)) = {a G A | 3d G D: da G /} 14 3. Localization We call this the D-saturation of / and also say that / is D-saturated if it equals its saturation, i.e. if da G / =4> a G / (division by d G D). 
Obviously, by restriction, we get a bijection A* : {D-saturated ideals of A} = {ideals of D^A} : A* Further, a prime ideal P is D-saturated iff it is disjoint from D (if saturated then d = dl G / =4> 1 G /, i.e. nonsense, so that d ^ /; if disjoint, one can divide by d showing D-saturatedness). A*: {prime ideals of A disjoint from D} = {prime ideals of D~1A} : A* Thus, if D = R \ P the left hand side consists of prime ideals contained in P and as such contains a maximal element P, implying that D~1A = Ap has a unique maximal ideal, namely D-ip = {« | a G p b g p| (alternatively, it consists exactly of the non-units of Ap). More generally, any ideal / that is maximal among those disjoint from D must be D-saturated, since its D-saturation is still a proper ideal disjoint from D, so that it is in fact a maximal D-saturated ideal and as such is a pullback of a maximal ideal of D~1A, hence prime. The point of the localization D~1A lies in having less ideals, in particular prime ideals, and this simplifies the structure theory of modules. We will see some examples of this. The localization of a module is defined similarly by universal property M-'^N A A ,11 ~ / p D-XM where ./V is assumed to be an D~1A module, i.e. an A-module in which the multiplication map d • : N —> N is an isomorphism (look at the action map D~1A —> End(A^) and employ the universal property of the localization D~1A). Straight from the definition we see that if the multiplication maps are isomorphisms on M then we can take A = id, i.e. D^M = M. In general, since Homj4(M, N) = Honu(M, HomD-ij4(L)"1A, N)) = HomD-ij4(L)"1A ®A M, N) the so called extension of scalars gives a concrete construction D~1M = D~1A 0a M. It is then important that D~1A is a flat A-module (see below) and thus the localization functor is exact. We will now give a second construction D^M = {§| x G M, d G D} where similarly to the case of A, it is imposed that ^ = | iff fex = fdy for some / G D. To prove that this gives the previous localization, one has to prove that the maps D^A^a Mi=± I) M. given by a/d® x \—> {ax)/d and 1/d® x <—i x/d, are well defined (the first is the extension of the canonical inclusion A: M —> D~1M) and inverse to each other. This implies easily that D~1A is flat since for /: M —?► N injective the induced D^1 f: D^M —> D~1N satisfies 15 4. Primary decomposition D^1f(x/d) = f(x)/d = 0 iff ef(x) = 0, i.e. /(ex) = 0 and ex = 0 by injectivity of /; finally this gives x/1 = 0. Alternatively, one can express D~1A = (JdeD d^1 A = colim.deD A where the maps in the diagram are exactly of the form e • : A —» A from the copy of A with index d to the copy with index ed. It remains to show that the colimit indeed gives D~1A (easy) and that the diagram is filtered (very easy). Again, for D = R \ P we denote MP = D~XM. Theorem 3.9. For an A-module M we have: M = 0 4^ VP maximal: Mp = 0. Before starting the proof we define the annihilator of x G M to be the ideal Ann(x) = AnnM(s) = {a G A \ ax = 0}. Clearly x = 0 iff Ann(x) 3 1. The fraction % G D~x A then annihilates X(x) = f, i.e. ^ = 0 iff Be G D: eax = 0 (i.e. ea G Ann (a;)) iff ^ G D^1 Ann(x), so that we finally get Ann(f) = D^1 Ann(i). (This implies, in particular, that x G ker A iff D n Ann(x) 7^ 0 since these are exactly ideals giving the trivial ideal in the localization D~1A.) Proof. The implication is clear, so assume that 0 7^ x G M. Then Ann(x) ^ A is a proper ideal and there exists a maximal ideal P D Ann (a;). 
Denoting D = A \ P as usual, we obtain DflAnn(x) = 0 so that D^1 Ann(x) ^ 1 is also proper. Since it equals Ann(j), we must have 0 / j £ Mp and this module is thus also non-zero. □ Corollary 3.10. For an A-linear map f: M —?► N we have: f is mono/epi/iso 4^ VP maximal: the localized map fp: Mp —> Np is such. Proof. This follows from the chain of equivalences: / mono iff ker / = 0 iff (ker/)p = 0 iff ker fp = 0 (since the localization, being exact, commutes with kernels) iff fp mono. □ 4. Primary decomposition Let R be a (possibly graded) noetherian ring and let M be an P-module. Let us investigate when multiplication by r G R on the module M is non-injective - we may say that r is a zero divisor on M because this exactly means that there exists a nonzero x G M such that rx = 0. We denote Ann(x) = AnnM(s) = {r G R \ rx = 0}, the so called annihilator of the element x; it is easy to see that this is an ideal. The zero divisors on M are thus exactly the elements of the union of all annihilators Ann(x) for i/0. Of course, it is enough to consider the maximal such and we will show that these are prime ideals. We say that a prime ideal p is an associated prime of the module M if p = Ann(x) for some x G M. The set of all associated primes of M is denoted Ass(M). We will now explain a useful characterization of annihilators: an P-submodule generated by x is isomorphic to Rx = Rj Ann(x) by the first isomorphism theorem applied to the R-linear map R —> M sending 1 1—> x, whose image is obviously Rx and whose kernel is Ann(x). Thus, equivalently, a prime ideal p is associated iff M contains a submodule isomorphic to the cyclic module R/p. 16 4. Primary decomposition Example 4.1. Ass(i?/p) = {p} because R/p is an integral domain and thus the multiplication by any nonzero element is injective, i.e. Ann(x) = p for i/O. Lemma 4.2. Every maximal element of {Ann(x) | x 7^ 0} is an associated prime. In particular, for R noetherian, every annihilator Ann(x), for x 7^ 0, is contained in some associated prime. Remark. It is also true that, for a multiplicative subset D, a maximal element {Ann(i) | Aim(x) n D = 0} is an associated prime. This was proved in an earlier version and may be needed at some point... Proof. Let Ann(x) be maximal and let rs £ Ann(x). Then either sx = 0 and thus s G Ann(x) or r £ Ann(sx) = Ann(x) by maximality. □ As a simple consequence, we obtain the following theorem: Theorem 4.3. Let R be a noetherian ring. Multiplication by r G R on an R-module M is injective iff r does not lie in any associated prime of M. □ This theorem is useful especially because we will show that Ass(M) is finite for every noetherian (i.e. finitely generate) module M. The main tool here will be a so called primary decomposition. We say that a module M is p-primary if Ass(M) = {p}. We also say that M is primary if it is p-primary for some prime ideal p. In the case that M is not primary, it contains two submodules P = R/p and Q = R/q and in that case Ass(PDQ) C Ass(P) nAss(<3) = {p} n{q} = 0. Since every nonzero module has some associated prime, we get P n Q = 0. Theorem 4.4. Let M be a finitely generated module over a noetherian ring R. Then there exists a finite collection of submodules Mi, for i = 1,... ,n, such that 0 = ("^ Mj and such that each M/Mi is pi-primary and the prime ideals pi are all distinct. If this expression is irredundant (i.e. no Mi can be removed) then Ass(M) = {pi,... ,pn}. Proof. Let us call an expression Mq = [~\Mi with all M/Mi primary a decomposition of the submodule Mq. 
We will show that if Mq has no decomposition then there exists a strictly larger submodule without a decomposition and this would contradict M being noetherian. Since Mq admits no decomposition, M/Mq cannot be primary (for otherwise Mq = Mq would be a decomposition). As above, there exist two submodules M\/Mq, M[/Mq C M/Mq with zero intersection, i.e. with M\ n M[ = Mq. If both M\ and M[ had decompositions we would obtain a decomposition for Mq by intersecting these, so one of them does not admit a decomposition, as claimed. We will now show Ass(M) C {p1;..., pn}. First we prove Ann(x) = AnnM/Ml(x) n • • • n AnnM/Mn(x) (for a nonzero x £ M, we have r £ Ann(x) iff rx = 0 iff Vi: rx G Mj iff Mi: r G Ann^/^.(x)). Assuming now that Ann(x) C Annjy^.(x) for all i, we pick Sj £ Ann^/^1(x) \ Ann(x). Their product s± ■ ■ ■ sn then lies in the intersection, hence in Ann (a;), but Si ^ Ann (a;), so Ann(x) is not prime. So for prime Ann(x) this must equal one of the Ann^/^.(x), and the latter can only be prime if it equals pi. For the opposite inclusion we need the irredundancy: It gives fli^j % 7^ 0 and this intersection thus contains some non-zero element, necessarily x ^ Mj, that has AnnM(s) = Amijy^(x) and, for some multiple y 6 Rx C f^\i-^j Mi, we obtain Ann^/^.(y) = Pj since M/Mj is Pj-primary. □ 17 4. Primary decomposition Finally, we will study the behaviour of primary decomposition under localization, so let D be a multiplicative subset. Lemma 4.5. Let x 6 M. A maximal element of {Ann(dx) | d 6 D} equals the D-saturation of Ann(x). In particular, for R noetherian, the D-saturation of every annihilator Ann(x) is an anni-hilator Ann(dox). Proof. Let Ann( D~1M we recall that Ann(x/1) = D^1 Ann(x) and since the localization gives a bijection {prime ideals of A disjoint from D} = {prime ideals of D~1A} (and those intersecting D give the full ring on the right hand side) we can determine the associated primes of D~1M: Ass(L>"1M) = {D~1P | P G Ass(M), I) n P = 0}. This takes a particularly simple form for a P-primary module M over a noetherian ring (is this necessary?): then either D~XM is D^P-primary when D n P = 0 or D^M = 0 when DflP / i (since then D~1M has no associated prime). Now apply this to a primary decomposition 0 = f^\Mi with M/Mj being Pj-primary. We get 0 = f|^-1M, with D^1 MJD^1 Mi being D^1 Pj-primary; when some D^1 M/D^1 Mi is zero, i.e. D~1Mi = D~1M, we may remove it from the decomposition. For a minimal associated prime Pj we then get only one non-zero submodule, namely 0 = D^Mj that together with the monomorphism (since the module M/Mj is Pj-primary, we have Ann(x/1) = D^1 Ann(x) C D~1Pj and is thus proper, showing that x/1 ^ 0) M--->D-lM M/Mj)->D-1M/D~1Mj gives that Mj = ker Xj and as such is unique. For completeness, still over a noetherian ring, we prove that for any prime P D Ann(x) there is an associated prime lying between these two: consider A: M —> Mp and observe that Ann(x/1) = Ann(i)p is non-trivial. It is thus contained in some associated prime D~1Q £ Ass(Mp). As above, this means that Q £ Ass(M) (this is a bit circular, it seems that the general version of Lemma 4.2 is needed to conclude that there exists an annihilator maximal among those disjoint from D = R \ P and as such is the prime Q as above). This 18 5. Chain complexes implies that any proper ideal / lies in prime that is minimal above it: since Ass(P/7) is finite, it contains a minimal element; by the above it must in fact be minimal among all primes containing / = Ann(l). 
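A standard example illustrating both Theorem 4.4 and the uniqueness discussion (added here for concreteness): let R = k[x, y] and M = R/I for I = (x², xy), so that submodules of M correspond to ideals containing I. Then

I = (x) ∩ (x, y)² = (x) ∩ (x², y)

are two primary decompositions: R/(x) ≅ k[y] is (x)-primary, while R/(x, y)² and R/(x², y) are (x, y)-primary. Accordingly Ass(M) = {(x), (x, y)}, the prime (x) is the unique minimal one, and the corresponding component (x) appears in every irredundant decomposition (it is the kernel of M → M_{(x)}), whereas the (x, y)-primary component is visibly not unique.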
As a final application of the above, if I is P-primary then, in particular, P is the unique minimal prime above I (and hence the smallest prime containing I), and thus Proposition 4.6 gives

√I = ⋂_{I ⊆ Q prime} Q = P.

In particular, we obtain the following consequence:

rs ∈ I ⟹ r ∈ √I or s ∈ I

(the first condition means that r belongs to the unique associated prime of R/I, the second that s = 0 in R/I). In the opposite direction, when I satisfies this condition, then every zero divisor on R/I lies in √I, so that by Theorem 4.3 the union of all the associated primes of R/I is contained in √I. Since every associated prime contains I and hence also √I, every associated prime must in fact equal √I; there is thus only one associated prime and I is necessarily primary (with associated prime √I).

So for a primary ideal I, the radical √I is a prime ideal. The converse is generally not true, since √I being prime only means that there is a unique minimal prime over I, but some bigger prime may be associated as well. However, if √I is a maximal ideal, then I is automatically primary (the unique minimal prime is then the only prime containing I at all, so no bigger associated prime can occur). In particular, for a maximal ideal M, each power M^n is primary. It is interesting that P^n need not be P-primary for a general prime P; its P-primary component P^(n) = λ^*λ_*(P^n), the contraction of the extension of P^n to the localization at P, is generally bigger and is called the n-th symbolic power of P.

Proposition 4.6. The intersection of all prime ideals is the nilradical √0 = {r ∈ R nilpotent}. More generally, ⋂_{P ⊇ I prime} P = √I.

Proof. Clearly every nilpotent element lies in every prime ideal. Thus let a ∈ R, denote by D = {1, a, a², …} the corresponding multiplicative subset and assume that a lies in every prime or, equivalently, that every prime intersects D. Then the localization D⁻¹R contains no prime ideal and thus 1 = 0 in D⁻¹R. By Proposition 3.5, this is equivalent to D containing zero, i.e. to some power of a being zero. The second claim is obtained from the first by passing to the quotient ring R/I. □

5. Chain complexes

Definition 5.1. A sequence of R-modules A → B → C, with maps f: A → B and g: B → C, is said to be exact at B if im f = ker g. Similarly, one can define exactness of a longer sequence at any inner term. A sequence is exact if it is exact at every inner term.

Exercise 5.2. Characterize exactness of 0 → A → B, A → B → 0, 0 → A → B → 0, 0 → A → B → C, A → B → C → 0 and 0 → A → B → C → 0 (the last is referred to as a short exact sequence). In particular, prove that any short exact sequence is isomorphic to an "extension" 0 → A → B → B/A → 0.

In the condition im f = ker g, the inclusion ⊆ is equivalent to g ∘ f = 0, under which one may form the quotient ker g / im f that measures the difference between the two submodules. One may thus express exactness equivalently as g ∘ f = 0 and ker g / im f = 0. These two parts are the main idea of the definition of a chain complex.

Definition 5.3. A chain complex C is a diagram

⋯ → C_{n+1} → C_n → C_{n−1} → ⋯

with differentials d = d_n: C_n → C_{n−1}, in which d_n ∘ d_{n+1} = 0 for all n. The elements of C_n are the n-chains, the elements of Z_n(C) = ker d_n are the n-cycles and the elements of B_n(C) = im d_{n+1} are the n-boundaries; the condition d ∘ d = 0 says exactly that B_n(C) ⊆ Z_n(C), and the n-th homology of C is the quotient H_n(C) = Z_n(C)/B_n(C). We say that C is acyclic if H_n(C) = 0 for all n, i.e. if it is exact at every term.

Example 5.4. For a topological space X, the singular chain complex C(X) has C_n(X) the free module on the continuous maps Δ^n → X, where Δ^n denotes the standard n-simplex, i.e. the convex hull of any (n + 1)-tuple of affinely independent points, e.g. E_0, …, E_n ∈ R^{n+1}, the standard basis points of the affine subspace x_0 + ⋯ + x_n = 1. The operators d_i are given by restricting to the faces and the differential on C(X) is again d = Σ_i (−1)^i d_i.

de Rham cohomology: Ω^n M = {smooth n-forms on M}. Here the differential points in the opposite direction, Ω^n M → Ω^{n+1} M. We will formalize this later as a cochain complex; de Rham cohomology is the cohomology of this cochain complex.
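Before moving on to chain maps, here is a minimal algebraic example of the above notions: let C be the chain complex of Z-modules concentrated in degrees 1 and 0,

⋯ → 0 → Z → Z → 0 → ⋯,

with d_1 given by multiplication by 2. Then Z_1(C) = ker(2·) = 0 and B_0(C) = im(2·) = 2Z, so H_1(C) = 0, H_0(C) = Z/2 and all other homology groups vanish. The complex fails to be exact precisely at degree 0, and H_0(C) = Z/2 measures this failure.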
Every chain map induces maps Bn(C) Hn(C) -> Hn(D). We obtain Bn(D) and Zn(C) —> Zn(D) and thus also Proposition 5.6. The n-th homology forms a functor Hn: Ch(Mod^) Mod R- □ Definition 5.7. We say that a chain map / is a quasi-isomorphism if the induced map on homology is an isomorphism. As an example, a chain complex C is acyclic iff the unique map 0 —> C is a quasi-isomorphism iff the unique map C —> 0 is a quasi-isomorphism. We will now present a special class of quasi-isomorphisms, analogous to homotopy equivalences in topology. First we need the corresponding algebraic notion of homotopy. Definition 5.8. Let / and g be two chain maps C —> D. A chain homotopy from / to g is a collection of homomorphisms hn as in the (non-commutative!) diagram * C-n-l n—l such that dh + hd = g — f. We write h: / ~ g or / ~^ g or simply / ~ g if the homotopy is not important. A chain homotopy equivalence is a chain map /: C —?► D that admits a homotopy inverse, i.e. a chain map g: D —?► C together with homotopies gf ~ 1, fg ~ 1. Remark. Any continuous map between spaces induces a chain map between their singular chain complexes and any chain homotopy induces a chain homotopy (this is not completely straightforward). The simplicial situation is a bit more straightforward, but complicated enough to be explained at this point. We will give a nice interpretation (two in fact) of chain homotopy later. Proposition 5.9. Chain homotopic maps induce equal maps on homology. In particular, chain homotopy equivalences are quasi-isomorphisms. Proof. Let [z] G Hn(C) be represented by a cycle z. Then g(x) — f(x) = dh(x) + hd(x) o so that \g{x)\ = [f{x) + dh(x)] = [f(x)]. □ 21 5. Chain complexes Proposition 5.10. Chain homotopy equivalence is an equivalence relation that respects composition. We may thus form the homotopy category of chain complexes and chain homotopy classes of maps where chain equivalences are exactly the isomorphisms. Proof. We prove transitivity: if fi — fo = dh + hd and f2 — fi = dk + kd then f2 — fo = d{h+k)+{h+k)d. Similarly if f\—fo = dh+hd then gf\— gfo = g{dh+hd) = d{gh)+{gh)d. □ We index the modules in our chain complexes by integers, but we will be using a lot chain complexes indexed by non-negative integers only. One can extend such a chain complex by zeros and thus think of it as a chain complex in the original sense. In doing so, the non-negatively graded chain complex • • • —> C\ —> Co will also have the zero homology Ho{C) = Co/Bo{C) = coker(di) since every 0-chain is a cycle. Another variation, briefly mentioned above with connection to de Rham cohomology is that of a cochain complex. Another situation where cochain complexes arise naturally is upon applying contravariant functors to chain complexes - the direction of homomorphisms changes. We will distingiush notationally by using upper indices. Definition 5.11. A cochain complex C is a diagram -, jn-1 . -, . /—m—1 "_. /-in " ^ /—m+1 . in which dn o dn =0 for all n. We get notions of cochains, cocycles, coboundaries and cohomology, cochain maps and cochain homotopy in an obvious way. Again, non-negatively graded cochain complexes will play an important role and they will look C° -> C1 -> ■ ■ ■ so that the zeroth cohomology will be H°(C) = Z°(C)/0 = kerd0. Proposition 5.12. In a pullback square B^^C _i B'->C g' the induced map ker g —> ker ker (3 —> ker 7 —coker a —> coker (3 —> coker 7. If, in addition the map i is injective (i.e. 
the top exact sequence can be prolonged to the left by zero to a short exact sequence), so is the map ker a —> ker (3 and similarly for the surjectivity of the map B' —> C'. Proof. One first proves the version with short exact sequences in both rows. Since limits commute with limits, starting with the square B 4C B< ■ and applying kernels first in the horizontal and then in the vertical direction yields the same result as applying them in the opposite order, i.e. ker a is indeed the kernel of the map ker /3 —> ker 7 and this proves exactness at ker a and ker/3. By the dual argument, we are left to construct the "connecting homomorphism" 5 and to prove exactness at its domain and codomain. We define 5(C) = (i'r^p-Hc) where we need to verify that the preimage of /3p_1 amounts to showing that 0 = p'(3p c 6 ker 7. Now the preimage p -1 -1/ c) indeed exists. By exactness, this c) = 7pp '[c) = 7(c) and this holds since we assume is not unique and we have to show that the result does not depend on the choice. However, the choice is unique up to imi that is mapped by /3 to im(i'a) and further by (?') 1 1° una that is zero in coker a. Clearly, if c = p(b) for some b 6 ker /3 then the above prescription yields zero, so the composition Pp -1/ a[a) Now ker (3 —> ker 7 —> coker a is indeed zero. Let now c £ ker 7 be such that 5(c) = 0, i.e. p_1(c) + i(a) is still a preimage of c, and lies in ker/3, by an easy inspection. Finally, if i is not mono, replace A by imi (since this equals kerp, it admits a map a' to A' = kerj/) and apply the mono case. A- -»imj)- 0 -4 A' ^B p ->5 P , p' *C- -+0 The mono case also easily gets that the map ker a —> ker a' is epi and thus upon replacing ker a' by ker a, the sequence remains exact everywhere except at ker a, as claimed. □ 23 5. Chain complexes Remark. One can construct from a chain A " > B ^ > C a diagram (coming from certain map of double complexes) 0->A-VA®B-YB-J-0 /3a®l 0 ■ -4 B ■ 4C9B- -4C- -4-0 (the rows are not comprised of the inclusions and projections, they have to be twisted slightly), which gives the exact sequence relating kernels and cokernels of the maps a, /3a and /?. Proposition 5.14 (5-lemma). In a commutative diagram with exact rows if a, (3, 8, e are iso, then so is 7. More precisely, a is only required to be epi and e to be mono. Proof. Apply Lemma 5.20; denoting the image of the map B —» C for simplicity by BC etc., we obtain short exact sequences 0- 4C- 0- Pi -4 B'C - -+CD--4 CD' -40 -40 and the snake lemma gives that 7 is mono provided that /3^f and 7c) are mono. The second is easier, just apply the snake lemma in 0- 0- -4 CD' ■ 4D ->DE- Se ->D'E' -40 -40 to get 7c) mono if S is. The other condition is more complicated and involves the application of the snake lemma to 0- -4 AB- 4BC- -40 0- a/3 -4 MB' - 4B' Pi -*B'C -^0 to obtain /3^f mono provided that (3 is mono and a/3 is epi. Finally, a/3 epi follows from a epi by the last application of snake lemma where one needs to prolong the sequence one step to the left, say by kernels of the maps A —?► B and A' —> B'. Altogether, 7 is mono if (3 and 8 are mono and a is epi. Dually, 7 epi follows from (3 and 8 epi and e mono. □ The following is a converse to Proposition 5.12. I left it as an exercise, I think. 24 5. Chain complexes Proposition 5.15. In a commutative square if both g and g' are epi and the induced map ker g —> ker g' is an iso then it is a pullback square. Exercise 5.16. This is about self-duality of homology. 
Show that there exists a factorization coker dn+\-> ker dn-i dn+i ^ ■+ Cn-1 -Cn-2 im d. n+l im dri- ll—i and that the map on the top has kernel Hn{C) and cokernel Hn-i(C). The diagram is self-dual, so starting with a cochain complex, interpreting it as a chain complex in the opposite category and taking homology there yields exactly the cohomology of the original cochain complex. Theorem 5.17 (long exact sequence of homology). A short exact sequence 0 —> A —> B —> C —> 0 of chain complexes induces a natural long exact sequence of homology ----> Hn+1(C) A Hn(A) Hn(B) Hn(C) A Fn_i(A) Proof. Applying the previous exercise, we will consider the map coker kerdn_i for the involved chain complexes and write them as Cn{C)/Bn{C) —> Zn-i{C) etc. so that we obtain a diagram Cn{A)/Bn{A)-> Cn{B)/Bn{B) + Cn{C)/Bn{C)- -+0 0- ■+Zn_i(A). ■+Zn_i(5). ■+Zn_i(C) with exact rows (coker commutes with coker, similarly ker commutes with ker). Snake lemma gives a portion of the claimed long exact sequence, as required. □ Corollary 5.18. In a short exact sequence of chain complexes as above, A is acyclic iff B —?► C is a q-iso. Dually, C is acyclic iff A —?► B is a q-iso. Corollary 5.19. In a commutative diagram of chain complexes with exact rows if two of a, (3, 7 are q-iso's, so is the third. 25 5. Chain complexes Lemma 5.20. A long exact sequence ■ ■ ■ —> Cn+i —> Cn —> Cn-i —> ■ ■ ■ can be split into short exact sequences 0 —> Bn —> Cn —> Bn-i —> 0. Conversely, any collection of short exact sequences as above can be spliced into a long exact sequence. Proof. In a general chain complex, one has to replace the short exact sequences by 0 —> Zn —> Cn —> Bn—i —> 0 and add to these the short exact sequences 0 —> Bn —> Zn —> Hn —> 0 that define the homology as the quotient Zn/Bn. Again, one can splice such short exact sequences into a chain complex C with homology H. Definition 5.21. A resolution of a module A is a non-negatively graded chain complex C together with an "augmentation" map e: Cq —> A such that is an acyclic chain complex (the "augmented" chain complex). We say that C is a projective resolution if, in addition, C consists of projective modules. There is a nice "global" characterization of this, using the chain map e: C —> A[0] where A[0] denotes the chain complex whose only nonzero chains are in dimension zero and are A. Thus, the map is precisely ---->d->C0 e ---->0-> A Now the homology of the augmented chain complex agrees with the homology of C except in dimensions 0 and —1, where it is kere/i?o(C) and cokere. The first can be rewritten as ker(C0/B0(C) -U A) = kei(H0(C) A) while the second can be rewritten as the cokernel of the same map Hq{C) A. Since this is the induced map on homology, we are finished with the equivalence. We will give a different, more conceptual proof later. 26 5. Chain complexes Definition 5.22. Dually, a (co)resolution is a cochain complex C together with a coaugmen-tation map A —> C° such that the coaugmented cochain complex A -> C° -> C1 -> ■ ■ ■ is acyclic. An injective (co)resolution has all objects Cn injective. Definition 5.23. A functor F is additive if its action on morphisms C(c', c) —> T>(Fc', Fc) is a homomorphism of groups. Any additive functor F has an extension to a functor between chain complexes since it preserves composition and zero. We denote this extension again by F. Example 5.24. Horn functors Homp(A, —), Homp( —, A) (the second contravariant though), tensor product functors — (g>p A, A (g>p —. 
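For a concrete instance of Definition 5.21 (a standard example, added for illustration): over R = Z, the module A = Z/n admits the chain complex C with C_0 = C_1 = Z, C_i = 0 for i > 1 and d_1 given by multiplication by n, together with the augmentation ε: C_0 = Z → Z/n the quotient map. The augmented complex 0 → Z → Z → Z/n → 0 is exact and Z is free, hence projective, so C is a projective resolution of Z/n. Resolutions of this shape will reappear when derived functors of the tensor product are computed.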
We will now discuss basic properties of functors related to exactness or, equivalently, homology. It is easy to see that FO = 0 from the characterization of 0 as an object where 1 = 0 (one may also view this as a miliary case of Lemma 5.29). We now make a couple of definitions: Definition 5.25. An additive functor F is said to be right exact if the image of an exact sequence A -> B -> C -> 0 is an exact sequence FA -> FB -> FC -> 0 Equivalently, F pereserves cokernels. (By Lemma 5.29, it preserves finite coproducts and this is thus equivalent to preservation of finite colimits.) A left exact functor is an additive functor that preserves kernels. A functor is exact if it preserves short exact sequences. Exercise 5.26. Show that a functor is exact iff it preserves all exact sequences iff it is left exact and right exact. In particular, an exact functor preserves acyclic chain complexes. A generalization of this is the following. Lemma 5.27. An exact functor F commutes with homology, i.e. Hn(FC) = FHn(C). In particular, F preserves q-iso's. Proof. This is so since homology is defined using kernels and cokernels. □ Example 5.28. The tensor product functor — (g>p A is right exact; it is exact iff A is flat. Similarly for the other tensor product functor. The horn functor Hoiiir(j4, —) is left exact; it is exact iff A is projective. Similarly for the other horn functor (here A should be injective for exactness), note however that this depends on writing the contravariant one as Mod^p —> Ab (and not as ModR Abop). Lemma 5.29. Additive functors preserve biproducts. Equivalently, additive functors preserve exactness of split short exact sequences. 27 6. Abelian categories Proof. A biproduct is a diagram i 1 A:-C "-B p 3 satisfying Since F preserves composition, addition, identities and zeros, the same is true for the image under F. □ Thus, any additive functor is exact on certain exact sequences. Over a field, any additive functor is exact. As we saw, this should mean that it preserves certain q-iso's. Here is a precise claim. Lemma 5.30. Additive functors preserve chain homotopies. Proof. The proof is practically the same as for the previous lemma: A chain homotopy is given by some formulas and these are presereved by additive functors. □ Example 5.31. Consider a short exact sequence O^Z^Z^Z/2^0 and apply — (g> Z/2; this yields 0 Z/2 Z/2 Z/2 -> 0 that is clearly not exact. Thus, — (g> Z/2 is not exact and does not preserve q-iso's (since C —> 0 is a q-iso while FC —> 0 is not). In the next sections, our main aim will be to measure the non-exactness of an additive functor. We will see that in the second short exact sequence the zero on the left can be replaced by a continuation - a long exact sequence of derived functors. 6. Abelian categories This is my personal note. The most complicated are those properties that relate limits and colimits. The definition of an abelian category is that of a finitely bicomplete Ab-enriched category where image equals coimage, i.e. where epi and mono together imply iso. The main application is that 0 A B C 0 is exact iff / and g form the kernel-cokernel pair (/ kernel of g and g cokernel of /; by definition) iff / is mono and g is its cokernel iff g is epi and / is its kernel. We will now apply this to pullbacks and pushouts. A square A'^^A r f B< ^B 28 6. Abelian categories is a pullback square iff (we chose the minus sign with accordance to sign conventions for double complexes) 9 I . 
(h f) 0 A' + B' @A -> B is exact (form the construction of the pullback using products and equalizers). Thus, provided that (h f) is epi (i.e. h and / are jointly epi) this becomes a short exact sequence and by the dual argument, the square will also be a pushout! Now assume more concretely that / is epi. Since cokernels of parallel maps in any pushout agree, coker/' = coker/ = 0 and /' is also epi! Snake lemma: The connecting homomorphism is constructed using the pullback and pushout in the diagram 0- -4-A -> A -» ker 7 ->0 -4 A' 4 5' / p -40 coker a >- 47' -40 that then gives a map X —> Y. By the above, ker 7 is the cokernel of the map A —> X, so in order to get a factorization of X —> Y through ker 7, we need to show that the composition A —> X —> Y is zero, but this is obvious. Similarly the composition X —> Y —> C is zero and so X —> Y also factors through coker a; it is a simple matter to show that it then factors through both at the same time, i.e. it induces a unique map 8: ker 7 —> coker a. The exactness at ker 7 needs to be checked and is probably not too difficult, but it is completely straightforward using elements, self-duality of homology: A with gf = 0 induces the following diagram B C 0-> im /-> B-> coker /-> 0 p 0-> ker g-> B-> coim g-> 0 and the connecting homomorphism in snake lemma gives an isomorphism kerp = coker i, where coker i is the usual definition of homology, while kerp is the dual version. Theorem 6.1. Any small abelian category A admits an exact fully faithful embedding into ^Mod for some ring R. Sketch proof. The Yoneda embedding y: A^[Aop,Ab] 29 7. Derived functors lands in the subcategory C of left exact functors. One shows that this is an exact localization and that y as a functor A —> C is exact (and still fully faithful). In addition, C admits all small coproducts; denoting P = ^2^e^y(A) one shows that this is a projective object that admits an epi (joint epi would suffice) into the image of every A G A. Thus, the representable functor H.omc{P, -):£->• End(p)Mod is also exact (since P is projective) fully faithful (we need to show that Hom_4(A, B) —> HomEnd(p)(Hom(P, A), Hom(P, B)) is an iso; clearly if the image of a: A —» B is zero then a = 0 by applying to any epi p: P -» A; for surjectivity, consider again an epi p: P -» A with kernel K; further form /: P -» K >—> P; then the image of p on the rhs is some p: P —» B and the image of 0 = pf is 0 = pf = pf so that p factors through coker f = A, giving a preimage). We return to the exact localization C, i.e. the category of objects injective (automatically orthogonal) w.r.t. coker y(e) —> 0, coker y{m) —> y{C) for any s.e.s. 0 —> A B4C->0 (the second is probably not necessary, see Weibel). One produces the localization functor from the small object argument. One then shows that in 0 y(A) -+ y(B) ^> y(C) -+ coker y(e) -+ 0 the cokernel W = coker y{e) is weakly effaceable (i.e. for any A and any x £ WA there exists an epi e: P -» A such that y{e){x) = 0 G WP) and that the localization of any weakly effaceable functor is zero. □ 7. Derived functors Derived functors of F at A are defined by taking a projective resolution P —> A[0] then applying F and taking homology, i.e. LnF{A) = Hn{FP). The main technical problem to solve is showing the independence of the choice of a projective resolution (and then obviously proving basice properties). Classically, one shows that between any two projective resolutions, there exists a chain map P-------+<3 A[0] and that any two chain maps are chain homotopic (i.e. 
the map is unique up to chain ho-motopy). Applycation of F preserves this and taking homology makes the comparison map unique. There are, however, situation where projective resolutions do not exist, only some weaker version. In such situations, the existence of maps directly from P to Q cannot be expected. There is a weaker version of uniqueness (very much in the modern higher categorical sense), namely the category of (weakly) projective resolutions P —> A[0], i.e. Ch(Mod^)proj/A[0] is contractible (i.e. its classifying space is) from which we will only need that any two objects can be connected by a zig-zag of morphisms and any two such zig-zags can be connected by a zig-zag of zig-zags (this is just one dimensional triviality). The proof is 30 7. Derived functors not more difficult and one can find the classical approach in all books on homological algebra that I know of (I might later add a short summary). We will be working with a collection of objects P, called P-projective objects or P-projectives, that is required to satisfy • every object admits an epi from a P-projective and • it is closed under kernels of epis, i.e. for a ses 0—> A —> B —> C —> 0 with P-projective B and C, the same is true of A. Typically, P is also assumed to be closed under finite biproducts, but we will not need this assumption. We say that P is adopted to F if in addition to the above assumptions F is exact on ses's of P-projectives. By splicing, it is then exact on bounded below les's of P-projectives as well. We can then construct a P-projective resolution of any object A in the following way. Construct inductively ses's 0 Kn -+ Pn Kn_! 0, starting from K-2 = 0 and P_i = A, so that K-\ = A as well, in such a way that Pn G P for each n > 0 (it exists by the first point). Then splice these ses's to get a les • • • Pi P0 A 0. We will denote this P-projective resolution e: P —> A[0] (with the above augmented chain complex the mapping cone of this augmentation map e). We will now show (only partially1, as required for our exposition) that the category of P-projective resolutions of a fixed A is weakly contractible. First we present a relative version of the above construction: Let /: A —> B be a map and e: Q —> B[0] a resolution, not necessarily P-projective. Then we can complete the following diagram P^^A 0] 0] in such a way that if / is epi then so is p (I guess that p is epi from dimension 1 onwards regardless of /!!! In fact, take the pullback of e along /[0], which is an epi q-iso - see below -and thus we only need to consider the case / = 1). One proceeds exactly as above but using 1In general, let Pi —> A[0] be an I-diagram of P-projective resolutions and replace it by a fibrant diagram Pi in the model structure with pointwise cofibrations and weak equivalences (here the fibrations of chain complexes are not necessarily surjective, but acycclic fibrations are and that should be enough); take the limit of this diagram and pullback P-► lim P/ r i- A[0] —>limyl[0] This then forms a resolution P —> A[0] and one can then replace it by a P-resolution with the map P —> lim P/ corresponding to a cone P => P/ Pi that together with the natural transformation shows that the category of P-resolutions is weakly contractible - thus, one should in fact assume that P/ is itself a P-resolution, i.e. that V should be closed under products (finite should be sufficient as we can assume T to be finite in the sense that NX is (locally) finite). 31 7. Derived functors a pullback square 0- -±Lr. -fLr n—l -+0 -> K. 
n—l -+0 p« — n—l -+0 (this requires observation that kernels and pullbacks commute and also that a pullback of epi is epi). This applies in particular to the following situation: given two resolutions P' —> A[0] and P" —> A[0] we can form their pullback P- ■+P" P' ■ 4i[0] and since epi q-iso's are closed under pullbacks (the epi part we know, then one takes kernels, which agree and are acyclic) we see that all the maps in the square are such. Replacing P by a P-projective resolution P as above, we thus get a span of epis between P-projective resolutions P' <— P —> P". We will need a further level of dimension: Given two spans P and P between P' and P" as above (i.e. two epis P —> P and P —> P) we may form their pullback over P and get P- "J P->P- ■+P" _l P'-> A 0] and finally resolve P by a P-projective P to get a span between spans: Now we finally utilize the above to the definition and properties of derived functors. We assume that V is adopted to F. Given an epi tp: P —> P' between P-projective resolutions of A, the kernel ker tp is then an acyclic chain complex of P-projectives and as such remains acyclic upon applying F. Thus, the map FP —> FP' is still an (epi) q-iso. Applying homology 32 7. Derived functors then yields an iso H*FP —> H*FP'. We thus get a diagram of isomorphisms H*FP' H*FP H*FP —H*FP H*FP' so that we get a well defined (i.e. unique) comparison isomorphism H*FP' = H*FP". We may thus define L*F(A) = H*FP where P —> A[0] is any P-projective resolution and we get that any two possible such definitions are isomorphic in a canonical way, allowing us to talk e.g. about individual elements of L*F(A). Now given a ses 0 A^ B -> C 0 we take an arbitrary P-projective resolution R —> C[0], then construct a P-projective resolution Q —> B[0] together with an epi Q —» R and finally take kernels to get 0->P->Q->R-^0 0 —> A 0]- —>B 0]- —v r7f 0]-^0 By the properties of V we conclude that P consists of P-projectives and, by 5-lemma, the left vertical map is a q-iso, making it a P-projective resolution. Thus, upon applying H*F to the top row we get a les consisting of the left derived functors L*F(A), L*F(B), L*F(C). Finally, if A is itself P-projective then the augmented chain complex P —> A[0] remains exact upon applying F and, thus, LqF(A) = FA and LnF(A) = 0 for n > 0. In particular, we get that P is contained in the collection P = {A \ LnF(A) = 0 for all n > 0}. Since this class satisfies the properties, it is thus the maximal such class for a given functor F. Remark. We will now show independence of LnF of the class V'. Thus, let Q be another class and consider a Q-resolution Q —> A[0], further a P-resolution P —> A[0] etc. as in P'-)• A[0] Q' -)■ A[0] P-)■ A[0] Q-► A[0] Since both composites P' —> P and Q' —> Q are epi q-iso's between complexes of P-projectives or Q-projectives, they remain q-iso's upon applying F so that the middle map FQ' —> FP is a q-iso by the 2-out-of-6 property, proving L^F = F. The uniqueness of this isomorphism follows by comparing to the maximal class above PCP=QDQ that is independent by the mere existence of an isomorphism showing that the above comparison maps can be thought of as comparison maps for the class V = Q and are thus unique. 33 8. Balancing Tor and Ext Remark. One should also prove that each LnF is additive and I thought that this would require the class V to be closed under finite biproducts, and it seems so. Proposition 7.1. A right exact functor F is exact iffVn > 0: LnF = 0 iff L\F = 0. 8. 
Balancing Tor and Ext We define Tor^(A, B) = Ln(-®RB)(A). There is a second candidate, namely Ln(A(S)R — )(B). One we show that these are the same, we will know that these can be defined using flat resolutions (since flat modules are acyclic). A similar situation arises for Ext^(A, B) = Rn(HomR(A, —))(B) and the symmetric version obtained from the contravariant horn functor. We will show that in both cases, the two derived functors are canonically isomorphic. We will concentrate on the tensor products since these are both covariant and thus easier. The two derived functors are obtained as homology of chain complexes P (g>p B and A (g>p Q where P —> A[0] and Q —> B[0] are projective resolutions. It thus seems more than logical to compare these using a span P • A 0 Q. The question is what this P (g>p Q should be. We can draw a diagram where we write only (g> for simplicity • • • <-Pp-l d will not be exactly one's first guess, resulting in the squares anti-commuting rather than commuting, i.e. dhdv = —dvdh. We will now make this kind of structure formal. Definition 8.1. A double (chain) complex is a diagram of modules Dp,q and homomorphisms dh: DM ->• Dp-ltq, dv: DM ->• DM-i that satisfy (dh + dv)2 = 0, i.e. dhdh = 0, dvdv = 0, dhdv + dvdh = 0. This way of presenting the axioms suggests the following definition. Definition 8.2. For a double complex D, the total complex Tot+ D is a chain complex with (Tot+D)n= ^2 dp« p+q=n and with differential d = dh + d?. There is another version of the total complex Totx D where the sum is replaced by the product. 34 8. Balancing Tor and Ext The two versions are useful for different applications, the first is related to left derived functors and the second for right derived functors. Since we concentrate on the tensor product case, we will stick to the sum version and will denote it simply by TotD. In order to make the tensor product into an example, we have to introduce signs. We define {f®g){x®y) = {-l)M-Mfx®gy where \g\ = \gx\ — \x\ is the degree of g (we will treat this more formally later). Thus the identity has degree |1| =0, while the differential has degree \d\ = —1. This gives as particular cases dh(x (g> y) = (d (g> l)(x (g> y) = dx (g> y, dv(x (g> y) = (1 (g> d)(x (g> y) = (-l)^x (g> dy and the squares clearly anti-commute with this notation (one can also prove this formally by first verifying (f®g)(h®k) = ( — l)^'^fh®gk and then using this to compute ( Q to denote the total complex Tot P g> Q of this double complex. We will now introduce two very useful special cases of the tensor product construction. The motivation comes from topology, where C(X xY) = C(X) g> C(Y) (also for Koszul sign convention), at least when one deals with cellular complexes where products of cells are cells. In this way one obtains the cylinder of C by tensoring with the chain complex of the interval: cylC = Tot(cylP[0] ® C) since R[0] is interpreted as a point and then the cylinder on the point is the interval; it remains to specify this interval: cylP[0] =---->0^ R-±+ R®R Denoting the 1-dimensional generator by e (edge) and the 0-dimensional generators by v-, u+ (initial and terminal vertices), we define de = u+ — v-. Exercise 8.3. Prove that chain maps cylC —> D are in bijection with triples (/, g, h) where / and g are chain maps C —> D and h is a chain homotopy / ~ g. 
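To see concretely how the Koszul sign makes the squares of Definition 8.1 anti-commute and the total differential square to zero, here is a short check on a simple tensor (a sketch added for illustration; it only uses $d^h = d \otimes 1$ and $d^v = 1 \otimes d$ with the sign convention above):
$$d(x \otimes y) = dx \otimes y + (-1)^{|x|}\, x \otimes dy,$$
$$d^2(x \otimes y) = \underbrace{d^2x \otimes y}_{=0} + (-1)^{|x|-1}\, dx \otimes dy + (-1)^{|x|}\, dx \otimes dy + \underbrace{x \otimes d^2y}_{=0} = 0,$$
the two middle terms being precisely the anti-commuting contributions $d^v d^h(x \otimes y)$ and $d^h d^v(x \otimes y)$.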
In topology, one can recover the two involved maps from a homotopy (as restrictions to the two ends of the cylinder), while in homological algebra, this is not the case - the best one can get is the difference g — f = dh + hd. Another example that we will need is the cone. We define similarly coneC = Tot(coneP[0] C) where again the chain complex cone R[0] =----> 0 R -A R has differential de = v, i.e. d = 1. We will now draw a picture of the double complex 35 8. Balancing Tor and Ext conei?[0] (g> C and a simpler realization thereof: Rv (g> Ci Re (g> Ci l(gid ife (g> C0 <-Re (g> C0 Ci^—Ci Co [— Co where the minus sign comes from (1 (g> d)(e (g> a;) = ( —l)'e'e (g> da; = e (g> (—da;), since \e\ = 1. Clearly the zeroth column form a subcomplex (since R[0] C conei?[0] and then one applies the tensor product), so coneC. The quotient is the first column, i.e. the chain complex C[l] - called the suspension of C - is just C shifted by one dimension up and with opposite differential (again, the quotient of R[0] ^ conei?[0] is R[l] and C[l] = Tot R[l] (g> C). What comes now is a concrete description of the pushout C- D -> cone C ■+ cone / (with horizontal cokernels C[l], see below) that results from replacing the subcomplex C C coneC by D via /. It is the total complex of the double complex Dn- d ■Ci ■C0 Similarly to the case of coneC, we get a subcomplex and a quotient, forming a short exact sequence 0 D -> cone / C[l] 0 Exercise 8.4. Verify that the connecting homomorphism in the homology long exact sequence is the map Hn+i{C\l\) = Hn{C) —> Hn{D) induced by /. Conclude that / is a q-iso iff cone/ is acyclic. Apply to the augmentation map. 36 8. Balancing Tor and Ext Proposition 8.5. Let D be a first quadrant double complex, i.e. such that Dp,q = 0 whenever p < 0 or q < 0. If D has exact columns, i.e. if for each p the chain complex (DPt.,dv) is acyclic, then Tot D is acyclic. Dually, the same conclusion holds for first quadrant double complexes with exact rows. Remark. This version works for right halfplane double complexes (or upper halfplane complexes in the second case), but the Totx-version requires this stronger assumption, I think. Proof. Denote = D. As above, the zeroth column forms a subcomplex L>o,« with quotient D^l\ obtained by removing the zeroth column. Continuing this way, we obtain short exact sequences 0 -4 DPi. -4 Tot -4 Tot Z)(p+1) -4 0 which shows that the natural projection maps Tot D = Tot L>(0) -4 Tot D{1) -4 • • • are all q-iso's. Since TotL>(n+1) is concentrated in dimensions > n + 1, it has zero Hn and thus the same is true for Tot D. □ Now consider the double complex obtained as a tensor product of the augmented chain complex P and the chain complex Q, i.e. A Qo <---- This has exact rows since these are obtained by tensoring the augmented chain complex P with a projective Qq. Thus, the total complex is acyclic. Since it is (up to suspension) the cone of the map e ® 1: P ® Q —> A® Q, this map is a q-iso. Symetrically, the map 1 (8> £: P ® Q —> P ® B \s also a q-iso and we obtain the following theorem: Theorem 8.6 (balancing of Tor). There exists a natural isomorphism Ln(- (g> B)(A) = Ln(A (g) —){B) between the derived functors of the tensor product functor. In fact, one can see easily that for a right exact bifunctor F, we only need that F(P, —) should be exact for any projective P as well as F( — , Q) for any projective Q and we obtain LnF(A,-)(B) LnF(-,B)(A) We will now shortly comment on the derived functors of the horn functor. 
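Before turning to the Hom functors, here is a quick illustration of Theorem 8.6 in the simplest nontrivial case (a sketch over $R = \mathbb{Z}$ with $A = \mathbb{Z}/m$, $B = \mathbb{Z}/n$; these particular modules are chosen only for illustration). Resolving the first variable by $P\colon 0 \to \mathbb{Z} \xrightarrow{\ m\ } \mathbb{Z} \to 0$ and tensoring with $\mathbb{Z}/n$ gives the complex $0 \to \mathbb{Z}/n \xrightarrow{\ m\ } \mathbb{Z}/n \to 0$, so
$$\operatorname{Tor}^{\mathbb{Z}}_0(\mathbb{Z}/m, \mathbb{Z}/n) = \operatorname{coker}(m) \cong \mathbb{Z}/\gcd(m,n), \qquad \operatorname{Tor}^{\mathbb{Z}}_1(\mathbb{Z}/m, \mathbb{Z}/n) = \ker(m) \cong \mathbb{Z}/\gcd(m,n),$$
and resolving the second variable $\mathbb{Z}/n$ instead gives the same groups with the roles of $m$ and $n$ exchanged, as the balancing theorem predicts.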
The covariant one is easier, so we start with this: i?n(Hom(A,-))(B) = Hn(Bom(A, I)) 37 8. Balancing Tor and Ext where B[0] —> I is an injective resolution. The situation of the other horn functor is exactly the same when interpreted in the opposite category; translating to the ordinary category of modules, we get Pn(Hom(-,P))(A) = Hn(Rom{P, B)) where P —> A[0] is a projective resolution. Again, we can form a double cochain complex Hom(P, /) and its total complex Totx Hom(P, /) that admits a cospan Hom(P, B) -+ Totx Hom(P, I) <- Hom(A, I) with both maps q-iso's by an analogous argument. Theorem 8.7 (balancing of Ext). There exists a natural isomorphism Pn(Hom(A,-){B) = Pn(Hom(-, B){A) between the derived functors of the horn functor. We will now study horn complexes from a different perspective - related, but it may be easier to forget about what we did up to now. So let C and D be chain complexes and construct a chain complex Hom(C, D) with the aim of giving the category of chain complexes the closed symmetric monoidal structure. Symmetry is perhaps worth mentioning first, since it is given by a (not so much now) surprising isomorphism B (g> C -=-» C (g> B, x (g> y i-> (-l)\x\'\y\y (g> x. Now our goal is the adjointness B®C ^ D B -+ Hom(C, D) We will first study this on the level of the underlying graded modules (i.e. ignore the differentials). This becomes Bn EL Hom(Cfc, Dn+k) so we want to endow the graded module Hom(C, D)n = \\k Hom(Cfc, Dn+k) with a differential so that the map at the top is a chain map iff the map at the bottom is. The differential will be derived from the requirement that the counit is a chain map, by observing that the counit is (as usual) the evaluation map ev: Hom(C,P>) ®C -> D, f®c^fc. We will denote the differential on the horn complex by D and we thus require dev = ev(P> (g> 1 + 1 (g> d), that by applying to / (g> c amounts to d(fc) = (Df)c + (-l)Mf(dc). We can thus write Df = df — ( — l)\f\fd = [d, f] (the graded commutator). It is rather straightforward that this indeed makes Ch(Mod^) into a closed symmetric monoidal category. 38 9. Ext and extensions The O-chains of Hom(C, D) are by the construction not-necessarily-chain maps f:C—>D. The 0-cycles are those that satisfy Df = 0, i.e. df-fd = 0 and these are exactly the chain maps.2 (Chain maps of degree n are defined as n-cycles, i.e. they are required to satisfy df = { — l)nfd.) For two chain maps /, g, i.e. 0-cycles, we have [/] = [g] in F0(Hom(C, D)) iff g - f e B0(Rom(C, D)) iff there exists h G Hom(C, D)± with Dh = g — f; this means dh + hd = g — f and this h is a chain homotopy from / to g. As a result H0(Hom(C,D)) = [C,D] the group of chain homotopy classes of chain maps. This is another explanation of chain homotopy (we had definition, then as a map from the cylinder, now as a homology relation in the horn complex). 9. Ext and extensions We will now apply the derived functor Ext1 to study extensions of modules, i.e. short exact sequences B ^ X ^ A^O We start with a simple question: When does the sequence split? By applying Hom(A, —) we obtain an exact sequence 0 Hom(A, B) -+ Hom(A, X) -+ Hom(A, A) Ext1(A, B) -+ Ext1^, X) -+ ■ ■ ■ Clearly, £ admits a splitting iff 1 £ Hom(A, A) lies in the image (more precisely, any preimage is such a splitting A —> X) iff d{l) = 0. We define 0(f) =d{l) G Ext\A,B) and we just observed that this is the (unique) obstruction to the existence of a splitting. Lemma 9.1. f splits iff 0(f) = 0. In particular, it splits when Ext1(A, B) =0. 
□ Naturality of the class 9 with respect to maps of ses's should be rather clear: we need f: 0->B->X-> A-^0 C: 0->B->Y-> A-^0 2This corresponds to the fact that the unit is R[0] and maps out of R[0] are exactly the 0-cycles. 39 9. Ext and extensions (the map X —> Y is then necessarily an iso by 5-lemma) so that the portions of long exact sequences of derive functors Hom(A, A) —Ext1 (A, B) Hom(A, A) —Ext1 (A, B) have both vertical maps identities - they are induced by maps in the transformation above so we require these to be identities. We will then say that the extensions £ and Q are isomorphic and we see that then 0{£) = 9{Q). We have just defined, for fixed A and B, a map 9: {extensions O^B^X^A^ 0}/iso ->• Ext1 (A, B). Theorem 9.2. The above map 6 is bijective. Proof. We first produce a map in the opposite direction. Let B —> I be an embedding of B into an injective module and let C be the cokernel so that we have a ses (-.O^B^I^C^O. The les of Ext (A, —) then gives ----> Hom(A, I) ^ Hom(A, C) -+ Ext1 (A, B) -+ Ext1 (A, I) -+ ■ ■ ■ (since / is injective). Thus, for any a 6 Ext1(A, B) there exists a preimage /': A —» C and any other preimage is of the form / + pg for g: A —> I. We form a pullback of the ses above along / and obtain C: Concretely Xf = {(x, a) \ p{x) = /(a)} and we thus have an isomorphism Xf Xf+pg, (x,a) ^ (x + g(a),a) that respects the inclusion of B and projection onto A so this is in fact an isomorphism of extensions = if+pg- This finishes the construction of the inverse mapping, we need to verify that these are indeed inverse to each other. We thus study the obstruction #(£/) of the obstruction above. Again, the transformation of ses's gives a transformation between the sequences of derived functors Hom(A, A) —^ Ext1 (A, B) Hom(A, C) —^ Ext1 (A, B) 40 9. Ext and extensions and this means precisely = d(l) = df*(l) = d(f) = a by construction (/ was chosen as a preimage of a). It remains to show that the constructed inverse mapping is surjective, i.e. that every extension is obtained as a pullback from Q: C: Start by extending the map i into the injective / along the inclusion B —> X, as suggested in the diagram. We then complete the diagram by a map /: A —» C that is just the induced maps on cokernels. Since both X —> A and p are epi and the induced map on kernels is an iso, Proposition 5.15 yields that the square is indeed a pullback. □ Example 9.3. Since Ext1(Z/m, Z/n) = 0 if gcd(m,n) = 1, every ses 0 ->• Z/n ->• X ->• Z/m ->• 0 splits. We will prove later a more general result for non-commutative groups. Since Ext as a derived functor is additive in both variables, we get Ext1(P, A) = 0 for any finite abelian groups of coprime orders (split into a direct sum and apply the above). Interestingly, since Ext1 (A, 5) is an abelian group, the same must be true for the set of extensions up to isomorphism. It is instructive to transport the addition along the above isomorphism. The result looks as follows. Take two extensions and consider their biproduct 0 B®B ^ X ®Y -+ A® A 0. Now take the pullback as in the above proof along the diagonal A —» A ® A to obtain an extension of A by B ® B. Perform the dual construction, i.e. form the pushout along the codiagonal B ® B —» B (i.e. the addition) and finally obtain an extension of A by B; this is the sum of the original extensions. Remark. 
The higher groups Extn(A, B) are in bijection with classes of longer extensions 0 ^ B ^ Xn^---->Xi^A^0 modulo an equivalence relation generated by not necessarily invertible transformations 0-v B-> Xn->----> Xx-f A-> 0 0- There is a way of explaining this in a more natural way as follows. Consider the middle part as a chain complex X concentrated in dimension 1 through n and rewrite the exact sequence as 0 B[n] -> X -> A[l] -> 0, a ses of chain complexes. The natural transformation above becomes 0- ->5 n\ ^A\ -+0 n\ ■+0 41 10. Homological dimension and the middle map is a q-iso by 5-lemma. To make this comparison complete, one should show that chain complexes that are not concentrated in dimensions 1 through n can be truncated to the latter (easy). 10. Homological dimension We have just shown that Ext1(—, —) = 0 iff every ses splits and we know that vanishing of Ext1 (A, —) is equivalent to A being projective so Ext1(—, —) = 0 is also equivalent to every module being projective and dually also to every module being injective. We will now study higher dimensional analogues of such statements. Definition 10.1. A projective dimension of a module A, denoted pd(A), is defined to be the length of the shortest projective resolution of A, i.e. pd(A) < n iff A admits a projective resolution ----► 0-> Pn->----► P0 A There are similar notions of a flat dimension and injective dimension (the second using injective coresolutions): A 1°-►----► In-► 0-► • • • Lemma 10.2. TFAE 1. pd(A) < n, 2. in any exact sequence 0 Mn -+ Pn_i ----> P0 A -+ 0 with Pi projective, also Mn is projective, 3. Extn+1(A,-) = 0. Proof. The implications 2 1 3 are trivial. Thus, let Extn+1(A, —) = 0 and consider an exact sequence 0 Mn -+ Pn_i ----> P0 ^ A -+ 0. Denoting Mq = A, we split it into ses's 0 Mfc+i Pfc ^ Mk ^ 0. Applying Ext(— ,B) yields, by projectivity of P& the following isomorphisms for i > 1: Extm(Pfc,P) Exti+1(Mfc, B) Ext^Mfc+i, B) 1° ->----> I*'1 -> Md -> 0 and conclude 0 = Extd+1(R/J, B) ^ Ext1(i?/J, Mn), i.e. Hom(i?, Mn) -» Horn (J, Mn) and Mn is injective by Baer criterion. Definition 10.4. The number from the previous corollary is called the global dimension of R and denoted gl. dim(i?). Similarly, one can prove that the supremum of flat dimensions of modules does not depend on the side and equals the smallest n for which Torn+1 = 0, called Tor. dim(i?). Example 10.5. A ring R has global dimension 0 iff Ext1 = 0 iff every module is projective iff every module is injective. A ring R has global dimension 1 iff Ext2 = 0 iff in every ses 0—> M —> P —> A —> 0, the module M is projective; since A could be arbitrary, e.g. the cokernel of an arbitrary inclusion M —> P, this is equivalent to a submodule of a projective module being projective. Dually, this is equivalent to a quotient of an injective module being injective. Any PID has global dimension 1: It can be proved by induction on n that a submodule of Rn is free of rank < n, starting from n = 1 where this is just the definition of a PID. Theorem 10.6 (Hilbert on szyzygies). gl. dim Ik [a; i,..., xn] = n. More generally, if'gl. dimi? = d then gl. dimi?[x] = d + 1. The full strength of the Hilbert theorem on szyzygies gives part 2 of Lemma 10.2 with projective replaced by free. Theorem 10.7 (Kunneth). Assume that Tor.dimi? < 1, i.e. that every submodule of a flat module is flat. Let C be a chain complex of flat modules and A and module. Then there exists a natural ses (unnaturally split) 0 Hn{C) ®A^ Hn(C ®A)^ Ton(Fn_i(C), A) -+ 0. Proof. 
Apply Tor(—, A) to the ses 0 —> Zn —> Cn —> Bn-i —> 0 of flat modules to obtain a ses of chain complexes 0^ Z (B[l] 0 A)n+1 (Z 0 A)n -+ Hn(C ®A)^ (B[l] 0 A)n (Z 0 A)n_i 43 11. Group cohomology The connecting homomorphism dn is the canonical map i (g> 1: Bn® A ->• Zn® A that fits into a les of Tor(—, A) applied to 0 —> Bn —> Zn —> Hn —> 0, yielding (recalling that Zn must be flat) 0 ->• Tor(Fn, A) ->• Bn A Zn A ->• Hn A ->• 0. In other words, coker9n = Hn (g> A and ker9n_i = Tor(i7n_i, A). Thus, one may replace the les above by a ses with Hn{C (g> A) in the middle, surrounded by the cokernel and the kernel. □ 11. Group cohomology This is a particular derived functor for modules over the group ring ZG for a group G. It is a free abelian group on the set G, i.e. its elements are formal Z-linear combinations of elements of the group G, say ■ g with only finitely many nonzero coefficients ag G Z. The multiplication is extended Z-linearly from the multiplication in G, i.e. (Y] ah-h) ■ h-k) = ^2(ahbk) ■ hk = ^( ^ ahbk) ■ g. hk=g More abstractly, the free abelian group functor turns finite products into finite tensor products, Z(X x Y) = TLX (g> Zy, i.e. it is strongly monoidal. Thus, the multiplication in G induces ZG®ZG^ Z(G xG) —> ZG and similarly for the unit (which is then just the element 1 £ G interpreted as an element of ZG). A ZG-module is then equivalently an abelian group M together with an action of G via homomorphisms of groups, i.e. a ■ (x + y) = a ■ x + a ■ y. This is easily seen to be so by interpreting the module structure as a ring homomorphism ZG —> End^(M) and by the freeness of ZG, this is induced uniquely by a group homomorphism G —> Autz(M). Another point of view is that this is a functor G —> Ab hitting M (or in the first interpretation an Ab-enriched functor ZG —> Ab). Example 11.1. The symmetry group Sn acts on y®n by permuting the vectors in the tensor product. An important construction is that of the invariants (y®n)Sn, i.e. the submodule of tensors that are invariant under the action, i.e. such that t- a = t (since it is naturally a right action, i.e. a right ZG-module). Another related construction are the coinvariants (y®n)sn, i.e. the quotient by the congruence generated by t ■ a ~ t. When chark = 0, these are two equivalent definitions of the n-th symmetric power SnV. Definition 11.2. The invariants of a ZG-module M is the submodule MG = {x G M | Va G G: a ■ x = x}. The coinvariants of a ZG-module M is the quotient module Mq = M/(a ■ x ~ x \ a £ G, xG M). 44 11. Group cohomology The first is the limit of the diagram g£m the second is the colimit of the same diagram. There is another intrepretation of the same, using the trivial ZG-module Z. In general any abelian group admits a trivial action where a ■ x = x for any a G g. Lemma 11.3. mg = HomZG(Z, m) and mg = m ®ZG Z (for a right zg-module m). Proof. The point is that Z = (ZG)G, since the congruence identifies exactly the generators of ZG. Thus, _/: Z ->• m_ /: (ZG)/(a-x ~ x) —>■ M f:ZG^M such that /(a • g) = /(g) m G M such that a ■ m = m TU- m G M and similarly M(g>ZGZ = M(g>ZGZG/(a-x ~ x) = (M®iGZG)/(m®a-x ~ m(g>g) = M/(a-m ~ m) = MG. Perhaps, it is better to relate it to Homz and □ Definition 11.4. The n-th group homology with coefficients in a ZG-module M is Hn(G;M) = Ln(-)G(M) = Tor^G(M,Z) or Tor^G(Z,M). The n-th group cohomology with coefficients in a ZG-module M is Hn(G; M) = Rn(-)G(M) = Ext^G(Z, M). We will study these via a projective resolution of Z G ModZG. Example 11.5. 
Denote by $C_k$ the cyclic group of order $k$ written multiplicatively (i.e. it is $\mathbb{Z}/k$, but that is usually written additively), with elements $1, t, \ldots, t^{k-1}$ and with $t^k = 1$. A projective resolution was constructed in the tutorial, where the norm $N = t^{k-1} + \cdots + t + 1$ denotes the sum of all the elements of the group:
$$\cdots \xrightarrow{\ t-1\ } \mathbb{Z}C_k \xrightarrow{\ N\ } \mathbb{Z}C_k \xrightarrow{\ t-1\ } \mathbb{Z}C_k \xrightarrow{\ \varepsilon\ } \mathbb{Z}, \qquad \varepsilon(g) = 1.$$
Now compute the homology of $C_k$ with coefficients $\mathbb{Z}$, i.e. apply $- \otimes_{\mathbb{Z}C_k} \mathbb{Z}$:
$$\cdots \xrightarrow{\ 0\ } \mathbb{Z} \xrightarrow{\ k\ } \mathbb{Z} \xrightarrow{\ 0\ } \mathbb{Z}$$
and take homology to obtain
$$H_n(C_k; \mathbb{Z}) = \begin{cases} \mathbb{Z} & n = 0, \\ \mathbb{Z}/k & n = 1, 3, 5, \ldots, \\ 0 & n = 2, 4, 6, \ldots \end{cases}$$

Example 11.6. Denote by $C_\infty$ the infinite cyclic group with elements the powers $t^k$ of the generator $t$. The group ring $\mathbb{Z}C_\infty$ is then the ring of Laurent polynomials. A projective resolution was constructed in the tutorial:
$$\cdots \to 0 \to \mathbb{Z}C_\infty \xrightarrow{\ t-1\ } \mathbb{Z}C_\infty \xrightarrow{\ \varepsilon\ } \mathbb{Z}.$$
Now compute the homology of $C_\infty$ with coefficients $\mathbb{Z}$, i.e. apply $- \otimes_{\mathbb{Z}C_\infty} \mathbb{Z}$:
$$\cdots \to 0 \to \mathbb{Z} \xrightarrow{\ 0\ } \mathbb{Z}$$
and take homology to obtain $H_n(C_\infty; \mathbb{Z}) = \mathbb{Z}$ for $n = 0, 1$ and $H_n(C_\infty; \mathbb{Z}) = 0$ for $n \geq 2$.

Remark. It can be shown that $H_n(G; \mathbb{Z}) = H_n(BG; \mathbb{Z})$ equals the singular homology of the classifying space $BG = K(G, 1)$. Thus, despite $G = C_k$ being finite, the classifying space $BC_k$ is infinite dimensional (and also $\mathbb{Z}C_k$ has infinite global dimension). On the other hand $BC_\infty \simeq S^1$ is homotopy equivalent to a circle.

We will now construct a general projective resolution of $\mathbb{Z} \in \mathrm{Mod}_{\mathbb{Z}G}$, the so called bar resolution. It has two versions - reduced and unreduced. We start with the second.

Definition 11.7. The unreduced bar resolution is the chain complex $B^u$ with chains
$$B^u_n = \mathbb{Z}G(G \times \cdots \times G) = \mathbb{Z}(G \times (G \times \cdots \times G)),$$
where we denote the $\mathbb{Z}G$-generators as $[g_1 \otimes \cdots \otimes g_n]$ and thus the $\mathbb{Z}$-generators as $g[g_1 \otimes \cdots \otimes g_n]$. The differential in this complex is $d = \sum (-1)^i d_i$ for the $\mathbb{Z}G$-linear operators
$$d_0[g_1 \otimes \cdots \otimes g_n] = g_1 \cdot [g_2 \otimes \cdots \otimes g_n], \qquad d_i[g_1 \otimes \cdots \otimes g_n] = [g_1 \otimes \cdots \otimes g_i g_{i+1} \otimes \cdots \otimes g_n], \qquad d_n[g_1 \otimes \cdots \otimes g_n] = [g_1 \otimes \cdots \otimes g_{n-1}],$$
and with augmentation $\varepsilon\colon B^u_0 \to \mathbb{Z}$, $\varepsilon[\,] = 1$. The reduced bar resolution is the quotient $B$ of $B^u$ by the subcomplex spanned by the generators $[g_1 \otimes \cdots \otimes 1 \otimes \cdots \otimes g_n]$ containing $1$ somewhere. That $B^u \to \mathbb{Z}[0]$ is indeed an augmented chain complex was done in the tutorial.

We will now show that the generators that contain $1$ somewhere span a subcomplex. In the expression for $d[g_1 \otimes \cdots \otimes 1 \otimes \cdots \otimes g_n]$ with $1$ at position $i$, all terms contain this very same $1$ except for the contributions $d_{i-1}$ and $d_i$ that give
$$(-1)^{i-1} \cdot [g_1 \otimes \cdots \otimes g_{i-1} \cdot 1 \otimes \cdots \otimes g_n] + (-1)^i \cdot [g_1 \otimes \cdots \otimes 1 \cdot g_{i+1} \otimes \cdots \otimes g_n] = 0$$
(the same generator with opposite signs).

We will now show that $B^u \to \mathbb{Z}[0]$ is a q-iso in $\mathrm{Ch}(\mathrm{Mod}_{\mathbb{Z}G})$, or equivalently in $\mathrm{Ch}(\mathrm{Ab})$, since the homology is computed the same way in $\mathrm{Mod}_{\mathbb{Z}G}$ and $\mathrm{Ab}$. We will prove this by showing that the augmented chain complex is chain homotopy equivalent to the zero complex in $\mathrm{Ch}(\mathrm{Ab})$, i.e. that it admits a contraction $h\colon 0 \simeq 1$, $dh + hd = 1$. Since the chain homotopy will only be $\mathbb{Z}$-linear, we define it on the $\mathbb{Z}$-generators by setting
$$h(g \cdot [g_1 \otimes \cdots \otimes g_n]) = [g \otimes g_1 \otimes \cdots \otimes g_n].$$
Easily $d_{i+1} h = h d_i$ and, thus, all terms in $dh + hd$ cancel out with the exception of $d_0 h = 1$. We need to treat separately the cases involving the augmentation:
$$(dh + h\varepsilon)(g \cdot [\,]) = d[g] + h1 = g \cdot [\,] - [\,] + h1,$$
so that we need to set $h1 = [\,]$. Finally $(\varepsilon h + h0)1 = \varepsilon[\,] = 1$. The same formula works for the reduced version. □

Now we study examples. Obviously $H_0(G; \mathbb{Z}) = \mathbb{Z}_G = \mathbb{Z}$ and this corresponds to $H_0(BG; \mathbb{Z}) = \mathbb{Z}$ since $BG$ is always connected. We proceed to $H_1$, so we write out explicitly the lower dimensions of the unreduced bar resolution
$$B^u = \cdots \to \mathbb{Z}G\{[g \otimes h]\} \to \mathbb{Z}G\{[g]\} \to \mathbb{Z}G\{[\,]\}.$$
The coinvariants ( —)g = ^ ®zg ~ then replace the free ZG-modules by the corresponding free Z-modules, i.e. Z®ZGBu = ---^Z{[g®h}}^ Z{[g}} ^ Z{[]}. We will now compute the first homology of this complex, i.e. H\{G\Z). The differential in the original bar resolution takes d[g] = g\\ — [] and after quotienting out the action this becomes zero. Going up by one dimension d[g (g> h] = g[h] — [gh] + [5] becomes on coinvariants ---->Z{[g®h}}->Z{[5]}-°-->Z{[}} [5 ® h] 1-► [h] - [gh] + [5] Altogether we get H1(G;Z)=Z{[g]}/([gh}~[g} + [h}) 47 11. Group cohomology the free abelian group generated by the elements of the group with addition forced to equal the original group multiplication. This is easily seen to give the abelianization Gab of G, e.g. by its universal property g G-► A T A / / \ / [9] Gab This corresponds to the fact that for a path connected space X we have H\(X\Z) = 7ri(X)at, and th{BG) = tvi{K{G, 1)) = G. Exercise 11.9. Show that M) = Der(ZG; M)/ PDer(ZG; M) using the concrete de- scription of the cochain complex Homz(j(Z, M) below. Here a derivation of a ring G with coefficients in an i?-i?-bimodule M is a group homomorphism D: R —» M satisfying the Leibniz rule D{r ■ s) = Dr • s + r • Ds. A principal derivation is one of the form Dx(r) = rx — xr for some x £ M. In the case R = 7LG and a left ZG-module M made into a right ZG-module trivially (as above), the formulas become D(g ■ h) = Dg + g ■ Dh, Dx(g) = gx - x. We will now study certain extensions of groups very closely related to H2(G; M). We will restrict to certain extensions (to be specified in a minute via a certain action) 1 M A X G -+ 1 where we write all groups multiplicatively and assume M commutative (later on, we will rewrite M additively, but at this point it would seem rather confusing). There is an action of X on M by conjugation (since it is the kernel of p and thus a normal subgroup: X —> Aut(M), x 1—> (m 1—> xmx^1 = xm) and by commutativity, the restriction to M is trivial and thus this action factors through X/M = G and we denote the action by the power on the left as above, i.e. am. Definition 11.10. Let M be a ZG-module. An extension 1 M A X G -+ 1 is to be understood as an extension of groups with M commutative and such that the given G-action agrees with the conjugation action coming from the extension. Our aim will be to classify the extensions for a fixed G £ Grp and M £ icMod. We will now choose a based section of p, i.e. a mapping a: G —> X satisfying p{a(a)) = a and p(l) = 1 (i.e. thinking of G = X/M it is a mapping that picks a representative in each class and picks 1 £ 1M). Now we may rewrite the conjugation action as am = u(a) ■ m ■ a(a)~1. 48 11. Group cohomology If a happens to be a homomorphism then the extension is split and we will see that it is then isomorphic to the so called semidirect product M x G. We will now construct the so called factor set that is an obstruction to a being a homomorphism: [a, b] = u(a) ■ a(b) ■ a(ab)^1. By the based property, we get [a, 1] = 1 = [1, b] and we say again that the factor set is based. We will now explain the importance of the factor set: Given an extension and a based section, the mapping M x G —> X, (m, a) m ■ a (a) is a bijection with inverse x \—> (x • a(p(x))~1,p(x)). 
We may thus transport the group structure from X to M x G and obtain an isomorphic group in a somewhat "canonical form" that will allow us to compare two extensions: We first compute the product of the images of (m, a) and (n,b) inside X: m ■ a{a) ■ n ■ a{b) = m ■an ■ a{a) ■ ZG{[a | b | c]} ZG{[a | b}} -> ZG{[a}} -> ZG{[}} HomZG(£, M) = •••<— Mapb(G3, M) <- Mapb(G2, M) <- Mapb(G1, M) <- Mapb(G°, M) where Mapb denotes the set of all "based" mappings, i.e. those whose value is 1 whenever one of the arguments is 1. Thus, the factor set is a 2-cochain. The differential of a 2-cochain is (remember that we write the group M multiplicatively and the action as a power) (5(f) (a, b, c) = a(f(b, c) ■ (f(ab, c)_1 • if (a, be) ■ if (a, The equation from the lemma claims exactly this. Any other based section differs by a'(a) = /3(a) • a(a) for some based mapping (3: G —> M and we compute the corresponding factor set: Lemma 11.12. [a,b]' = [a,b] ■ (S/3)(a,b), i.e. the two factor sets differ by a 2-coboundary. Proof. This is again a simple computation: [a, b}' = a'(a)a'(b)a'(ab)-1 = I3(a)a(a)l3(b)a(b)a(ab)-1 ^(ab)-1 = 13(a) -a(3(b) ■ [a,b] ■ y9(a6)_1 with all factors in the commutative group M and with (5(3)(a, b) = a(3(b) ■ /3(a6)_1 • (3(a). □ We may thus summarize this technical part by stating that there is a well defined 2-cohomology class associated with an extension, called the factor set [—, —] £ H2(G; M). Theorem 11.13. The mapping {extensions 0 M -> X -> 0}/iso H2(G; M), associating to an extension its factor set, is bijective. Proof. We first show that the mapping is injective. Given two extensions X and X' with factor sets [—, —] and [—, — ]' that are cohomologous, i.e. differ by a coboundary 6(3, one can change the based section of X by (3 to obtain a new based section with corresponding factor set [—, — ]'. Now the construction above gives isomorphisms X = M X[_ G = X'. To prove surjectivity, let tp be a 2-cocycle and consider the group M x^G. If we equip it with the obvious based section a(a) = (1, a) then the corresponding factor set will be [a, b] = a(a)a(b)a(ab)^1 = (l,a)-(l,b)-(l,ab)-1 = ((f(a, b), ab) ■ (1, ab)^1 = ( 0, i.e. the order of any element divides k. Proof. We will show that the multiplication by k map on B is homotopic to the map that is zero in all dimensions except dimension 0 where it is multiplication by N = ^2qeQg- Bo -+Bn N Bo -*Bn We define h[gi \ ■ ■ ■ \ 9n] = {-i)n+1 - Y.^i \ ■ ■ ■ \ 9n \ g]. geG Clearly dih = —hdi (thanks to the above alternating sign) so that everything in dh + hd cancels out except dn+1%1 I • • • I On] = ^[Oi \ ■ ■ ■ \ g-n] = k ■ [gx \ ■ ■ ■ \ gn] geG so that dh + hd = k as claimed except in dimension 0 where (dh + hd)[] = d(- 5>]) = £D - g[] = k[] - N[]. geG geG Now apply either M ®zg ~ or Hom^G(— > M) to obtain a chain homotopy between the corresponding maps on the resulting chain complexes. In (co)homology the maps become equal. □ Corollary 11.15. Let G and M be finite with gcd(|G|, |M|) = 1. Then Hn(G;M) = 0 and Hn(G\ M) = 0 for n > 0. Consequently, any extension 0—> M —> X —> G —> 0 splits, i.e. is isomorphic to the semidirect product M x G. Proof. The multiplication by k = \G\ is both zero by the previous theorem and an isomor- phism, since it is induced by an isomorphism M —> M (let I = \M\ and ak + bl = 1; then the inverse is clearly the multiplication by a). □ Remark. This is a generalization of Example 9.3 to the case of nonabelian G. 
The theorem holds even for nonabelian $M$ and is proved from the above abelian case by "group theoretic induction" (decreasing the order by quotienting out the centre, I think).

12. Flatness is stalkwise

We use this opportunity to talk about various special instances of flat modules. The main goal is to prove the following theorem.

Theorem 12.1. Let $R$ be a commutative ring. An $R$-module $A$ is flat iff for every maximal ideal $P \subseteq R$ the localization $A_P$ is a flat $R_P$-module.

The main ingredient of the proof is the so called flat base change for Tor. Let $S$ be an $R$-algebra that is flat as an $R$-module. Then for an $R$-module $A$ and an $S$-module $B$ we have
$$\operatorname{Tor}^R_n(A, B) \cong \operatorname{Tor}^S_n(A \otimes_R S, B).$$

Proof of the claim. Consider a projective resolution $P \to A[0]$. Extending the scalars via the exact functor $S \otimes_R -$ we thus obtain a resolution $P \otimes_R S \to A \otimes_R S[0]$ that is easily seen to be projective again (the extension takes $R \mapsto S$, thus free to free and thus projective to projective). We may thus use this to compute
$$\operatorname{Tor}^S_n(A \otimes_R S, B) = H_n(P \otimes_R S \otimes_S B) = H_n(P \otimes_R B) = \operatorname{Tor}^R_n(A, B). \qquad \square$$

Now we are ready to prove the theorem.

Proof of the theorem. Assuming $A$ flat, we get
$$\operatorname{Tor}^{R_P}_n(A_P, B) = \operatorname{Tor}^{R_P}_n(A \otimes_R R_P, B) = \operatorname{Tor}^R_n(A, B) = 0$$
and $A_P$ is flat. In the opposite direction, assuming $A_P$ flat over $R_P$, we need to show $\operatorname{Tor}^R_n(A, B) = 0$ and this is equivalent to $\operatorname{Tor}^R_n(A, B)_P = 0$ for all $P$ maximal. Since Tor is some homology group (of $Q \otimes_R B$ for a projective resolution $Q \to A[0]$) and localization is exact, we may view this as
$$\operatorname{Tor}^R_n(A, B)_P = H_n(Q \otimes_R B)_P = H_n((Q \otimes_R B)_P) = H_n(Q_P \otimes_{R_P} B_P) = \operatorname{Tor}^{R_P}_n(A_P, B_P) = 0. \qquad \square$$

We will now show that over local rings, flat modules are very close to free modules. First a general result.

Definition 12.2. An $R$-module $A$ is finitely presentable if there exists an exact sequence $R^s \to R^t \to A \to 0$ for some finite $s$ and $t$.

Exercise 12.3. Show that for a finitely presentable $A$ and any ses $0 \to K \to L \to A \to 0$ with $L$ finitely generated, also $K$ is finitely generated.

Consider the following map
$$B \otimes_R A^* = \operatorname{Hom}_R(R, B) \otimes_R \operatorname{Hom}_R(A, R) \to \operatorname{Hom}_R(A, B).$$

Proposition 12.4. If $A$ is finitely presentable and $B$ is flat then this map is an isomorphism.

Proof. We proved this at the tutorial. The idea is to prove this for $A = R$, then for finite coproducts, then for cokernels (using $B$ flat). □

Corollary 12.5. If $A$ is finitely presentable and flat, it is projective.

Proof. In the map $A \otimes_R A^* \to \operatorname{Hom}_R(A, A)$ (an isomorphism by the proposition, applied with $B = A$ flat) the simple tensor $a \otimes \eta$ maps to the composition $A \xrightarrow{\ \eta\ } R \xrightarrow{\ a\ } A$ and a finite sum $\sum_i a_i \otimes \eta_i$ clearly maps to the composition $A \xrightarrow{\ (\eta_i)\ } R^n \xrightarrow{\ (a_i)\ } A$. If this happens to be the preimage of the identity then $A$ is a direct summand of $R^n$ and is thus projective. □

Remark. This implies that over a noetherian ring, $\operatorname{gl.dim}(R) = \operatorname{Tor.dim}(R)$: we can translate this claim to equality in
$$\sup\{\operatorname{pd}(R/J) \mid J \subseteq R\} \geq \sup\{\operatorname{fd}(R/J) \mid J \subseteq R\} = d,$$
i.e. we must show $\operatorname{pd}(R/J) \leq d$. Now over a noetherian ring the cyclic modules $R/J$ admit a resolution by f.g. free modules, so we consider an exact sequence of f.g. modules
$$0 \to M_d \to P_{d-1} \to \cdots \to P_0 \to R/J \to 0$$
and conclude that $M_d$ is flat (since $\operatorname{fd}(R/J) \leq d$); as it is also f.p., it must be projective and $\operatorname{pd}(R/J) \leq d$, as claimed.

Over a local ring, projective modules are exactly free modules (Kaplansky theorem). We will prove a simpler version.

Theorem 12.6. A finitely generated projective module over a commutative local ring is free.

Proof. Let $A$ be a f.g. projective module. Then $A/MA = R/M \otimes_R A$ is a f.d. vector space over the residue field $R/M$. Let $a_1, \ldots, a_n \in A$ be such that their images in $A/MA$ form a basis. These give an $R$-linear map $p\colon R^n \to A$, $e_i \mapsto a_i$. Since $A$ is projective, the s.e.s.
0 kerp Rn -> A -> 0 splits and is thus preserved by R/M ®r —. Since p yields an iso by assumption, we must have kerp/Mkerp = 0, i.e. kerp = Mkerp and Nakayama lemma yields kerp. □ 13. Simplicial resolutions Let C be a category. A monad on C is a monoid in the strictly monoidal category ([C, C], o) of endofunctors of C. Thus, it is an endofunctor T equipped with two natural transformations fi:ToT^T, ?j: 1 ->• T, the multiplication and the unit, satisfying the associativity and unitality axioms = /io(lo/j), /io(loT|) = l= /jo(i)ol). 53 13. Simplicial resolutions More concisely, one requires natural transformations : Tk —> T that are closed under compositions. This means that 1 = fi± (as a composition of zero /Xfc's) and e.g. and similarly for the unary There is a universal strictly monoidal category with a monoid. We will give a concrete description and then, instead of showing the universal property, give the unique instance of it that we are interested in, i.e. to monads. It is the category A of all finite ordinals (topologists would only consider non-empty ordinals; this would correspond to a non-unital version) [n] = {0 < 1 < • • • < n} with [0] = {0} and [—1] =0. Morphisms in A are the order preserving maps and the monoidal product is the "join" [m] * [n] = [m + 1 + n] or more intuitively {0 < • • • < m} * {0 < • • • < n} = {0 < ■■ ■ m < m + 1 < • • • < m + 1 + ra} Q<— [0] the unique map. We will now outline why A is a universal strictly monoidal category with a monoid: any map in A can be decomposed canonically as 0 12 3 a join of the /Xfc's. Proposition 13.1. For any monad T, there is a unique stricti monoidal functor A sending [0] to T. [C,C] 54 13. Simplicial resolutions Proof. We must send [n] i—> Tn+1 and a map decomposed, as above, into a join of /x^'s to the composition of the corresponding transformations fik : Tk —> T. □ Example 13.2. Any adjunction F: C ± 'T> :G induces a monad on C with T = GF and n: 1 ->• GF the unit and /i = GeF: GFGF ->• GF the multiplication. Dually, it gives a comonad on T> with _L = FG and e: FG —> 1 the counit and 5 = Fr]G: FG ->• FGFG the comultiplication. Now dually to the above proposition, a comonad gives a strict monoidal functor Aop —> [T>,T>], sending [0] to _L. By composing with the evaluation at A G T>, this gives a functor Aop —> T>, [n] i—> _Ln+1A; functors of this shape are called (augmented) simplicial objects in V. Example 13.3. There is an adjunction F: Ab; ± '^GMod :U, where U is the forgetful functor and U is the extension of scalars, F{A) = ZG (8> A. The induced comonad on ^G^od is then again _LA = ZG (8> A with counit _LA —?► A, r (g> a i—?► ra the ZG-multiplication in A and comultiplication _LA —?► _L2A, r®a ^ r®\®a. The induced augmented simplicial object for A G zGMod looks ■-d° V J ZG (g> ZG (g> A ZG®i —^ A rf2 (with the right most do the augmentation). The maps di are of the form 1*- • • -*1, i.e. they are all induced by the counit and are all given by mulitiplication of a pair of neighbours in the tensor product. The unnamed maps are the so called degeneracy maps and are induced by the comultiplication (i.e. 1 is inserted at various points). In general, an (augmented) simplicial object Xn = X[n] in an abelian category, such as the category ^G^od above, gives an (augmented) chain complex, the so called Moore chain complex of the simplicial object: • • • -> A2 -> Ai -> Ao -> A_i with the last d: Xq —> X-i the augmentation and with all d = J^( —l)*R®R(g)R(g)R^R(g)R(g)R^R(g)R R. 
augm Here again d = ^2( — l)ldi and di(r0 (g> • • • (g> rn) = r0 (g> • • • (g> Viri+i (g> • • • (g> rn 55 14. Representation theory Hochschild cohomology of R with coefficients in an i?-i?-bimodule A is Hn(R; A) = Hn(HomR_R(BR, A)). Dually the Hochschild homology is Hn{R-A)=Hn{BR ®R_R A) but some care has to be taken with the tensor product (it coequalizes right action and a left action but also the other way around, i.e. xr (g> a = x (g> ra but also rx (g> a = x (g> ar). One can show that again Hl(R;A) = Der(i?; A)/ PDer(i?; A) and that H2(R;A) corresponds to the so called square zero extensions, i.e. A C X is a square zero ideal A2 = 0 such that X/A ^ i?. At the tutorial we discussed operations on the Hochschild cohomology H*(R;R) and Deligne conjecture. 14. Representation theory The lectures were following the text by John Bourke, but simplified some parts considerably. We will thus give an exposition that concetrates on these parts were the lectures departed from the text. We will concentrate on representations of finite groups, so all our groups will be assumed to be finite. Definition 14.1. A representation of a group G over a field k is a kG-module V. Equivalently, this is a ring homomorphism kG —> End^(F). By restriction of scalars along the inclusion k C kG, the abelian group V becomes a vector space over k. Since the elements of the field and elements of the group G commute inside kG, we obtain a group homomorphism G-->Endk{V) N N Autk(V) = GL(V) Thus, equivalently a representation is a vector space V over k together with a homomorphism of groups G GL(V). Example 14.2. Dihedral group Dg with 8 elements, i.e. the group of symmetries of a square, has a canonical action on M2 (via these symmetries). As we explained above, kG is a Hopf algebra with comultiplication induced by the diagonal 5: kG k(G x G) ^ kG ® kG and counit by the constant map kG k* ^ k This allows us to make a tensor product of two representations into a representation: a ■ (v (g> w) = av ■ aw 56 14. Representation theory for a G G, but not for more general elements of kG. Formally, the multiplication is kG ® V ® W > kG ® kG ® V ® W 1(^pm > kG ® V ® kG ® W V ® W. We stress that the tensor product here is over k and not over kG. Most importantly, this monoidal structure is closed, i.e. there exists an internal horn that we constructed at the tutorial u®v ->• w U -> [V, W] On the level of vector spaces, we must have [V, W] = Hom^V, W) and it remains to come up with a G-action so that the bottom map is kG-linear iff the top map is. We concluded 9iP = gipg~1, Mw) = 9 • via'1 -v). In particular, a fixed point for the action is tp such that gp> = tp or, equivalently, gp = pg, i.e. p is kG-linear (this corresponds to the fact that the unit is k and maps from the unit are exactly the fixed points). [V,Wf = BomkG(V,W). We will now present an important tool - the projection it: U —> UG onto the fixed points. Here we have to assume that chark{ |G|, typically chark = 0. n(u) = J77T • gv" IGI 1 1 geG This has two properties: im(7r) C U and tt\jjg = id, both easily verified. Definition 14.3. A kG-module U is said to be irreducible (simple) if its only quotients (equivalently submodules) are U and 0 (equivalently 0 and U). In other words there exist only two extensions 0-^0->U->U-^0 0->U->U-^0-^0 The module 0 is not considered irreducible. Definition 14.4. A kG-module U is said to be indecomposable if C/ = V © W only for V = 0, W = U and V = U, W = 0. In other words, the only split extensions are as above. 
Obviously every irreducible module is indecomposable. Theorem 14.5 (Maschke). if char Ik { |G| then every short exact sequence in ^G^od splits. Consequently, every indecomposable kG-module is irreducible. Proof. Let V C U be a submodule. Since the inclusion splits over k, we get a projection p: U —> V. So p G [U, V] and we may apply the projection ir: [U,V] —> [U, V]G = ~Hom^Q(U, V) to it to obtain 1 1 geG It remains to check that it is still a projection onto V, i.e. that Tr(p)(v) = v for v G V. But in this case g~1v G V as well and in the formula above we may ignore p (being identity on V). □ 57 14. Representation theory In the tutorial, we showed that even if chark | the module kG is still injective (the group algebra kG is self-injective). Together with kG being noetherian, it follows that projective and injective modules coincide (we only proved =4>). Corollary 14.6. Every (finite dimensional) representation splits into a direct sum of irreducible representations. Proof. By induction on dim^ U. □ Remark (Jordan-Holder theorem). A composition series for U is a finite filtration 0QU1Q...CUn = U with filtration quotients Qi = Ui/Ui-i irreducible. In our case U = ® t/j. The theorem says that the collection of the Q^s is independent of the fultration. So consider Qr. u Qn Un-1 TT1 Now consider the sum Un-i + U^l_1. Since this lies between Un-\ and U and the quotient is the irreducible module Qn, this must equal either Un-\ or U and similarly for U'n_1. Out of the four possibilities, only two make sense. One possibility is that Un-\ = Un-\ + U'n_1 = U^l_1 in which case we may apply induction on this common submodule. The other possibility is that Un-i + U'n_i = U and we get with filtration quotients equal on the opposite sides since the square is a pushout. Now apply induction to the smaller modules Un-\ and U'n_i. The left most path has filtration quotients Qn and those of Un-i, i.e. Qn and Q'n and the filtration quotients of Un-i H U^l_1. Symmetrically, the same is true for the right path and thus these are equal. (In more general contexts, one has to prove alongside that Un-\ fl Vn-\ admits a composition series, for finite dimensional representations this is clear.) Corollary 14.7. The decomposition of a finite dimensional representation into a direct sum of irreducible representations is unique up to the order of submodules. □ 58 14. Representation theory Corollary 14.8. Let kG = U\ © • • • © Un be a decomposition of kG into a direct sum of irreducible representations. Then any irreducible representation is isomorphic to one of the Ui. Proof. Let U be an irreducible representation. Any 0 7^ x G U gives a kG-linear map kG -> U, 1 ^ x whose image must be equal to U by irreducibility. This epi splits by Maschke theorem, so kG = U ®V = U © Vi © • • • © Ffc- By uniqueness, U must be one of the U^s. □ Theorem 14.9 (Schur). Let pi: U —» V be a kG-linear map between irreducible kG-modules. Then either p = 0 or p> is an isomorphism. If k is algebraically closed then any p: U —> U is a multiplication by some A G k, i.e. p(u) = Xu, i.e. BomkG(U,U)=k. Proof. Since ~keip C U is a submodule, either p is mono or zero. Similarly imp C V is a submodule, so either tp is zero or epi. The second part is similar: p: U —?► U is k-linear so has some eigenvalue A. Then ker( f/ is kG-linear. For U irreducible, it must be multiplication by some A G k. Since this holds for any g, all subspaces of U are kG-submodules and U must be one-dimensional. 
If every irreducible representation is one-dimensional then the action G^GL(F)^kx lands in a commutative group so the multiplication by g and by h on V commute. Thus, the same is true in a direct sum of irreducible representations, i.e. in any representation and, in particular, in kG. Thus g ■ h ■ 1 = h ■ g ■ 1. □ 59 15. Characters of groups Example 14.13. We studied the two-dimensional representation of Dg and we tried to show that it is irreducible over C. The reflection across some line £ has invariant subspaces 0, £, £^ and C2. Since Dg contains reflections across the lines x = 0 and x = y, the only common invariant subspaces are 0 and C2. Thus, we have 8 = |G| = 22 + l2 + l2 + l2 + l2 since there always exists a trivial one-dimensional representation. In the tutorial, we described all four one-dimensional representations. 15. Characters of groups Definition 15.1. Let U be a representation. The function X = XU ■ G -+ k, g ^ tr(gx : U -+ U) is called a character of G (associated to the representation U). It is said to be an irreducible character if U is irreducible. The basic property is that isomorphic representations give equal characters and that x{gh) = x{hg) or x(9^l9~1) = xQ1)- A function G —> k is a class function if it is constant along each conjugacy class. We write C{G) for vector space of class functions. We thus have X G C(G). Lemma 15.2. dimC(G) = |G/conj|. Proof. This is rather obvious since C(G) = kG/conJ. □ We will now restrict to k = C. Our goal now will be to show that the irreducible characters form an orthonormal basis of C(G), for which we have to introduce an inner product on C{G). We could do so right now, but we will get to the definition naturally by studying characters of induced representations. • Xuev = Xu + Xv- □ • XU®V = Xu ■ XV- This follow from writing g ■ ej = ^i a%-ei and g -ej = bfejl so that 9 • (ej 0 ei) = gej (g> geL = ^ a* ej (g> ^ bfej: = ^ Ojbfei (g> ej: i k i,k and the sum across the diagonal equals i,k i k A more conceptual proof uses string diagrams and the corresponding definition of trace (equivalently the contraction of tp £ T^). • XU* =Xu = XU- We have shown in the tutorial that U* = U as representations, where U* = [U, k] is the dual vector space with action g • rj = ng^1 and U is U with complex multiplication z * u = z ■ u. The operator gx remains the same but its matrix in U is complex conjugate of the matrix in U. 60 15. Characters of groups • X[U,V] = Xv®u* = Xv • XV- • dimUG = j^-Y.geGXu(g)- The trace of every projection equals the dimension of its image (just write the matrix of the projection in a basis formed by vectors from the image and vectors from the kernel). Applying this to the projection it: U —» U with image UG gives the left hand side. The right hand side is obtained from the concrete formula for tt. The last two points then give the following theorem. Theorem 15.3. dimHomkG([/, V) = dim[[/, V]G = ^ • ZgeG Xv{g)xu{g)- □ For class functions fi, f2 £ C{G) we define their inner product (/i,/2) = ~E/iW'A(j)' 1 1 geG Up to the factor 1/|G|, this is the standard inner product (so in particular it indeed is an inner product). Corollary 15.4. For irreducible representations U, V we have . . (l ifC/ = F (Xy'Xl/) = \0 MU?V so that irreducible characters form an orthonormal system in C{G). □ Corollary 15.5. Two finite dimensional representations U, V are isomorphic iff xu = XV- Proof. 
Decomposing both into a direct sum of irreducible representations, with aj, 6j the multiplicities of the irreducible Vi, we get (xUiXVi) = (Ylj ajXVj) = ai an the number of conjugacy classes, i.e. dimG(G). Since the irreducible characters form an orthonormal, hence linearly independent, system in G(G) we must have equality and they must generate G(G). We have thus proved: Theorem 15.10. The irreducible characters form an orthonormal basis of the space G(G) of class functions. □ 16. Representations of symmetry groups Sn This was rather informative and I followed very closely John's notes. 17. Integrally closed rings, valuation rings, Dedekind domains In this section all rings will be commutative with 1 as usual, but additionally also domains. The motivation for Dedekind domains is the existence and uniquness of factorization of ideals into a product of prime ideals. This clearly holds for PID's since this is then just the UFD property. However, many important examples are Dedekind domains but not PID's. For example, the coordinate ring k[V] of an irreducible smooth curve over an algebraically closed field (or in fact any field, I think) is such an example. The smoothness is a local property and we will introduce and study these rings in terms of their localizations at (nonzero) prime ideals. Dedekind domains will be rings whose localizations at nonzero primes are discrete valuation rings; these are closely related to integrally closed rings so we start with them. 62 17. Integrally closed rings, valuation rings, Dedekind domains Definition 17.1. Let R be a domain and let K be its fraction field. We say that an element of K is integral over R if it is a root of a monic polynomial from R[x]. We say that R is integrally closed (or normal) if every element of K that is integral over R lies in R. (One may prove that the collection of integral elements forms an intermideate ring R C R C K and the condition says R = R.) Proposition 17.2. Every UFD is integrally closed. Proof. Let a/b 6 K be integral over R and we may assume that a and b are coprime. Then it satisfies (a/b)n+rn_1(a/b)n-1 + ---+r0 = 0. Clearing the denominators and expressing an from this we get an = -b ■ {rn^ď1-1 + • + roU1-1). This means that b \ an and by coprimality we get that b is a unit, so that a/b 6 R. □ We will now show that the property of being integrally closed is local, or in fact stalkwise (here Rp are the stalks of the affine scheme Spec R): Theorem 17.3. A domain R is integrally closed iff for every prime/maximal ideal P C R the localization Rp is integrally closed. Proof. We interpret the ring R and all its localizations Rp as subrings of the fraction field K that is clearly also the fraction field of Rp. In the =4> direction, let a £ K be a root of a monic polynomial from i?p[x]. We may thus write an + rn_i/dn_i • a11'1 + ■■■ + r0/d0 = 0. Denote d = dn-± ■ ■ ■ do ^ P, we multiply this equation by dn and get (ad)n + rn^1d/dn^1(ad)n^1 -\-----h r0dn/d0 = 0. This shows ad integral over R and by the assumption ad 6 R, implying a £ Rp. In the opposite direction Z is required to satisfy • v is surjective, • v(a ■ b) = v(a) + v(b), • v(a + b) > min{u(a), v(b)}. 63 17. Integrally closed rings, valuation rings, Dedekind domains Remark. It may be advantegeous to extend v to all of K by declaring v(0) = oo. For any discrete valuation on a field K, the inverse image {0} U ?;_1(No) is closed under addition and multiplication, by the above properties, and contains 1 by virtue of the easily checked v(l) = 0. 
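A standard example may help before we explore the structure of DVRs; it is not taken from the text, but the facts used are elementary. For a prime $p$, the $p$-adic valuation
$$v_p\colon \mathbb{Q}^\times \to \mathbb{Z}, \qquad v_p\Bigl(p^k \cdot \tfrac{a}{b}\Bigr) = k \quad (a, b \in \mathbb{Z} \text{ coprime to } p),$$
is a discrete valuation in the above sense, and the associated ring is
$$\{0\} \cup v_p^{-1}(\mathbb{N}_0) = \bigl\{ \tfrac{a}{b} \in \mathbb{Q} \;\big|\; p \nmid b \bigr\} = \mathbb{Z}_{(p)},$$
with $v_p(p) = 1$. Similarly, the order of vanishing at $0$ gives a discrete valuation on the field $k(x)$ whose associated ring is $k[x]_{(x)}$.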
The structure of DVRs is rather rigid, as we will now explore. Any element $t$ with $v(t) = 1$ will be called a local parameter for $R$ and we will fix an arbitrary choice of such.

Proposition 17.5.
• Units in a DVR $R$ are exactly the elements $u \in R$ with $v(u) = 0$. Every nonzero element $r \in R$ can be written uniquely as $r = u \cdot t^n$ with $u \in R^\times$ (where clearly $n = v(r)$).
• A domain $R$ is a DVR iff it is a UFD with a unique irreducible element, up to associatedness.
• A DVR is a local ring with the unique maximal ideal $M = \{r \in R \mid v(r) > 0\} = (t)$. Every nonzero ideal is of the form $M^n = (t^n)$ (so that $R$ is in fact a PID and noetherian). Conversely, if $R$ has nonzero ideals exactly $M^n = (t^n)$, it is a DVR.
• The prime ideals of a DVR $R$ are exactly $0$ and $M$, so that $R$ has Krull dimension 1, i.e. the longest chain of primes consists of one inclusion - in this case $0 \subset M$.

Proof. This is all fairly straightforward. For the first point, use $v(u^{-1}) = -v(u)$; further, since $r/t^n \in K$ has valuation $0$, it is a unit of the ring. For the second point, the implication $\Rightarrow$ is exactly the first point. For the implication $\Leftarrow$, observe that every nonzero element of the fraction field $K$ can be written uniquely as $k = u \cdot t^n$ with $n \in \mathbb{Z}$ and we may thus introduce $v(k) = n$. For the third point, observe that $r \mid s$ iff $v(r) \leq v(s)$, so that every nonzero ideal $I$ is generated by any of its nonzero elements of minimal valuation. By the first part, we may write such an element as $r = u \cdot t^n$ and thus $I = (r) = (t^n)$. In the opposite direction, $R$ must then be a PID, hence a UFD. Since irreducible elements of a PID, up to associatedness, correspond precisely to nonzero prime ideals and the only such is $(t)$, there is a unique irreducible and the second point applies. The last point is clear. □

Theorem 17.6. For a domain $R$, the following conditions are equivalent:
• $R$ is a DVR,
• $R$ is a noetherian local ring whose unique maximal ideal is nonzero and principal,
• $R$ is a noetherian local ring of Krull dimension 1 that is also integrally closed.

Proof. We have proved that the first point implies the others (except we did not mention explicitly DVR $\Rightarrow$ UFD $\Rightarrow$ integrally closed). It remains to prove that either of the other conditions implies that $R$ is a DVR.

Start with the second point. Let $M = (t)$ be the maximal ideal of $R$. We will show that all nonzero proper ideals are of the form $M^n$. Clearly $I \subseteq M$ and we claim that there exists the largest $n$ for which $I \subseteq M^n$. Otherwise, $I$ would lie in the intersection $M^\infty := \bigcap_n M^n$ that we will show to be zero by the Nakayama lemma: $M^\infty$ is finitely generated since $R$ is noetherian and clearly satisfies $M \cdot M^\infty = M^\infty$ (using that $M = (t)$ is principal and $R$ a domain), thus $M^\infty = 0$. Thus let $a \in I \subseteq M^n = (t^n)$ with $a \notin M^{n+1}$.
Since we want to show that x G / = (t), we consider the element x/t G X of the fraction field and we want x/t G R, which we prove by exploiting the fact that R is integrally closed. Consider the multiplication by x/t: x/t - : M ->• R Clearly x/t ■ M C 1/t • Mn C i?. Now the image must be a submodule, i.e. an ideal, and we claim that it cannot be the trivial ideal R: for otherwise there would exist m G M such that x/t ■ m = 1, i.e. t = xm G Mn and we obviously assume n > 2 and t <£ M2. Thus the image of the multiplication map must be contained in M: x/t - : M ->• M Now the Cayley-Hamilton-Nakayama-like argument below gives a monic polynomial F G R[x], such that the multiplication by F(x/t) is a zero map. Since K is a field and M 7^ 0, this implies that F(x/t) = 0 in K, as required. □ Theorem 17.7. Lei S be a (commutative) R-algebra and let M be an S-module that is finitely generated over R. Then for every s G S there exists a monic polynomial F G R[x] such that F(s) ■ x = 0 for all x G M, i.e. F(s) lies in the kernel of S —?► End(M). Remark. Keeping R commutative, we may replace a non-commutative S by its commutative subalgebra R.[s] C S and apply the theorem to it, getting the same conclusion even for S non-commutative. Proof. Write M = R{x±,..., xn} and express the action of s G S on M in two ways with respect to this generating set: (x\, • • • , Xf/) • sE — (s • X\, . . . , S • Xf/) — (x\, . . . , Xr, All ••• rln\ \fnl ' ' ' Tnn) \X\,.. . , Xn) " A where A denotes the n-by-n matrix in the formula, with elements in R. One can write this concisely as (x1,...,xn)-(sE-A) = (0,...,0). Multiplying by the adjoint matrix gives ..., xn) ■ det(sE - A) = (0,...,0), i.G. tliG multiplication by &Qt(sE — .A) annihilates the generators x\,..., xn and thus M. We may set F(x) = det(xE - A) G R[x]. □ We may now apply this characterization of DVRs to introduce Dedekind domains. Importantly, the last condition localizes well, so we define: 65 17. Integrally closed rings, valuation rings, Dedekind domains Definition 17.8. A Dedekind domain is a noetherian domain of Krull dimension 1 that is integrally closed. Theorem 17.9. Let R be a domain. TFAE • R is a Dedekind domain, • R is noetherian and for all nonzero primes P, the localization Rp is a DVR. Proof. We have proved that R is integrally closed iff Rp is integrally closed and it remains to show the same for the Krull dimension, but this is easy, since localization at P picks out of the prime ideals of R those that are contained in P. The point is that the primes in a DD and in a DVR form the following posets: These clearly correspond to one another. □ Now we want to show an interpretation of a DD in terms of fractional ideals. Definition 17.10. Let R be a domain with a fraction field K. A fractional ideal is an i?-submodule A C K of the form 1/d • I for an ideal ICR. Remark. Over a noetherian domain, this is equivalent to A being a finitely generated i?-submodule of K. We introduce a product of fractional ideals similarly to that of ideals, i.e. AB is the ideal generated by the products ab, for a G A and b G B. Clearly (1/d • I)(l/e • J) = l/(de) ■ IJ so that this product is indeed a fractional ideal. Clearly the unit is R so that we get an induced notion of an invertible fractional ideal A as that for which there exists a fractional ideal B such that AB = R. Example 17.11. A principal fractional ideal is one of the form (k) = (r/d) = 1/d ■ (r) for k = r/d £ K. Clearly, this has inverse (k^1). In a principal ideal domain, these are all examples. 
We may now apply this characterization of DVRs to introduce Dedekind domains. Importantly, the last condition localizes well, so we define:

Definition 17.8. A Dedekind domain is a noetherian domain of Krull dimension 1 that is integrally closed.

Theorem 17.9. Let R be a domain. The following are equivalent:
• R is a Dedekind domain,
• R is noetherian and for all nonzero primes P, the localization R_P is a DVR.

Proof. We have proved that R is integrally closed iff all localizations R_P are integrally closed, and it remains to show the same for the Krull dimension; this is easy, since localization at P picks out of the prime ideals of R exactly those contained in P. The point is that the poset of primes of a Dedekind domain consists of 0 with all the (maximal) nonzero primes directly above it, while the poset of primes of a DVR is the chain 0 ⊊ M; these clearly correspond to one another under localization. □

Now we want to give an interpretation of Dedekind domains in terms of fractional ideals.

Definition 17.10. Let R be a domain with fraction field K. A fractional ideal is an R-submodule A ⊆ K of the form 1/d·I for an ideal I ⊆ R and a nonzero d ∈ R.

Remark. Over a noetherian domain, this is equivalent to A being a finitely generated R-submodule of K.

We introduce a product of fractional ideals similarly to that of ideals, i.e. AB is the R-submodule generated by the products ab for a ∈ A and b ∈ B. Clearly (1/d·I)(1/e·J) = 1/(de)·IJ, so that this product is indeed a fractional ideal. Clearly the unit is R, so that we get an induced notion of an invertible fractional ideal A as one for which there exists a fractional ideal B with AB = R.

Example 17.11. A principal fractional ideal is one of the form (k) = (r/d) = 1/d·(r) for k = r/d ∈ K. For k ≠ 0 this clearly has inverse (k^{-1}). In a principal ideal domain, these are all the examples.

Consider, for a nonzero fractional ideal A, the following fractional ideal

A' = {k ∈ K | kA ⊆ R}

(since A contains some nonzero element d ∈ R, we have A'd ⊆ R, so that A' = 1/d·I for the ideal I = A'd). By definition A'A ⊆ R, and we will prove that equality holds iff A is invertible, in which case A^{-1} = A'. The implication ⇒ is obvious, so assume that A is invertible. Then A^{-1} ⊆ A' and consequently R = A^{-1}A ⊆ A'A ⊆ R, so we get equality everywhere and A' is also an inverse of A. But inverses are unique in monoids.

Theorem 17.12. In a Dedekind domain, every nonzero fractional ideal is invertible. In addition, every nonzero proper ideal I ⊆ R admits a unique decomposition I = P_1 ⋯ P_r into a product of prime ideals.

Proof. Let A be a nonzero fractional ideal and consider the fractional ideal A' as above. We need to show that A'A = R. This means that the inclusion A'A → R is an isomorphism, and we know that this may be checked on localizations. These are

(A_P)'·A_P = (A')_P·A_P = (A'A)_P ⊆ R_P

(we think of the localization A_P as the R_P-submodule of K generated by A). These will be equalities provided that A_P is invertible, and this follows from R_P being a DVR, hence a PID: A_P is then principal and Example 17.11 applies. Here the first equality requires finite generation A = R{a_1, ..., a_r}: clearly (A')_P ⊆ (A_P)' (since for instance (A')_P·A_P = (A'A)_P ⊆ R_P), and conversely any k ∈ (A_P)' satisfies k·a_i ∈ R_P for all i, so there exists d ∉ P with dk·a_i ∈ R for all i, i.e. dk ∈ A', and thus k = (dk)/d ∈ (A')_P.

Now let I ⊆ R be a nonzero proper ideal. The primary decomposition of I is I = I_1 ∩ ⋯ ∩ I_s with each I_j primary, say Ass R/I_j = {P_j}. Thus I_j is contained in a unique maximal ideal P_j, and consequently the I_j are pairwise comaximal, i.e. I_i + I_j = R for i ≠ j, giving I = I_1 ⋯ I_s by the proposition below. Now the P_i-primary component I_i of I is uniquely determined, since P_i is minimal over I; in fact I_i/I is the kernel of the localization map R/I → R_{P_i}/I_{P_i}, or, slightly better, I_i is the preimage of I_{P_i} under the localization map λ_i: R → R_{P_i}. Since R_{P_i} is a DVR, the ideal I_{P_i} is a power M^{k_i} of the maximal ideal and thus pulls back to the corresponding power P_i^{k_i}: this power is P_i-primary, hence (R \ P_i)-saturated, and clearly localizes to M^{k_i}. Altogether I = P_1^{k_1} ⋯ P_s^{k_s}, a product of primes. The uniqueness follows easily from all primes being invertible: if P_1 ⋯ P_r = Q_1 ⋯ Q_s, then for any prime Q_j we have P_1 ⋯ P_r ⊆ Q_j, so P_i ⊆ Q_j for some i. Symmetrically some Q_{j'} ⊆ P_i ⊆ Q_j, and applying this with Q_j chosen minimal gives equality throughout. We may then cancel this common prime in the group of invertible fractional ideals and proceed by induction. □

In fact, both of these are equivalent conditions: a domain in which every nonzero fractional ideal is invertible is a Dedekind domain (or a field), and likewise a domain in which every nonzero proper ideal factors uniquely into a product of prime ideals is a Dedekind domain (or a field). The first claim is not difficult. One shows that every invertible fractional ideal must be finitely generated (AB = R implies A_0B = R for some finitely generated fractional subideal A_0 ⊆ A, by looking at an expression of 1 ∈ R as a finite sum of products; by uniqueness of inverses A_0 = A); in particular every ideal of R is finitely generated, hence R is noetherian. Every localization R_P at a nonzero prime P will also have all nonzero fractional ideals invertible (its fractional ideals are of the form A_P and as such admit the inverse (A')_P). Thus it remains to show that a noetherian local domain with nonzero maximal ideal M, all of whose nonzero fractional ideals are invertible, must be a DVR. Let t ∈ M \ M^2 (such t exists since M^2 ⊊ M by Nakayama) and consider the fractional ideal t^{-1}M with inverse (t)M^{-1} ⊆ MM^{-1} = R. Now (t)M^{-1} cannot be contained in M, since otherwise t ∈ (t) = (t)M^{-1}·M ⊆ M^2; by locality (t)M^{-1} = R, giving (t) = M as required. The second claim is much more complicated.

Proposition 17.13. Assume that I + J = R. Then IJ = I ∩ J. More generally, if I_1, ..., I_r are pairwise comaximal then I_1 ⋯ I_r = I_1 ∩ ⋯ ∩ I_r.

Proof. The containment IJ ⊆ I ∩ J always holds, so let z ∈ I ∩ J. Write x + y = 1 with x ∈ I and y ∈ J, giving z = (x + y)z = xz + yz with both terms in IJ. The general case is obtained by applying this (and induction) to I_1 ⋯ I_{r−1} and I_r, once we show that these two are comaximal, which is a bit tricky. So let x_i + y_i = 1 with x_i ∈ I_i and y_i ∈ I_r. Then

1 = (x_1 + y_1) ⋯ (x_{r−1} + y_{r−1}) = x_1 ⋯ x_{r−1} + (terms each containing some y_i) ∈ I_1 ⋯ I_{r−1} + I_r. □
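For example, in R = Z (a Dedekind domain, indeed a PID), Proposition 17.13 is the familiar statement that for coprime a, b one has (a) ∩ (b) = (lcm(a, b)) = (ab) = (a)(b), and Theorem 17.12 specializes to unique factorization of (n) into prime ideals. A small numerical check, with the values 4, 9 and 360 chosen arbitrarily:

```python
from math import gcd, lcm          # math.lcm requires Python 3.9+
from sympy import factorint

a, b = 4, 9
# (a) + (b) = Z exactly when gcd(a, b) = 1 ...
assert gcd(a, b) == 1
# ... and then (a)(b) = (a*b) coincides with (a) ∩ (b) = (lcm(a, b)).
assert lcm(a, b) == a * b

# Theorem 17.12 in Z: (360) = (2)^3 (3)^2 (5), uniquely.
print(factorint(360))              # {2: 3, 3: 2, 5: 1}
```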
Remark. I would say that ideals I_1, ..., I_r are comaximal if I_j + ∏_{i≠j} I_i = R for every j, and the second part of the proof above shows that pairwise comaximal implies comaximal, which I find a bit surprising.

18. Some interesting exercises

• A left adjoint is right exact. A left adjoint F is exact iff its right adjoint G preserves injectives. (For the reverse implication, it should be useful that Hom(−, I) preserves and jointly reflects exactness; here 0 = H^n(Hom(C, I)) = Hom(H_nC, I), so it remains to show that these functors jointly reflect zero; then use Hom(F−, I) = Hom(−, GI) for I, and hence also GI, injective.) Both adjoints are exact iff Ext^*(Fx, y) = Ext^*(x, Gy) (apply the previous to a projective resolution of x and/or an injective resolution of y).

• A square with vertical maps mono and horizontal maps epi is a pullback iff it is a pushout (the respective maps are jointly epi and jointly mono).

• Prove that a square is a pushout square iff the induced map on (say vertical) cokernels is an iso and the induced map on kernels is epi. (Make the square into a double complex and form the long exact sequence relating the homology of the columns and of the total complex.) Dually, it is a pullback square iff the induced map on kernels is an iso and on cokernels is mono.

• Let 0 → A → B → C → 0 be a ses and g: Y → B an arbitrary map. By factoring the composite Y → B → C through its image Z, construct a map of short exact sequences from 0 → K → Y → Z → 0 to 0 → A → B → C → 0 and prove that the left square is a pullback square (not so interesting, I guess, but it gives a concrete construction of g^{-1}(A); it would be more challenging to start from the pullback, take the cokernel and show that the induced map on cokernels is mono). More interestingly, reprove that noetherian modules are closed under extensions: assume that A and C are noetherian, let Y ⊆ B be a submodule and apply the above to this inclusion. You will need to show that an extension of f.g. modules is f.g.

• Prove that in a ses 0 → A → B → C → 0, if C is finitely presented and B is f.g., then also A is f.g. (Write C as a cokernel, i.e. choose an exact sequence R^s → R^t → C → 0, and lift R^t → C along B → C, getting a map of exact sequences with vertical maps f: R^s → A and g: R^t → B. Observe that coker f = coker g with the latter f.g., so that we get an exact sequence R^s → A → coker g → 0, and conclude that A is f.g.)

• Over a noetherian ring, every f.g. module has a projective resolution consisting of f.g. free modules.

• Truncations and homology.

• Injective cogenerators, e.g. Hom(P, Q/Z); related to the exercise about exact left adjoints.

• Derived functors are universal δ-functors, i.e. Hom(T_*, L_*F) = Hom(T_0, F): the derived-functor construction is right adjoint to the 0-component functor (homology is such a functor on non-negatively graded chain complexes). If T_* is a δ-functor, define P = {A | T_nA = 0 for all n > 0}. If there are enough P-projectives, then T_* is universal.

• A functor is additive iff it preserves biproducts (binary ones, though perhaps preservation of the zero object is also needed); in particular, any left or right adjoint is automatically additive!
19. Possible essay topics

COMMUTATIVE ALGEBRA:
• flatness, faithful flatness, going up/down - Matsumura: Commutative algebra
• Gröbner bases and primary decomposition, radicals etc. - Robbiano et al.: Computational aspects of commutative algebra
• combinatorics and commutative algebra, face ring of a simplicial complex - Stanley: Combinatorics and commutative algebra
• noncommutative localization
• Hilbert functions, Hilbert polynomials, Koszul resolutions (overlap with homological algebra)
• symbolic powers of an ideal
• local properties of commutative rings (e.g. flatness)
• Morita equivalence, Morita invariance

HOMOLOGICAL ALGEBRA:
• derived categories / model categories point of view
• cohomology of associative/commutative/Lie algebras (Hochschild, André-Quillen, Chevalley-Eilenberg)
• A∞-algebras (and L∞-, possibly also P∞-algebras)
• spectral sequences
• Galois cohomology, Tate cohomology
• sheaf theory, introduction of Ext and Tor
• abelian categories
• simplicial methods, Dold-Kan correspondence
• derived Morita equivalence
• homotopy limits and colimits
• general Künneth theorem
• Eilenberg-Zilber theorem for simplicial abelian groups
• differential graded algebras, general Ext vs extensions
• satellites, δ-functors, universal δ-functors (universal property of derived functors)

REPRESENTATION THEORY:
• representation theory of Lie groups (including the Weyl group)
• representation theory of Lie algebras (including the Weyl group)
• modular representation theory (when the characteristic divides the order of the group)
• bialgebras, Hopf algebras, Frobenius algebras
• representation ring and equivariant stable homotopy theory