
Dividing FPSs

In the document Algebraic Combinatorics (pages 46-66)

3.3.1. Conventions

We shall make ourselves at home in the ring K[[x]] a bit more. (Recall that K is a fixed commutative ring.)

Convention 3.3.1. From now on, we identify each a ∈ K with the constant FPS a = (a, 0, 0, 0, 0, . . .) ∈ K[[x]].

15 Quick reminder on why this is true: If p and q are two univariate polynomials (with rational, real or complex coefficients) that are equal at infinitely many points (i.e., if there exist infinitely many numbers z satisfying p(z) = q(z)), then p = q (because the assumption entails that the difference p − q has infinitely many roots, but this entails p − q = 0 and thus p = q). See [20f, Corollary 7.5.7] for this argument in more detail.

This is motivated by the fact that a = a + 0x + 0x^2 + 0x^3 + · · · for any a ∈ K.

Convention 3.3.1 does not cause any dangerous ambiguities, because the identification respects arithmetic: for any a, b ∈ K, the sum, difference and product of the constant FPSs corresponding to a and b are the constant FPSs corresponding to a + b, a − b and a · b (check this!), and because the zero and the unity of the ring K[[x]] are the constant FPSs 0 and 1, respectively.

Furthermore, I will stop using boldfaced letters (like a, b, c) for FPSs. (I did this above for the sake of convenience, but this is rarely done in the literature.)

3.3.2. Inverses in commutative rings

We recall the notion of an inverse in a commutative ring:

Definition 3.3.2. Let L be a commutative ring. Let a ∈ L. Then:

(a) An inverse (or multiplicative inverse) of a means an element b ∈ L such that ab = ba = 1 (where the "1" means the unity of L).

(b) We say that a is invertible in L (or a unit of L) if a has an inverse.

Note that the condition "ab = ba = 1" in Definition 3.3.2 (a) can be restated as "ab = 1", because we automatically have ab = ba (since L is a commutative ring). I have chosen to write "ab = ba = 1" in order to state the definition in a form that applies verbatim to noncommutative rings as well.

Example 3.3.3. (a) In the ring Z, the only two invertible elements are 1 and −1. Each of these two elements is its own inverse.

(b) In the ring Q, every nonzero element is invertible. The same holds for the rings R and C (and, more generally, for any field).

Our next goal is to study inverses of FPSs in K[[x]], answering in particular the natural question “which elements of K[[x]] have inverses”. But let us first prove their uniqueness in the generality of an arbitrary commutative ring:

Theorem 3.3.4. Let L be a commutative ring. Let a ∈ L. Then, there is at most one inverse of a.

Proof. Let b and c be two inverses of a. We must prove that b = c.

Since b is an inverse of a, we have ab = ba = 1. Since c is an inverse of a, we have ac = ca = 1. Now,

b(ac) = b · 1 = b (since ac = 1) and (ba)c = 1 · c = c (since ba = 1).

However, because of the "associativity of multiplication" axiom in Definition 3.2.1, we have b(ac) = (ba)c. Hence, b = b(ac) = (ba)c = c. This proves Theorem 3.3.4.

Theorem 3.3.4 allows us to make the following definition:

Definition 3.3.5. Let L be a commutative ring. Let a ∈ L. Assume that a is invertible. Then:

(a) The inverse of a is called a^{−1}. (This notation is unambiguous, since Theorem 3.3.4 shows that the inverse of a is unique.)

(b) For any b ∈ L, the product b · a^{−1} is called b/a.

(c) For any negative integer n, we define a^n to be (a^{−1})^{−n}. Thus, the n-th power a^n is defined for each n ∈ Z.

The following facts are easy to check:

Proposition 3.3.6. Let L be a commutative ring. Then:

(a) Any invertible element a ∈ L satisfies a^{−1} = 1/a.

(b) For any invertible elements a, b ∈ L, the element ab is invertible as well, and satisfies (ab)^{−1} = b^{−1}a^{−1} = a^{−1}b^{−1}.

(c) If a ∈ L is invertible, and if n ∈ Z is arbitrary, then a^{−n} = (a^{−1})^n = (a^n)^{−1}.

(d) Laws of exponents hold for negative exponents as well: If a and b are invertible elements of L, then

a^{n+m} = a^n a^m for all n, m ∈ Z;
(ab)^n = a^n b^n for all n ∈ Z;
(a^n)^m = a^{nm} for all n, m ∈ Z.

(e) Laws of fractions hold: If a and c are two invertible elements of L, and if b and d are any two elements of L, then

b/a + d/c = (bc + ad)/(ac) and (b/a) · (d/c) = (bd)/(ac).

(f) Division undoes multiplication: If a, b, c are three elements of L with a being invertible, then the equality c/a = b is equivalent to c = ab.

Proof. Exercise. (See, e.g., [19s, solution to Exercise 4.1.1] for a proof of parts (c) and (d) in the special case where L = C; essentially the same argument works in the general case. The remaining parts of Proposition 3.3.6 are even easier to check. Note that the (ab)^{−1} = b^{−1}a^{−1} part of part (b) would hold even if L was a noncommutative ring.)

3.3.3. Inverses in K[[x]]

Now, which FPSs are invertible in the ring K[[x]] ? For example, we know from (5) that the FPS 1 − x is invertible, with inverse 1 + x + x^2 + x^3 + · · ·. On the other hand, the FPS x is not invertible, since Lemma 3.2.15 shows that any product of x with an FPS must begin with a 0 (but the unity of K[[x]] does not begin with a 0). (Strictly speaking, this is only true if the ring K is nontrivial, i.e., if not all elements of K are equal. If K is trivial, then K[[x]] is trivial, and thus any FPS in K[[x]] is invertible, but this does not make an interesting statement.)

It turns out that we can characterize invertible FPSs in K[[x]] in a rather simple way:

Proposition 3.3.7. Let a ∈ K[[x]]. Then, the FPS a is invertible in K[[x]] if and only if its constant term [x^0] a is invertible in K.

Proof. =⇒: Assume that a is invertible in K[[x]]. That is, a has an inverse b ∈ K[[x]]. Consider this b. From ab = 1, we obtain [x^0] (ab) = [x^0] 1 = 1. However, the constant term of a product of two FPSs is the product of their constant terms; that is, [x^0] (ab) = ([x^0] a) · ([x^0] b). Comparing this with [x^0] (ab) = 1, we obtain ([x^0] a) · ([x^0] b) = 1, and likewise ([x^0] b) · ([x^0] a) = 1. Hence, the constant term [x^0] a is invertible in K (with inverse [x^0] b). This proves the "=⇒" direction of Proposition 3.3.7.

⇐=: Assume that the constant term [x^0] a is invertible in K. Write a in the form a = (a0, a1, a2, . . .); thus, a0 = [x^0] a is invertible in K. Thus, its inverse a0^{−1} is well-defined.

Now, we want to prove that a is invertible in K[[x]]. We thus try to find an inverse of a.

We work backwards at first: We assume that b = (b0, b1, b2, . . .) ∈ K[[x]] is an inverse for a, and we try to figure out what this inverse looks like.

Since b is an inverse of a, we have ab = 1 = (1, 0, 0, 0, . . .). However, the definition of multiplication yields

ab = (a0b0, a0b1 + a1b0, a0b2 + a1b1 + a2b0, a0b3 + a1b2 + a2b1 + a3b0, . . .).

This can be rewritten as the following system of equations:

1 = a0b0,
0 = a0b1 + a1b0,
0 = a0b2 + a1b1 + a2b0,
0 = a0b3 + a1b2 + a2b1 + a3b0,
. . . .   (30)

I claim that this system of equations uniquely determines (b0, b1, b2, . . .). Indeed, we can solve the first equation (1 = a0b0) for b0, thus obtaining b0 = a0^{−1} (since a0 is invertible). Having thus found b0, we can solve the second equation (0 = a0b1 + a1b0) for b1, thus obtaining b1 = −a0^{−1}(a1b0) (again because a0 is invertible). Having thus found both b0 and b1, we can solve the third equation (0 = a0b2 + a1b1 + a2b0) for b2, thus obtaining b2 = −a0^{−1}(a1b1 + a2b0). Proceeding like this, we obtain recursive expressions for all coefficients b0, b1, b2, . . . of b, namely

b0 = a0^{−1},
b1 = −a0^{−1}(a1b0),
b2 = −a0^{−1}(a1b1 + a2b0),
b3 = −a0^{−1}(a1b2 + a2b1 + a3b0),
. . . .   (31)

(This procedure for solving systems of linear equations is well-known from linear algebra – it is a form of Gaussian elimination, but a particularly simple one because our system is triangular with invertible coefficients on the diagonal. The only complication is that it has infinitely many variables and infinitely many equations.)

So we have shown that if b is an inverse of a, then the entries bi of the FPS b are given recursively by (31). This yields that b is unique; alas, this is not what we want to prove. Instead, we want to prove that b exists.

Fortunately, we can achieve this by simply turning our above argument around:

Forget that we fixed b. Instead, we define a sequence (b0, b1, b2, . . .) of elements of K recursively by (31), and we define the FPS b = (b0, b1, b2, . . .) ∈ K[[x]]. Then, the equalities (30) hold (because they are just equivalent restatements of the equalities (31)). In other words, we have

(1, 0, 0, 0, . . .) = (a0b0, a0b1 + a1b0, a0b2 + a1b1 + a2b0, a0b3 + a1b2 + a2b1 + a3b0, . . .).

However, as before, we can show that

ab = (a0b0, a0b1 + a1b0, a0b2 + a1b1 + a2b0, a0b3 + a1b2 + a2b1 + a3b0, . . .).

Comparing these two equalities, we find ab = (1, 0, 0, 0, . . .) = 1. Thus, ba = ab = 1, so that ab = ba = 1. This shows that b is an inverse of a, so that a is invertible. This proves the "⇐=" direction of Proposition 3.3.7.

We note a particularly simple corollary of Proposition 3.3.7 when K is a field:

Corollary 3.3.8. Assume that K is a field. Let a ∈ K[[x]]. Then, the FPS a is invertible in K[[x]] if and only if [x^0] a ≠ 0.

Proof. An element of K is invertible in K if and only if it is nonzero (since K is a field). Hence, Corollary 3.3.8 follows from Proposition 3.3.7.

3.3.4. Newton’s binomial formula

Let us now return to considering specific FPSs. We have already seen that the FPS 1 − x is invertible, with inverse 1 + x + x^2 + x^3 + · · ·. We shall now show an analogous result for the FPS 1 + x. Its invertibility follows from Proposition 3.3.7, but it is better to derive it by hand, as this also gives a formula for the inverse:

Proposition 3.3.9. The FPS 1 + x ∈ K[[x]] is invertible, and its inverse is

(1 + x)^{−1} = 1 − x + x^2 − x^3 + x^4 − x^5 ± · · · = ∑_{n∈N} (−1)^n x^n.

First proof of Proposition 3.3.9. We have

(1 + x) · (1 − x + x^2 − x^3 + x^4 − x^5 ± · · ·)
= 1 · (1 − x + x^2 − x^3 + x^4 − x^5 ± · · ·) + x · (1 − x + x^2 − x^3 + x^4 − x^5 ± · · ·)
= (1 − x + x^2 − x^3 + x^4 − x^5 ± · · ·) + (x − x^2 + x^3 − x^4 + x^5 − x^6 ± · · ·)
= 1

(since all powers of x other than 1 cancel out). This shows that 1 − x + x^2 − x^3 + x^4 − x^5 ± · · · is an inverse of 1 + x (since K[[x]] is a commutative ring). Thus, 1 + x is invertible, and its inverse is (1 + x)^{−1} = 1 − x + x^2 − x^3 + x^4 − x^5 ± · · · = ∑_{n∈N} (−1)^n x^n. This proves Proposition 3.3.9.

Second proof of Proposition 3.3.9. We have

(1 + x) · (1 − x + x^2 − x^3 + x^4 − x^5 ± · · ·)
= (1 + x) · 1 − (1 + x) · x + (1 + x) · x^2 − (1 + x) · x^3 + (1 + x) · x^4 − (1 + x) · x^5 ± · · ·
= (1 + x) − (x + x^2) + (x^2 + x^3) − (x^3 + x^4) + (x^4 + x^5) − (x^5 + x^6) ± · · ·
= 1

(since we have a telescoping sum in front of us, in which all powers of x other than 1 cancel out). This shows that 1 − x + x^2 − x^3 + x^4 − x^5 ± · · · is an inverse of 1 + x (since K[[x]] is a commutative ring). Thus, 1 + x is invertible, and its inverse is (1 + x)^{−1} = 1 − x + x^2 − x^3 + x^4 − x^5 ± · · · = ∑_{n∈N} (−1)^n x^n. This proves Proposition 3.3.9.
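Both proofs boil down to the observation that multiplying the alternating series by 1 + x makes every power of x except the constant term cancel. A quick numerical sanity check (my own; the truncation order is chosen arbitrarily):

```python
# Multiply 1 + x by the truncated alternating series 1 - x + x^2 - ... + x^9,
# keeping only the coefficients of x^0, ..., x^9.
N = 10
alt = [(-1) ** n for n in range(N)]   # coefficients of the alternating series
prod = [0] * N
for i, ci in enumerate([1, 1]):       # coefficients of 1 + x
    for j, cj in enumerate(alt):
        if i + j < N:
            prod[i + j] += ci * cj
print(prod)  # [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```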

Proposition 3.3.9 shows that the FPS 1 + x is invertible; thus, its powers (1 + x)^n are defined for all n ∈ Z (by Definition 3.3.5 (c)). The following formula – known as Newton's binomial theorem^{16} – describes these powers explicitly:

Theorem 3.3.10. For each n ∈ Z, we have

(1 + x)^n = ∑_{k∈N} C(n, k) x^k.

The reader may want to check that the particular case n = −1 of Theorem 3.3.10 agrees with Proposition 3.3.9. (Recall Example 2.0.2!)

Of course, Theorem 3.3.10 should look familiar – an identical-looking for-mula appears in real analysis under the same name. However, the result in real analysis is concerned with infinite sums of real numbers, while our Theorem 3.3.10 is an identity between FPSs over an arbitrary commutative ring. Thus, the two facts are not the same.

We will prove Theorem 3.3.10 in a somewhat roundabout way, since this gives us an opportunity to establish some auxiliary results that are of separate interest (and usefulness). The first of these auxiliary results is a fundamental property of binomial coefficients, known as the upper negation formula (see, e.g., [19fco, Proposition 1.3.7]):

Theorem 3.3.11 (upper negation formula). For any n ∈ Q and any k ∈ Z, we have

C(−n, k) = (−1)^k C(n + k − 1, k).

Proof of Theorem 3.3.11 (sketched). If k < 0, then this is trivial because both C(−n, k) and (−1)^k C(n + k − 1, k) equal 0. Thus, assume that k ≥ 0. Then, the definition of binomial coefficients yields

C(−n, k) = (−n)(−n − 1) · · · (−n − k + 1) / k! and
(−1)^k C(n + k − 1, k) = (−1)^k (n + k − 1)(n + k − 2) · · · n / k!.

A moment of thought reveals that the right hand sides of these two equalities are equal (indeed, each of the k factors −n, −n − 1, . . . , −n − k + 1 is the negative of one of the k factors n, n + 1, . . . , n + k − 1, so the two products differ by a factor of (−1)^k). This proves Theorem 3.3.11.

16 or Newton's binomial formula
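The upper negation formula is easy to test numerically. The sketch below (mine, not from the text) defines the generalized binomial coefficient by the product formula used in the proof and checks the formula for a few small values:

```python
from fractions import Fraction
from math import factorial

def gbinom(m, k):
    """Generalized binomial coefficient C(m, k) = m(m-1)...(m-k+1) / k!
    for k >= 0 (and 0 for k < 0); m may be any rational number."""
    if k < 0:
        return Fraction(0)
    prod = Fraction(1)
    for i in range(k):
        prod *= Fraction(m) - i
    return prod / factorial(k)

# Upper negation: C(-n, k) = (-1)^k C(n + k - 1, k)
ok = all(gbinom(-n, k) == (-1) ** k * gbinom(n + k - 1, k)
         for n in range(6) for k in range(-2, 8))
print(ok)  # True
```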

(Quick exercise: Rederive Example 2.0.2 from Theorem 3.3.11.)

Next, we show a formula for negative powers of 1 + x:

Proposition 3.3.12. For each n ∈ N, we have

(1 + x)^{−n} = ∑_{k∈N} (−1)^k C(n + k − 1, k) x^k.

Proof of Proposition 3.3.12. We proceed by induction on n:

Induction base: Comparing (1 + x)^{−0} = (1 + x)^0 = 1 with

∑_{k∈N} (−1)^k C(0 + k − 1, k) x^k = (−1)^0 C(−1, 0) x^0 = 1

(since C(k − 1, k) = 0 for all k ≥ 1), we conclude that (1 + x)^{−0} = ∑_{k∈N} (−1)^k C(0 + k − 1, k) x^k. In other words, Proposition 3.3.12 holds for n = 0.

Induction step: Let j ∈ N. Assume that Proposition 3.3.12 holds for n = j. We must prove that Proposition 3.3.12 holds for n = j + 1.

We have assumed that Proposition 3.3.12 holds for n = j. In other words, we have

(1 + x)^{−j} = ∑_{k∈N} (−1)^k C(j + k − 1, k) x^k.

Now, we want to prove that Proposition 3.3.12 holds for n = j + 1. In other words, we want to prove that

(1 + x)^{−(j+1)} = ∑_{k∈N} (−1)^k C((j + 1) + k − 1, k) x^k.

In view of (1 + x)^{−(j+1)} = (1 + x)^{−j} · (1 + x)^{−1} and (j + 1) + k − 1 = j + k, this equality can be rewritten as

(1 + x)^{−j} · (1 + x)^{−1} = ∑_{k∈N} (−1)^k C(j + k, k) x^k.

Since 1 + x is invertible, we can equivalently transform this equality by multiplying both sides with 1 + x; thus, it becomes

(1 + x)^{−j} = (∑_{k∈N} (−1)^k C(j + k, k) x^k) · (1 + x).

So this is the equality we must prove.

We do this by simplifying its right hand side:

(∑_{k∈N} (−1)^k C(j + k, k) x^k) · (1 + x)
= ∑_{k∈N} (−1)^k C(j + k, k) x^k + ∑_{k∈N} (−1)^k C(j + k, k) x^{k+1}
= ∑_{k∈N} (−1)^k C(j + k, k) x^k − ∑_{k≥1} (−1)^k C(j + k − 1, k − 1) x^k
(here, we have substituted k − 1 for k in the second sum)
= 1 + ∑_{k≥1} (−1)^k (C(j + k, k) − C(j + k − 1, k − 1)) x^k
(since the two sums differ only in their k = 0 addend)
= 1 + ∑_{k≥1} (−1)^k C(j + k − 1, k) x^k
(by the recurrence of the binomial coefficients, which yields C(j + k, k) = C(j + k − 1, k − 1) + C(j + k − 1, k))
= ∑_{k∈N} (−1)^k C(j + k − 1, k) x^k (since the k = 0 addend of this sum is 1)
= (1 + x)^{−j} (by the induction hypothesis).

Thus, we have shown that (1 + x)^{−j} = (∑_{k∈N} (−1)^k C(j + k, k) x^k) · (1 + x). Multiplying both sides of this equality by (1 + x)^{−1}, we obtain

(1 + x)^{−(j+1)} = ∑_{k∈N} (−1)^k C((j + 1) + k − 1, k) x^k.

In other words, Proposition 3.3.12 holds for n = j+1. This completes the induction step. Thus, Proposition 3.3.12 is proved.
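Proposition 3.3.12 can be sanity-checked by multiplying the truncated series ∑ (−1)^k C(n+k−1, k) x^k by (1+x)^n and verifying that the product is 1 up to the truncation order (again my own check, not part of the text; `math.comb` restricts us to positive integers n):

```python
from math import comb

def check_prop(n, N=12):
    """Verify that (1+x)^n * sum_k (-1)^k C(n+k-1, k) x^k = 1
    modulo x^N, for a positive integer n."""
    series = [(-1) ** k * comb(n + k - 1, k) for k in range(N)]
    binom_row = [comb(n, i) for i in range(n + 1)]   # coefficients of (1+x)^n
    prod = [0] * N
    for i, ci in enumerate(binom_row):
        for j, cj in enumerate(series):
            if i + j < N:
                prod[i + j] += ci * cj
    return prod == [1] + [0] * (N - 1)

print(all(check_prop(n) for n in range(1, 6)))  # True
```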

We can rewrite Proposition 3.3.12 using negative binomial coefficients:

Corollary 3.3.13. For each n ∈ N, we have

(1 + x)^{−n} = ∑_{k∈N} C(−n, k) x^k.

Proof of Corollary 3.3.13. Proposition 3.3.12 yields

(1 + x)^{−n} = ∑_{k∈N} (−1)^k C(n + k − 1, k) x^k = ∑_{k∈N} C(−n, k) x^k

(since Theorem 3.3.11 yields (−1)^k C(n + k − 1, k) = C(−n, k) for each k ∈ N). This proves Corollary 3.3.13.

We can now easily prove Newton’s binomial formula:

Proof of Theorem 3.3.10. Let n ∈ Z. We must prove that (1 + x)^n = ∑_{k∈N} C(n, k) x^k.

If n ∈ N, then this follows by comparing

(1 + x)^n = (x + 1)^n = ∑_{k=0}^{n} C(n, k) x^k (by the binomial theorem)

with

∑_{k∈N} C(n, k) x^k = ∑_{k=0}^{n} C(n, k) x^k (since C(n, k) = 0 for all integers k > n).

Thus, for the rest of this proof, we assume that n ∉ N. Hence, n is a negative integer, so that −n ∈ N. Thus, Corollary 3.3.13 (applied to −n instead of n) yields

(1 + x)^{−(−n)} = ∑_{k∈N} C(−(−n), k) x^k = ∑_{k∈N} C(n, k) x^k.

In view of −(−n) = n, this rewrites as (1 + x)^n = ∑_{k∈N} C(n, k) x^k. This proves Theorem 3.3.10.

3.3.5. Dividing by x

Let us see how this all helps us justify our arguments in Section 3.1. Proposition 3.3.7 justifies the fractions that appear in (4), but it does not justify dividing by the FPS 2x in (10), since the constant term [x^0] (2x) = 0 is surely not invertible. And indeed, the FPS 2x is not invertible; the fraction 1/(2x) is not a well-defined FPS.

However, it is easy to see directly which FPSs can be divided by x (and thus by 2x, if K = Q), and what it means to divide them by x. In fact, Lemma 3.2.15 shows that multiplying an FPS by x means moving all its entries by one position to the right, and putting a 0 into the newly vacated starting position.

Thus, it is rather clear what dividing by x should be:

Definition 3.3.14. Let a = (a0, a1, a2, . . .) be an FPS whose constant term a0 is 0. Then, a/x is defined to be the FPS (a1, a2, a3, . . .).

The following is almost trivial:

Proposition 3.3.15. Let a ∈ K[[x]] and b ∈ K[[x]] be two FPSs. Then, a = xb holds if and only if the constant term of a is 0 and b = a/x.

Having defined a/x in Definition 3.3.14 (when a has constant term 0), we can also define a/(2x) when 2 is invertible in K (just set a/(2x) = (1/2) · (a/x)). Thus, the fraction (1 ± √(1 − 4x)) / (2x) in (10) makes sense when the ± sign is a − sign (but not when it is a + sign), at least if we interpret the square root √(1 − 4x) as ∑ . . .
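In terms of coefficient sequences, dividing by x (Definition 3.3.14) is just dropping the leading zero, and dividing by 2x additionally scales by 1/2. A minimal sketch over Q (the function names are my own):

```python
from fractions import Fraction

def divide_by_x(a):
    """a / x for a coefficient list a = [a0, a1, a2, ...] with a0 = 0:
    shift every entry one position to the left (Definition 3.3.14)."""
    assert a[0] == 0, "a/x is only defined when the constant term is 0"
    return a[1:]

def divide_by_2x(a):
    """a / (2x) = (1/2) * (a / x), usable since 2 is invertible in Q."""
    return [Fraction(c, 2) for c in divide_by_x(a)]

print(divide_by_x([0, 5, 7, 2]))   # [5, 7, 2]
print(divide_by_2x([0, 5, 7, 2]))  # the coefficients 5/2, 7/2, 1
```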

Let us use this occasion to state two simple lemmas (vaguely related to Definition 3.3.14) that will be used later on:

Lemma 3.3.16. Let a ∈ K[[x]] be an FPS with [x^0] a = 0. Then, there exists an h ∈ K[[x]] such that a = xh.

Proof of Lemma 3.3.16. The constant term of a is [x^0] a = 0; hence, a/x is well-defined. Moreover, it is easy to see that a = x · (a/x) ^{17}. Hence, there exists an h ∈ K[[x]] such that a = xh (namely, h = a/x). This proves Lemma 3.3.16.

Lemma 3.3.17. Let k ∈ N. Let a ∈ K[[x]] be any FPS. Then, the first k coefficients of the FPS x^k a are 0.

Proof of Lemma 3.3.17. We must show that [x^m] (x^k a) = 0 for any nonnegative integer m < k. But we can do this directly: If m is a nonnegative integer such that m < k, then (20) (applied to x^k, a and m instead of a, b and n) yields

[x^m] (x^k a) = ∑_{j=0}^{m} ([x^j] (x^k)) · ([x^{m−j}] a) = 0

(since [x^j] (x^k) = 0 for each j ∈ {0, 1, . . . , m}, because each such j satisfies j ≤ m < k and thus j ≠ k), which is exactly what we wanted to show. Thus, Lemma 3.3.17 is proved.

(Alternatively, we could prove Lemma 3.3.17 by writing a in the form a = (a0, a1, a2, . . .) and observing that

x^k a = (0, 0, . . . , 0, a0, a1, a2, . . .) (with k zeroes at the front).

This follows by applying Lemma 3.2.15 a total of k times, or more formally by induction on k.)
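In coefficient-list form, this description of multiplication by x^k is a one-line shift (my own sketch, mirroring the division-by-x shift in the other direction):

```python
def mul_xk(a, k):
    """Coefficients of x^k * a: prepend k zeroes, so that the first k
    coefficients of the product are 0 (as Lemma 3.3.17 asserts)."""
    return [0] * k + list(a)

print(mul_xk([4, 1, 3], 2))  # [0, 0, 4, 1, 3]
```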

Lemma 3.3.17 has a converse; here is a statement that combines it with this converse:

Lemma 3.3.18. Let k ∈ N. Let f ∈ K[[x]] be any FPS. Then, the first k coefficients of the FPS f are 0 if and only if f is a multiple of x^k.

Here, we use the following notation:

Definition 3.3.19. Let g ∈ K[[x]] be an FPS. Then, a multiple of g means an FPS of the form ga with a ∈ K[[x]].

(This is just a particular case of the usual concept of multiples in a commutative ring.)

Proof of Lemma 3.3.18. The statement we are proving is an "if and only if" statement. We shall prove its "only if" (i.e., "=⇒") and its "if" (i.e., "⇐=") directions separately:

=⇒: Assume that the first k coefficients of the FPS f are 0. We must show that f is a multiple of x^k.

Write f as f = (f0, f1, f2, . . .). Then, the first k coefficients of the FPS f are f0, f1, . . . , f_{k−1}. Hence, these k coefficients f0, f1, . . . , f_{k−1} are 0 (since we have assumed that the first k coefficients of the FPS f are 0). In other words, fn = 0 for each n ∈ {0, 1, . . . , k − 1}. Hence,

∑_{n=0}^{k−1} fn x^n = ∑_{n=0}^{k−1} 0 x^n = 0.

Now,

f = (f0, f1, f2, . . .) = ∑_{n∈N} fn x^n = ∑_{n=0}^{k−1} fn x^n + ∑_{n=k}^{∞} fn x^n
= ∑_{n=k}^{∞} fn x^n (since ∑_{n=0}^{k−1} fn x^n = 0)
= ∑_{n=k}^{∞} fn x^k x^{n−k} (since x^n = x^k x^{n−k} for each n ≥ k)
= x^k ∑_{n=k}^{∞} fn x^{n−k}.

In other words, f = x^k a for a = ∑_{n=k}^{∞} fn x^{n−k}. This shows that f is a multiple of x^k. Thus, the "=⇒" direction of Lemma 3.3.18 is proved.

⇐=: Assume that f is a multiple of x^k. In other words, f = x^k a for some a ∈ K[[x]]. Consider this a. Now, Lemma 3.3.17 yields that the first k coefficients of the FPS x^k a are 0. In other words, the first k coefficients of the FPS f are 0 (since f = x^k a). This proves the "⇐=" direction of Lemma 3.3.18.

The proof of Lemma 3.3.18 is now complete, as both directions have been proved.

Another lemma that will prove its usefulness much later concerns FPSs that are equal up until a certain coefficient. It says that if f and g are two FPSs whose first n + 1 coefficients agree (for a certain n ∈ N), then the same is true of the FPSs af and ag whenever a is any further FPS. In more detail:

Lemma 3.3.20. Let a, f, g ∈ K[[x]] be three FPSs. Let n ∈ N. Assume that

[x^m] f = [x^m] g for each m ∈ {0, 1, . . . , n}. (33)

Then,

[x^m] (af) = [x^m] (ag) for each m ∈ {0, 1, . . . , n}.

Proof of Lemma 3.3.20. Let m ∈ {0, 1, . . . , n}. Then, m ≤ n. Hence, each j ∈ {0, 1, . . . , m} satisfies j ≤ m ≤ n and thus j ∈ {0, 1, . . . , n} and therefore

[x^j] f = [x^j] g (34)

(by (33), applied to j instead of m). However, (21) (applied to m, a and f instead of n, a and b) yields

[x^m] (af) = ∑_{j=0}^{m} ([x^{m−j}] a) · ([x^j] f) = ∑_{j=0}^{m} ([x^{m−j}] a) · ([x^j] g) (by (34)).

On the other hand, (21) (applied to m, a and g instead of n, a and b) yields

[x^m] (ag) = ∑_{j=0}^{m} ([x^{m−j}] a) · ([x^j] g).

Comparing these two equalities, we obtain [x^m] (af) = [x^m] (ag). This proves Lemma 3.3.20.
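Lemma 3.3.20 says, in effect, that the first n + 1 coefficients of a product depend only on the first n + 1 coefficients of each factor. A small randomized check (my own illustration, with arbitrary sample data):

```python
import random

def mul_trunc(a, b, N):
    """First N coefficients of the product of two coefficient lists."""
    prod = [0] * N
    for i, ci in enumerate(a):
        for j, cj in enumerate(b):
            if i + j < N:
                prod[i + j] += ci * cj
    return prod

random.seed(0)
n = 4
a = [random.randint(-5, 5) for _ in range(10)]
f = [random.randint(-5, 5) for _ in range(10)]
# g agrees with f in the coefficients of x^0, ..., x^n but differs afterwards:
g = f[: n + 1] + [random.randint(-5, 5) for _ in range(5)]
print(mul_trunc(a, f, n + 1) == mul_trunc(a, g, n + 1))  # True
```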

A consequence of Lemma 3.3.20 is the following fact:

Lemma 3.3.21. Let u, v ∈ K[[x]] be two FPSs such that v is a multiple of u. Let n ∈ N. Assume that

[x^m] u = 0 for each m ∈ {0, 1, . . . , n}. (35)

Then,

[x^m] v = 0 for each m ∈ {0, 1, . . . , n}.

Proof of Lemma 3.3.21. We have assumed that v is a multiple of u. In other words, v = ua for some a ∈ K[[x]]. Consider this a.

For each m ∈ {0, 1, . . . , n}, we have

[x^m] u = 0 (by (35)) = [x^m] 0 (since the FPS 0 satisfies [x^m] 0 = 0).

Hence, Lemma 3.3.20 (applied to f = u and g = 0) yields that

[x^m] (au) = [x^m] (a · 0) for each m ∈ {0, 1, . . . , n}. (36)

Now, for each m ∈ {0, 1, . . . , n}, we have

[x^m] v = [x^m] (au) (since v = ua = au) = [x^m] (a · 0) (by (36)) = [x^m] 0 = 0 (since a · 0 = 0).

This proves Lemma 3.3.21.

We can derive a further useful consequence from Lemma 3.3.21:

Lemma 3.3.22. Let a, b, c, d ∈ K[[x]] be four FPSs. Let n ∈ N. Assume that

[x^m] a = [x^m] b for each m ∈ {0, 1, . . . , n}. (37)

Assume further that

[x^m] c = [x^m] d for each m ∈ {0, 1, . . . , n}. (38)

Then,

[x^m] (ac) = [x^m] (bd) for each m ∈ {0, 1, . . . , n}.

Proof of Lemma 3.3.22. For each m ∈ {0, 1, . . . , n}, we have

[x^m] (a − b) = [x^m] a − [x^m] b (by (19)) = 0 (by (37)).

Moreover, the FPS ac − bc is a multiple of a − b (since ac − bc = (a − b)c). Hence, Lemma 3.3.21 (applied to u = a − b and v = ac − bc) shows that

[x^m] (ac − bc) = 0 for each m ∈ {0, 1, . . . , n} (39)

(since we have [x^m] (a − b) = 0 for each m ∈ {0, 1, . . . , n}).

For each m ∈ {0, 1, . . . , n}, we have

[x^m] (c − d) = [x^m] c − [x^m] d (by (19)) = 0 (by (38)).

Moreover, the FPS bc − bd is a multiple of c − d (since bc − bd = b(c − d) = (c − d)b). Hence, Lemma 3.3.21 (applied to u = c − d and v = bc − bd) shows that

[x^m] (bc − bd) = 0 for each m ∈ {0, 1, . . . , n} (40)

(since we have [x^m] (c − d) = 0 for each m ∈ {0, 1, . . . , n}).

Now, let m ∈ {0, 1, . . . , n}. Then, (19) yields [x^m] (ac − bc) = [x^m] (ac) − [x^m] (bc). Comparing this with (39), we obtain [x^m] (ac) − [x^m] (bc) = 0. In other words, [x^m] (ac) = [x^m] (bc). On the other hand, (19) yields [x^m] (bc − bd) = [x^m] (bc) − [x^m] (bd). Comparing this with (40), we obtain [x^m] (bc) − [x^m] (bd) = 0. In other words, [x^m] (bc) = [x^m] (bd). Hence, [x^m] (ac) = [x^m] (bc) = [x^m] (bd). This proves Lemma 3.3.22.

3.4. Polynomials

3.4.1. Definition

Let us take a little side trip to relate FPSs to polynomials. As should be clear enough from the definitions, we can think of an FPS as a “polynomial with (potentially) infinitely many nonzero coefficients”. This can be easily made precise. Indeed, we can define polynomials as FPSs that have only finitely many nonzero coefficients:

Definition 3.4.1. (a) An FPS a ∈ K[[x]] is said to be a polynomial if all but finitely many n ∈ N satisfy [x^n] a = 0 (that is, if all but finitely many coefficients of a are 0).

(b) We let K[x] be the set of all polynomials a ∈ K[[x]]. This set K[x] is a subring of K[[x]] (according to Theorem 3.4.2 below), and is called the univariate polynomial ring over K.

For example, 2 + 3x + 7x^5 is a polynomial, whereas 1 + x + x^2 + x^3 + · · · is not (unless K is a trivial ring).

The definition of a "polynomial" that you have seen in your abstract algebra course might be superficially different from that in Definition 3.4.1; but it necessarily is equivalent. (In fact, Definition 3.4.1 (a) can be restated as "a polynomial means a K-linear combination of the monomials x^0, x^1, x^2, . . .", and it is clear that the monomials x^0, x^1, x^2, . . . in K[[x]] are K-linearly independent; thus, the polynomial ring K[x] as we have defined it in Definition 3.4.1 (b) is a free K-module with basis (x^0, x^1, x^2, . . .). The same is true for the polynomial ring K[x] that you know from abstract algebra. Moreover, the rules for adding, subtracting and multiplying polynomials known from abstract algebra agree with the formulas for a + b, a − b and a · b that we gave in Definition 3.2.5.)

We owe a theorem:

Theorem 3.4.2. The set K[x] is a subring of K[[x]] (that is, it is closed under addition, subtraction and multiplication, and contains the zero 0 and the unity 1) and is a K-subalgebra of K[[x]].

Proof of Theorem 3.4.2 (sketched). Exercise. (The hardest part is to show that K[x] is closed under multiplication. But this, too, is easy: Let a, b ∈ K[x]. Then, all but finitely many n ∈ N satisfy [x^n] a = 0 (since a ∈ K[x]). In other words, there exists a finite subset I of N such that

[x^i] a = 0 for all i ∈ N \ I. (41)

Similarly, there exists a finite subset J of N such that

[x^j] b = 0 for all j ∈ N \ J. (42)

Consider these I and J. Now, let S be the subset {i + j | i ∈ I and j ∈ J} of N. This set S is again finite (since I and J are finite), and we can easily see (using (20)) that

[x^n] (ab) = 0 for all n ∈ N \ S.

Thus, all but finitely many n ∈ N satisfy [x^n] (ab) = 0 (since S is finite). This shows that ab ∈ K[x]. Hence, we have shown that K[x] is closed under multiplication. The remaining claims of Theorem 3.4.2 are similar but easier.)

3.4.2. Evaluation

As we now know, polynomials are just a special case of FPSs. However, they have some features that FPSs don’t have in general. The most important of these features is substitution. To wit, we can substitute an element of K, or more generally an element of any K-algebra, into a polynomial (but generally not into an FPS). Before we explain how, let us recall the notions of rings and K-algebras:

Definition 3.4.3. The notion of a ring (also known as a noncommutative ring) is defined in the exact same way as we defined the notion of a commutative ring in Definition 3.2.1, except that the "Commutativity of multiplication" axiom is removed.

Examples of noncommutative rings^{18} abound in linear algebra:

• For any n ∈ N, the matrix ring R^{n×n} (that is, the ring of all n×n-matrices with real entries) is a ring. This ring is commutative if n ≤ 1, but not if n > 1.

More generally, if K is any ring (commutative or not), then the matrix ring K^{n×n} is a ring for every n ∈ N.

18 Note that the word "noncommutative ring" does not imply that the ring is not commutative; it merely means that commutativity is not required. Thus, any commutative ring is a noncommutative ring.

• The ring H of quaternions is a ring that is not commutative.

• If M is an abelian group, then the ring of all endomorphisms of M (that is, the ring of all Z-linear maps from M to M) is a noncommutative ring. (Its multiplication is composition of endomorphisms.)

Next, let us recall the notion of a K-algebra. Recall that K is a fixed commutative ring.

Definition 3.4.4. A K-algebra is a set A equipped with four maps

⊕ : A × A → A,
⊖ : A × A → A,
⊙ : A × A → A,
* : K × A → A

and two elements 0⃗ ∈ A and 1⃗ ∈ A satisfying the following properties:

1. The set A, equipped with the maps ⊕, ⊖ and ⊙ and the two elements 0⃗ and 1⃗, is a (noncommutative) ring.

2. The set A, equipped with the maps ⊕, ⊖ and * and the element 0⃗, is a K-module.

3. We have

λ * (a ⊙ b) = (λ * a) ⊙ b = a ⊙ (λ * b) (43)

for all λ ∈ K and a, b ∈ A.

(Thus, in a nutshell, a K-algebra is a set A that is simultaneously a ring and a K-module, with the property that the ring A and the K-module A have the same addition, the same subtraction and the same zero, and satisfy the additional compatibility property (43).)

Consequently, a K-algebra is automatically a ring and a K-module. Thus, all the notations and shorthands that we have introduced for rings and for K-modules will also be used for K-algebras. For example, if A is a K-algebra, then both maps ⊙ : A × A → A and * : K × A → A will be denoted by · unless there is a risk of confusion. (There is rarely a risk of confusion, since the two maps act on different inputs: a · b means a ⊙ b if a belongs to A, and means a * b if a belongs to K. Often, even when an element a belongs to both A and K, the elements a ⊙ b and a * b are equal, so confusion cannot arise.)

Examples of K-algebras include:

• the ring K itself;

• the ring K[[x]] of FPSs (we have defined the relevant maps in Definition

