$$\begin{aligned}
\left\langle x^{0}\right\rangle \frac{f}{g} \cdot \underbrace{\left\langle x^{0}\right\rangle g}_{=1}
&= \left\langle x^{0}\right\rangle \left( \frac{f}{g} \cdot g \right) \qquad \text{(by (22))} \\
&= \left\langle x^{0}\right\rangle f,
\end{aligned}$$

and thus $\left\langle x^{0}\right\rangle \frac{f}{g} = \left\langle x^{0}\right\rangle f = 1$, so that $\frac{f}{g} \in K[[x]]_1$. This shows that $K[[x]]_1$ is closed under division. Thus, Proposition 3.7.10 (b) is proven.

The two groups in Proposition 3.7.10 can now be connected through Exp and Log:

Theorem 3.7.11. The maps
$$\operatorname{Exp} : \left(K[[x]]_0, +, 0\right) \to \left(K[[x]]_1, \cdot, 1\right) \qquad \text{and} \qquad \operatorname{Log} : \left(K[[x]]_1, \cdot, 1\right) \to \left(K[[x]]_0, +, 0\right)$$
are mutually inverse group isomorphisms.

Proof of Theorem 3.7.11 (sketched). Lemma 3.7.9 yields that these two maps are group homomorphisms²⁵. Lemma 3.7.8 shows that they are mutually inverse.

Combining these results, we conclude that these two maps are mutually inverse group isomorphisms. This proves Theorem 3.7.11.

Theorem 3.7.11 helps us turn addition into multiplication and vice versa when it comes to FPSs, at least if the constant terms are the right ones. This will come in useful rather soon.
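For readers who like to experiment, here is a minimal computational sketch of Theorem 3.7.11. It is not part of the text; all names (the truncation order N, the helpers fps_mul, fps_exp, fps_log, the sample coefficients) are ad-hoc choices. It models an FPS by its first N coefficients and checks, up to that order, that Exp turns addition into multiplication and that Log undoes Exp.

```python
from fractions import Fraction
from math import factorial

N = 8  # we keep the coefficients of x^0, ..., x^(N-1)

def fps_mul(a, b):
    """Product of two truncated FPSs (lists of N coefficients)."""
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(N)]

def fps_pow(a, m):
    """m-th power (m a nonnegative integer) of a truncated FPS."""
    result = [Fraction(1)] + [Fraction(0)] * (N - 1)
    for _ in range(m):
        result = fps_mul(result, a)
    return result

def fps_exp(a):
    """Exp of a truncated FPS with constant term 0:  sum_m a^m / m!."""
    assert a[0] == 0
    total = [Fraction(0)] * N
    for m in range(N):  # a^m starts at x^m, so m >= N contributes nothing here
        for k, coeff in enumerate(fps_pow(a, m)):
            total[k] += coeff / factorial(m)
    return total

def fps_log(f):
    """Log of a truncated FPS with constant term 1:  sum_{m>=1} (-1)^(m-1)/m (f-1)^m."""
    assert f[0] == 1
    g = [Fraction(0)] + f[1:]  # g = f - 1
    total = [Fraction(0)] * N
    for m in range(1, N):
        for k, coeff in enumerate(fps_pow(g, m)):
            total[k] += Fraction((-1) ** (m - 1), m) * coeff
    return total

# two FPSs with constant term 0 (the coefficients are arbitrary)
u = [Fraction(0), Fraction(1), Fraction(-2), Fraction(3, 5)] + [Fraction(0)] * (N - 4)
v = [Fraction(0), Fraction(2), Fraction(1, 3), Fraction(0), Fraction(7)] + [Fraction(0)] * (N - 5)

assert fps_exp([a + b for a, b in zip(u, v)]) == fps_mul(fps_exp(u), fps_exp(v))
assert fps_log(fps_exp(u)) == u
print("Exp(u + v) = Exp(u) Exp(v)  and  Log(Exp(u)) = u,  up to x^" + str(N - 1))
```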

3.8. Non-integer powers

3.8.1. Definition

Now, let us again recall Example 2 from Section 3.1. In order to fully justify that example, we still need to explain what $\sqrt{1-4x}$ is.

More generally, let us try to define non-integer powers of FPSs (since square roots are just 1/2-th powers). Thus, we are trying to solve the following problem:

²⁵Here, we are using the following fact: If $(G, \ast, e_G)$ and $(H, \ast, e_H)$ are any two groups, and if $\Phi : G \to H$ is a map such that every $f, g \in G$ satisfy $\Phi(f \ast g) = \Phi(f) \ast \Phi(g)$, then $\Phi$ is a group homomorphism.

Problem: Devise a reasonable definition of the $c$-th power $f^c$ for any FPS $f \in K[[x]]$ and any $c \in K$.

Here, “reasonable” means that it should have some of the properties we would expect:

• It should not conflict with the existing notion of $f^c$ for $c \in \mathbb{N}$. That is, if $c \in \mathbb{N}$, then our new definition of $f^c$ should yield the same result as the existing meaning that $f^c$ has in this case (namely, $\underbrace{f f \cdots f}_{c \text{ times}}$). The same should hold for $c \in \mathbb{Z}$ when $f$ is invertible.

• Rules of exponents should hold: i.e., we should have
$$f^{a+b} = f^a f^b, \qquad (fg)^a = f^a g^a, \qquad \left(f^a\right)^b = f^{ab} \tag{73}$$
for all $a, b \in K$ and $f, g \in K[[x]]$.

• For any positive integer $n$ and any FPS $f \in K[[x]]$, the $1/n$-th power $f^{1/n}$ should be an $n$-th root of $f$ (that is, an FPS whose $n$-th power is $f$). (This actually follows from the previous two properties, since we can apply the rule $\left(f^a\right)^b = f^{ab}$ to $a = 1/n$ and $b = n$.)

Clearly, we cannot solve the above problem in full generality:

• The power $0^{-1}$ cannot be reasonably defined (unless $K$ is trivial). Indeed, $0^{-1} \cdot 0^{1}$ would have to equal $0^{-1+1} = 0^{0} = 1$, but this would contradict $0^{-1} \cdot 0^{1} = 0^{-1} \cdot 0 = 0$.

• The power $x^{1/2}$ cannot be reasonably defined either (unless $K$ is trivial). Indeed, there is no FPS whose square is $x$. This will be proved in Exercise A.2.8.1 (a).

• Even the power $(-1)^{1/2}$ cannot always be defined: There is no guarantee that $K$ contains a square root of $-1$ (and if $K$ does not, then it is easy to see that neither does $K[[x]]$).

However, all we want is to make sense of $\sqrt{1-4x}$, so let us restrict ourselves to FPSs whose constant term is 1. Using the notation from Definition 3.7.6 (b), we are thus moving on to the following problem:

More realistic problem: Devise a reasonable definition of the $c$-th power $f^c$ for any FPS $f \in K[[x]]_1$ and any $c \in K$.

Besides imposing the above wishlist of properties, we want this $c$-th power $f^c$ itself to belong to $K[[x]]_1$, since otherwise the iterated power $\left(f^a\right)^b$ in our rules of exponents might be undefined.

It turns out that this is still too much to ask. Indeed, if $K = \mathbb{Z}/2$, then the FPS $1+x \in K[[x]]_1$ has no square root (you get to prove this in Exercise A.2.8.1 (c)), so its $1/2$-th power $(1+x)^{1/2}$ cannot be reasonably defined.
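As a quick sanity check of this last claim (not part of the text; the truncation order D and the helper square_mod2 are ad-hoc choices), one can verify by brute force that no truncated FPS over $\mathbb{Z}/2$ squares to $1+x$:

```python
# Brute force over Z/2: no polynomial of degree < D squares to 1 + x modulo x^D.
from itertools import product

D = 6  # truncation order

def square_mod2(coeffs):
    """Coefficients of (sum a_i x^i)^2 modulo 2, truncated to degree < D."""
    sq = [0] * D
    for i, a in enumerate(coeffs):
        for j, b in enumerate(coeffs):
            if i + j < D:
                sq[i + j] = (sq[i + j] + a * b) % 2
    return sq

target = [1, 1] + [0] * (D - 2)  # the FPS 1 + x, truncated
assert all(square_mod2(list(c)) != target for c in product([0, 1], repeat=D))
print("no square root of 1 + x over Z/2, up to degree", D - 1)
```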

However, if we assume (as in Convention 3.7.1) thatK is a commutative Q-algebra, then we get lucky: Our “more realistic problem” can be solved in (at least) two ways:

1st solution: We define
$$(1+x)^c := \sum_{k \in \mathbb{N}} \binom{c}{k} x^k \qquad \text{for each } c \in K,$$
in order to make Newton's binomial formula (Theorem 3.3.10) hold for arbitrary exponents²⁶. Subsequently, we define
$$f^c := (1+x)^c \left[ f - 1 \right] \qquad \text{for any } f \in K[[x]]_1 \text{ and } c \in K \tag{74}$$
(in order to have $(1+g)^c = \sum_{k \in \mathbb{N}} \binom{c}{k} g^k$ hold not only for $g = x$, but also for all $g \in K[[x]]_0$).

It is clear that the FPS $f^c$ is well-defined in this way. However, proving that this definition satisfies our entire wishlist (particularly the rules of exponents (73)) is highly nontrivial. Some of this is done in [Loehr11, §7.12], but it is still a lot of work.
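For concreteness, here is a small sympy sketch of this 1st solution (it is only an illustration, not the text's construction; the name power_via_composition and the truncation order N are ad-hoc choices): the binomial series for $(1+x)^c$ is computed up to order N, and then $f - 1$ is substituted into it, as in (74).

```python
import sympy as sp

x = sp.symbols('x')
N = 8  # truncation order

def power_via_composition(f, c):
    """f^c in the sense of (74): the binomial series for (1+x)^c, evaluated at f - 1."""
    binomial_series = sum(sp.binomial(c, k) * x**k for k in range(N))
    return sp.series(binomial_series.subs(x, sp.expand(f - 1)), x, 0, N).removeO()

f = 1 - 4*x                                   # constant term 1, so f - 1 lies in K[[x]]_0
half_power = power_via_composition(f, sp.Rational(1, 2))
square = sp.series(half_power**2, x, 0, N).removeO()
assert sp.expand(square - f) == 0             # its square is indeed 1 - 4x (up to x^N)
print(half_power)                             # 1 - 2*x - 2*x**2 - 4*x**3 - 10*x**4 - ...
```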

Thus, we shall discard this definition of $f^c$, and instead take a different route:

2nd solution: Recall the mutually inverse group isomorphisms
$$\operatorname{Exp} : \left(K[[x]]_0, +, 0\right) \to \left(K[[x]]_1, \cdot, 1\right) \qquad \text{and} \qquad \operatorname{Log} : \left(K[[x]]_1, \cdot, 1\right) \to \left(K[[x]]_0, +, 0\right)$$
from Theorem 3.7.11. Thus, for any $f \in K[[x]]_1$ and any $c \in \mathbb{Z}$, the equation
$$\operatorname{Log}\left(f^c\right) = c \operatorname{Log} f$$
holds (since Log is a group homomorphism). This suggests that we define $f^c$ for all $c \in K$ by the same equation. In other words, we define $f^c$ for all $c \in K$ by setting $f^c = \operatorname{Exp}(c \operatorname{Log} f)$ (since the map Exp is inverse to Log). And this is what we shall do now:

²⁶Note that $\binom{c}{k} = \dfrac{c(c-1)(c-2)\cdots(c-k+1)}{k!}$ is well-defined since $K$ is a commutative $\mathbb{Q}$-algebra.

Definition 3.8.1. Assume that $K$ is a commutative $\mathbb{Q}$-algebra. Let $f \in K[[x]]_1$ and $c \in K$. Then, we define an FPS
$$f^c := \operatorname{Exp}(c \operatorname{Log} f) \in K[[x]]_1.$$

This definition of $f^c$ does not conflict with our original definition of $f^c$ when $c \in \mathbb{Z}$, because (as we said) the original definition of $f^c$ already satisfies $\operatorname{Log}(f^c) = c \operatorname{Log} f$ and therefore $f^c = \operatorname{Exp}(c \operatorname{Log} f)$.

Moreover, Definition 3.8.1 makes the rules of exponents hold:

Theorem 3.8.2. Assume that $K$ is a commutative $\mathbb{Q}$-algebra. For any $a, b \in K$ and $f, g \in K[[x]]_1$, we have
$$f^{a+b} = f^a f^b, \qquad (fg)^a = f^a g^a, \qquad \left(f^a\right)^b = f^{ab}.$$

Proof. Easy exercise (Exercise A.2.8.2).
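Here is a small sympy sketch (not part of the text; the helper fps_power and the particular series f, g and exponents a, b are arbitrary choices) that realizes $f^c$ as a truncated series for $\operatorname{Exp}(c \operatorname{Log} f)$ and spot-checks two of the exponent rules up to order N:

```python
import sympy as sp

x = sp.symbols('x')
N = 8  # truncation order

def fps_power(f, c):
    """Truncated series for f^c := Exp(c Log f), where f has constant term 1."""
    return sp.series(sp.exp(c * sp.log(f)), x, 0, N).removeO()

f = 1 + x + 3*x**2                      # two FPSs with constant term 1
g = 1 - 2*x + sp.Rational(1, 4)*x**3
a, b = sp.Rational(1, 2), sp.Rational(-3, 2)

# f^(a+b) = f^a f^b
lhs = sp.series(fps_power(f, a) * fps_power(f, b), x, 0, N).removeO()
assert sp.expand(lhs - fps_power(f, a + b)) == 0

# (fg)^a = f^a g^a
lhs = fps_power(sp.expand(f * g), a)
rhs = sp.series(fps_power(f, a) * fps_power(g, a), x, 0, N).removeO()
assert sp.expand(lhs - rhs) == 0

print("exponent rules verified up to x^" + str(N - 1))
```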

Now, let us return to Example 2 from Section 3.1. In that example, we had to solve the quadratic equation
$$C(x) = 1 + x \left(C(x)\right)^2$$
for an FPS $C(x) \in \mathbb{Q}[[x]]$. Let us write $C$ for $C(x)$; thus, this quadratic equation becomes
$$C = 1 + xC^2.$$
By completing the square, we can rewrite this equation in the equivalent form
$$(1-2xC)^2 = 1-4x.$$

Taking both sides of this equation to the $1/2$-th power, we obtain
$$\left((1-2xC)^2\right)^{1/2} = (1-4x)^{1/2}$$
(since both sides are FPSs with constant term 1). However, the FPS $1-2xC$ has constant term 1; thus, the rules of exponents yield
$$\left((1-2xC)^2\right)^{1/2} = (1-2xC)^{2 \cdot 1/2} = 1-2xC.$$
Hence,
$$1-2xC = \left((1-2xC)^2\right)^{1/2} = (1-4x)^{1/2}.$$
This is a linear equation in $C$; solving it for $C$ yields
$$C = \frac{1}{2x}\left(1-(1-4x)^{1/2}\right).$$
This is precisely the "square-root" expression for $C = C(x)$ that we have obtained back in Section 3.1, but now we have proved it rigorously.
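As a quick numerical check of this formula (sympy; not part of the text, and the truncation order 9 is an arbitrary choice), the coefficients of $\frac{1}{2x}\left(1-(1-4x)^{1/2}\right)$ should be the Catalan numbers $1, 1, 2, 5, 14, 42, \ldots$ from Example 2:

```python
import sympy as sp

x = sp.symbols('x')
root = sp.series(sp.sqrt(1 - 4*x), x, 0, 9).removeO()   # truncated (1 - 4x)^(1/2)
C = sp.expand((1 - root) / (2*x))                        # truncated series for C(x)
catalan = [sp.binomial(2*k, k) / (k + 1) for k in range(7)]
assert all(C.coeff(x, k) == catalan[k] for k in range(7))
print([C.coeff(x, k) for k in range(7)])                 # [1, 1, 2, 5, 14, 42, 132]
```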

3.8.2. The Newton binomial formula for arbitrary exponents

Is Example 2 from Section 3.1 fully justified now? No, because we still need to prove the identity (12) that we used back there. Since we are defining powers in the 2nd way (i.e., using Definition 3.8.1 rather than using (74)), it is not immediately obvious. Nevertheless, it can be proved. More generally, we can prove the following:

Theorem 3.8.3 (Generalized Newton binomial formula). Assume that $K$ is a commutative $\mathbb{Q}$-algebra. Let $c \in K$. Then,
$$(1+x)^c = \sum_{k \in \mathbb{N}} \binom{c}{k} x^k.$$
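Before proving this, here is a quick sympy spot-check (not part of the text; the exponent $c = 1/2$ and the truncation order are arbitrary choices): the series of $\operatorname{Exp}(c \operatorname{Log}(1+x))$ agrees with $\sum_k \binom{c}{k} x^k$ coefficient by coefficient.

```python
import sympy as sp

x = sp.symbols('x')
c, N = sp.Rational(1, 2), 10

lhs = sp.series(sp.exp(c * sp.log(1 + x)), x, 0, N).removeO()   # (1+x)^c via Definition 3.8.1
rhs = sum(sp.binomial(c, k) * x**k for k in range(N))           # the binomial series
assert sp.expand(lhs - rhs) == 0
print(rhs)   # 1 + x/2 - x**2/8 + x**3/16 - 5*x**4/128 + ...
```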

The following proof illustrates a technique that will probably appear preposterous if you are seeing it for the first time, but is in fact both legitimate and rather useful.

Proof of Theorem 3.8.3 (sketched). The definition of Log yields
$$\operatorname{Log}(1+x) = \log \circ \underbrace{\left((1+x) - 1\right)}_{=x} = \log \circ x = \log$$
(by Proposition 3.5.4 (g), applied to $g = \log$).

Now, let us obstinately compute $(1+x)^c$ using Definition 3.8.1 and the definitions of Exp and Log. To wit: Let $\mathbb{P}$ denote the set $\{1, 2, 3, \ldots\}$. By Definition 3.8.1, we have
$$\begin{aligned}
(1+x)^c &= \operatorname{Exp}\left(c \operatorname{Log}(1+x)\right) = \operatorname{Exp}(c \log) \qquad \left(\text{since } \operatorname{Log}(1+x) = \log\right) \\
&= \exp \circ (c \log) \qquad \left(\text{by the definition of Exp}\right) \\
&= \exp \circ \left( c \sum_{n \geq 1} \frac{(-1)^{n-1}}{n} x^n \right) \qquad \left(\text{since } \log = \sum_{n \geq 1} \frac{(-1)^{n-1}}{n} x^n \right) \\
&= \exp \circ \left( \sum_{n \geq 1} \frac{(-1)^{n-1}}{n} c x^n \right) \\
&= \sum_{m \in \mathbb{N}} \frac{1}{m!} \left( \sum_{n \geq 1} \frac{(-1)^{n-1}}{n} c x^n \right)^m
\end{aligned} \tag{75}$$
(by Definition 3.5.1, since $\exp = \sum_{n \in \mathbb{N}} \frac{1}{n!} x^n = \sum_{m \in \mathbb{N}} \frac{1}{m!} x^m$).

Now, fix $m \in \mathbb{N}$. We shall expand $\left( \sum_{n \geq 1} \frac{(-1)^{n-1}}{n} c x^n \right)^m$. Indeed, we can replace the "$\sum_{n \geq 1}$" sign by a "$\sum_{n \in \mathbb{P}}$" sign, since $\mathbb{P} = \{1, 2, 3, \ldots\}$. Thus,
$$\begin{aligned}
\left( \sum_{n \geq 1} \frac{(-1)^{n-1}}{n} c x^n \right)^m
&= \left( \sum_{n \in \mathbb{P}} \frac{(-1)^{n-1}}{n} c x^n \right)^m \\
&= \underbrace{\left( \sum_{n \in \mathbb{P}} \frac{(-1)^{n-1}}{n} c x^n \right) \left( \sum_{n \in \mathbb{P}} \frac{(-1)^{n-1}}{n} c x^n \right) \cdots \left( \sum_{n \in \mathbb{P}} \frac{(-1)^{n-1}}{n} c x^n \right)}_{m \text{ times}} \\
&= \left( \sum_{n_1 \in \mathbb{P}} \frac{(-1)^{n_1-1}}{n_1} c x^{n_1} \right) \left( \sum_{n_2 \in \mathbb{P}} \frac{(-1)^{n_2-1}}{n_2} c x^{n_2} \right) \cdots \left( \sum_{n_m \in \mathbb{P}} \frac{(-1)^{n_m-1}}{n_m} c x^{n_m} \right) \\
&\qquad \left(\text{here, we have renamed the summation indices}\right) \\
&= \sum_{(n_1, n_2, \ldots, n_m) \in \mathbb{P}^m} \left( \frac{(-1)^{n_1-1}}{n_1} c x^{n_1} \right) \left( \frac{(-1)^{n_2-1}}{n_2} c x^{n_2} \right) \cdots \left( \frac{(-1)^{n_m-1}}{n_m} c x^{n_m} \right)
\end{aligned}$$

(by a product rule for the product of $m$ sums²⁷). Multiplying out each addend (and using $(-1)^{n_1-1} (-1)^{n_2-1} \cdots (-1)^{n_m-1} = (-1)^{n_1+n_2+\cdots+n_m-m}$), we thus obtain
$$\left( \sum_{n \geq 1} \frac{(-1)^{n-1}}{n} c x^n \right)^m = \sum_{(n_1, n_2, \ldots, n_m) \in \mathbb{P}^m} \frac{(-1)^{n_1+n_2+\cdots+n_m-m}}{n_1 n_2 \cdots n_m} c^m x^{n_1+n_2+\cdots+n_m}. \tag{76}$$

Now, forget that we fixed $m$. We thus have proved (76) for each $m \in \mathbb{N}$.

Now, (75) becomes
$$(1+x)^c = \sum_{k \in \mathbb{N}} \left( \sum_{m \in \mathbb{N}} \; \sum_{\substack{(n_1, n_2, \ldots, n_m) \in \mathbb{P}^m; \\ n_1+n_2+\cdots+n_m = k}} \frac{1}{m!} \cdot \frac{(-1)^{n_1+n_2+\cdots+n_m-m}}{n_1 n_2 \cdots n_m} c^m \right) x^k. \tag{77}$$

²⁷This product rule says that
$$\left( \sum_{n_1 \in A_1} a^{(1)}_{n_1} \right) \left( \sum_{n_2 \in A_2} a^{(2)}_{n_2} \right) \cdots \left( \sum_{n_m \in A_m} a^{(m)}_{n_m} \right) = \sum_{(n_1, n_2, \ldots, n_m) \in A_1 \times A_2 \times \cdots \times A_m} a^{(1)}_{n_1} a^{(2)}_{n_2} \cdots a^{(m)}_{n_m},$$
provided that all the families of addends on the left hand side of this equality are summable. We leave it to the reader to convince himself of this rule (intuitively, it just says that we can expand a product of sums in the usual way, even when the sums are infinite) and to check that the sums we are applying it to are indeed summable.

Now, let $k \in \mathbb{N}$. Let us rewrite the "middle sum" $\sum_{m \in \mathbb{N}} \sum_{\substack{(n_1, n_2, \ldots, n_m) \in \mathbb{P}^m; \\ n_1+n_2+\cdots+n_m = k}} \frac{1}{m!} \cdot \frac{(-1)^{n_1+n_2+\cdots+n_m-m}}{n_1 n_2 \cdots n_m} c^m$ on the right hand side as a finite sum. Indeed, a composition of $k$ shall mean a tuple $(n_1, n_2, \ldots, n_m)$ of positive integers satisfying $n_1+n_2+\cdots+n_m = k$. (For example, $(1, 3, 1)$ is a composition of 5. We will study compositions in more detail in Section 3.9.) Let $\operatorname{Comp}(k)$ denote the set of all compositions of $k$. It is easy to see that this set $\operatorname{Comp}(k)$ is finite²⁸. Now, we can rewrite the double summation sign "$\sum_{m \in \mathbb{N}} \sum_{\substack{(n_1, n_2, \ldots, n_m) \in \mathbb{P}^m; \\ n_1+n_2+\cdots+n_m = k}}$" as a single summation sign "$\sum_{(n_1, n_2, \ldots, n_m) \in \operatorname{Comp}(k)}$" (since $\operatorname{Comp}(k)$ is precisely the set of all tuples $(n_1, n_2, \ldots, n_m) \in \mathbb{P}^m$ satisfying $n_1+n_2+\cdots+n_m = k$). Hence, we obtain
$$\sum_{m \in \mathbb{N}} \; \sum_{\substack{(n_1, n_2, \ldots, n_m) \in \mathbb{P}^m; \\ n_1+n_2+\cdots+n_m = k}} \frac{1}{m!} \cdot \frac{(-1)^{n_1+n_2+\cdots+n_m-m}}{n_1 n_2 \cdots n_m} c^m = \sum_{(n_1, n_2, \ldots, n_m) \in \operatorname{Comp}(k)} \frac{1}{m!} \cdot \frac{(-1)^{n_1+n_2+\cdots+n_m-m}}{n_1 n_2 \cdots n_m} c^m. \tag{78}$$

Forget that we fixed $k$. Thus, for each $k \in \mathbb{N}$, we have defined a finite set $\operatorname{Comp}(k)$ and shown that (78) holds.

²⁸Proof. Let $(n_1, n_2, \ldots, n_m) \in \operatorname{Comp}(k)$. Thus, $(n_1, n_2, \ldots, n_m)$ is a composition of $k$. In other words, $(n_1, n_2, \ldots, n_m)$ is a finite tuple of positive integers satisfying $n_1+n_2+\cdots+n_m = k$.

Hence, all its $m$ entries $n_1, n_2, \ldots, n_m$ are positive integers and thus are $\geq 1$; therefore, $n_1+n_2+\cdots+n_m \geq \underbrace{1+1+\cdots+1}_{m \text{ times}} = m$, so that $m \leq n_1+n_2+\cdots+n_m = k$. Thus, $m \in \{0, 1, \ldots, k\}$.

Furthermore, the sum $n_1+n_2+\cdots+n_m$ is $\geq$ each of its $m$ addends (since its $m$ addends $n_1, n_2, \ldots, n_m$ are positive). In other words, we have $n_1+n_2+\cdots+n_m \geq n_i$ for each $i \in \{1, 2, \ldots, m\}$. Thus, for each $i \in \{1, 2, \ldots, m\}$, we have $n_i \leq n_1+n_2+\cdots+n_m = k$ and therefore $n_i \in \{1, 2, \ldots, k\}$ (since $n_i$ is a positive integer). Hence,
$$(n_1, n_2, \ldots, n_m) \in \{1, 2, \ldots, k\}^m \subseteq \bigcup_{\ell \in \{0, 1, \ldots, k\}} \{1, 2, \ldots, k\}^{\ell}$$
(since $m \in \{0, 1, \ldots, k\}$).

Now, forget that we fixed $(n_1, n_2, \ldots, n_m)$. We thus have shown that $(n_1, n_2, \ldots, n_m) \in \bigcup_{\ell \in \{0, 1, \ldots, k\}} \{1, 2, \ldots, k\}^{\ell}$ for each $(n_1, n_2, \ldots, n_m) \in \operatorname{Comp}(k)$. In other words, $\operatorname{Comp}(k) \subseteq \bigcup_{\ell \in \{0, 1, \ldots, k\}} \{1, 2, \ldots, k\}^{\ell}$. Since the set $\bigcup_{\ell \in \{0, 1, \ldots, k\}} \{1, 2, \ldots, k\}^{\ell}$ is clearly finite (having size $\sum_{\ell \in \{0, 1, \ldots, k\}} k^{\ell}$), this entails that the set $\operatorname{Comp}(k)$ is finite as well, qed.

(Incidentally, we will see in Section 3.9 that this set $\operatorname{Comp}(k)$ has size $2^{k-1}$ for $k \geq 1$, and size 1 for $k = 0$.)

Using (78), we can rewrite (77) as
$$(1+x)^c = \sum_{k \in \mathbb{N}} \left( \sum_{(n_1, n_2, \ldots, n_m) \in \operatorname{Comp}(k)} \frac{1}{m!} \cdot \frac{(-1)^{n_1+n_2+\cdots+n_m-m}}{n_1 n_2 \cdots n_m} c^m \right) x^k. \tag{79}$$
Now, recall that our goal is to prove that this equals $\sum_{k \in \mathbb{N}} \binom{c}{k} x^k$. This is equivalent to proving that the equality
$$\sum_{(n_1, n_2, \ldots, n_m) \in \operatorname{Comp}(k)} \frac{1}{m!} \cdot \frac{(-1)^{n_1+n_2+\cdots+n_m-m}}{n_1 n_2 \cdots n_m} c^m = \binom{c}{k} \tag{80}$$
holds for each $k \in \mathbb{N}$ (because two FPSs $u = \sum_{k \in \mathbb{N}} u_k x^k$ and $v = \sum_{k \in \mathbb{N}} v_k x^k$ (with $u_k \in K$ and $v_k \in K$) are equal if and only if the equality $u_k = v_k$ holds for each $k \in \mathbb{N}$).

Thus, we have reduced our original goal (which was to prove $(1+x)^c = \sum_{k \in \mathbb{N}} \binom{c}{k} x^k$) to the auxiliary goal of proving the equality (80) for each $k \in \mathbb{N}$.
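As a reality check (plain Python, not part of the proof; the helpers compositions and gen_binom are ad-hoc names), one can verify the equality (80) numerically for small k and a few rational values of c, and at the same time confirm the size of Comp(k) mentioned in the footnote:

```python
from fractions import Fraction
from math import factorial, prod

def compositions(k):
    """All tuples of positive integers with sum k, i.e. the set Comp(k)."""
    if k == 0:
        return [()]
    return [(first,) + rest
            for first in range(1, k + 1)
            for rest in compositions(k - first)]

def gen_binom(c, k):
    """Generalized binomial coefficient c(c-1)...(c-k+1)/k!."""
    result = Fraction(1)
    for j in range(k):
        result *= c - j
    return result / factorial(k)

for k in range(1, 7):
    assert len(compositions(k)) == 2 ** (k - 1)   # |Comp(k)| = 2^(k-1) for k >= 1

for c in (Fraction(1, 2), Fraction(-3, 4), Fraction(5)):
    for k in range(7):
        lhs = sum(
            Fraction((-1) ** (sum(comp) - len(comp)),
                     factorial(len(comp)) * prod(comp, start=1)) * c ** len(comp)
            for comp in compositions(k)
        )
        assert lhs == gen_binom(c, k)             # this is exactly the equality (80)

print("equality (80) verified for k = 0, ..., 6")
```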

However, this doesn’t look very useful, since (80) is too messy an equality to have a simple proof. We are seemingly stuck.

However, it turns out that we are almost there – we just need to take a bird's eye view. Here is the plan: We fix $k \in \mathbb{N}$. Instead of trying to prove the equality (80) directly, we observe that both sides of this equality are polynomials (with rational coefficients) in $c$. (Indeed, the left hand side is clearly a polynomial in $c$, since it is a finite sum of "rational number times a power of $c$" expressions. The right hand side is a polynomial in $c$ because $\binom{c}{k} = \frac{c(c-1)(c-2)\cdots(c-k+1)}{k!}$.) Thus, the polynomial identity trick (which we learnt in Subsection 3.2.3) tells us that if we can prove this equality (80) for each $c \in \mathbb{N}$, then it will automatically hold for each $c \in K$ (since the two polynomials that yield its left and right hand sides will have to be equal, having infinitely many equal values). Hence, in order to prove (80) for each $c \in K$, it suffices to prove it for each $c \in \mathbb{N}$. Now, how can we prove it for each $c \in \mathbb{N}$? We forget that we fixed $k$, and we remember that the equality (80) (for all $k \in \mathbb{N}$) is just an equivalent restatement of the FPS equality $(1+x)^c = \sum_{k \in \mathbb{N}} \binom{c}{k} x^k$ (that is, the equality we have originally set out to prove). However, we know for sure that this equality holds for each $c \in \mathbb{N}$ (by Theorem 3.3.10, applied to $n = c$).

Hence, the equality (80) also holds for each $c \in \mathbb{N}$ (and each $k \in \mathbb{N}$). And this is precisely what we needed to show!

Let me explain this argument in detail now, as it is somewhat vertigo-inducing.

We forget that we fixed $K$ and $c$. Now, fix $c \in \mathbb{N}$. Thus, $c \in \mathbb{N} \subseteq \mathbb{Z}$. Hence, in $\mathbb{Q}[[x]]$, the power $(1+x)^c$ (in the sense of Definition 3.8.1) agrees with the usual $c$-th power of $1+x$, so that Theorem 3.3.10 (applied to $K = \mathbb{Q}$ and $n = c$) yields
$$(1+x)^c = \sum_{k \in \mathbb{N}} \binom{c}{k} x^k.$$
However, (79) (applied to $K = \mathbb{Q}$) shows that
$$(1+x)^c = \sum_{k \in \mathbb{N}} \left( \sum_{(n_1, n_2, \ldots, n_m) \in \operatorname{Comp}(k)} \frac{1}{m!} \cdot \frac{(-1)^{n_1+n_2+\cdots+n_m-m}}{n_1 n_2 \cdots n_m} c^m \right) x^k.$$
Comparing these two equalities, we obtain
$$\sum_{k \in \mathbb{N}} \left( \sum_{(n_1, n_2, \ldots, n_m) \in \operatorname{Comp}(k)} \frac{1}{m!} \cdot \frac{(-1)^{n_1+n_2+\cdots+n_m-m}}{n_1 n_2 \cdots n_m} c^m \right) x^k = \sum_{k \in \mathbb{N}} \binom{c}{k} x^k.$$
Comparing coefficients in this equality, we see that
$$\sum_{(n_1, n_2, \ldots, n_m) \in \operatorname{Comp}(k)} \frac{1}{m!} \cdot \frac{(-1)^{n_1+n_2+\cdots+n_m-m}}{n_1 n_2 \cdots n_m} c^m = \binom{c}{k} \tag{81}$$
for each $k \in \mathbb{N}$. This is an equality between two rational numbers.

Now, forget that we fixed $c$. We thus have shown that (81) holds for each $k \in \mathbb{N}$ and each $c \in \mathbb{N}$.

Let us now fix $k \in \mathbb{N}$. We have just shown that the equality (81) holds for each $c \in \mathbb{N}$. In other words, the two polynomials
$$f := \sum_{(n_1, n_2, \ldots, n_m) \in \operatorname{Comp}(k)} \frac{1}{m!} \cdot \frac{(-1)^{n_1+n_2+\cdots+n_m-m}}{n_1 n_2 \cdots n_m} x^m \in \mathbb{Q}[x] \qquad \text{and} \qquad g := \binom{x}{k} = \frac{x(x-1)(x-2)\cdots(x-k+1)}{k!} \in \mathbb{Q}[x]$$
take equal values at each $c \in \mathbb{N}$.

Hence, the polynomial $f - g$ has infinitely many roots in $\mathbb{Q}$ (since there are infinitely many $c \in \mathbb{N}$). Since $f - g$ is a polynomial with rational coefficients, this is impossible unless $f - g = 0$. We thus must have $f - g = 0$, so that $f = g$.

In other words,
$$\sum_{(n_1, n_2, \ldots, n_m) \in \operatorname{Comp}(k)} \frac{1}{m!} \cdot \frac{(-1)^{n_1+n_2+\cdots+n_m-m}}{n_1 n_2 \cdots n_m} x^m = \binom{x}{k} \tag{82}$$
holds in the polynomial ring $\mathbb{Q}[x]$.

Now, forget that we fixed $k$. We thus have shown that the equality (82) holds for each $k \in \mathbb{N}$.

Now, fix a commutative $\mathbb{Q}$-algebra $K$ and an arbitrary element $c \in K$. For each $k \in \mathbb{N}$, we then have
$$\sum_{(n_1, n_2, \ldots, n_m) \in \operatorname{Comp}(k)} \frac{1}{m!} \cdot \frac{(-1)^{n_1+n_2+\cdots+n_m-m}}{n_1 n_2 \cdots n_m} c^m = \binom{c}{k} \tag{83}$$
(by substituting $c$ for $x$ on both sides of the equality (82)). Consequently, we can rewrite (79) as
$$(1+x)^c = \sum_{k \in \mathbb{N}} \binom{c}{k} x^k.$$
This proves Theorem 3.8.3.

The method we used in the above proof is worth recapitulating in broad strokes:

• We had to prove a fairly abstract statement (namely, the identity $(1+x)^c = \sum_{k \in \mathbb{N}} \binom{c}{k} x^k$).

• We translated this statement into an awkward but more concrete statement (namely, the equality (80)).

• We then argued that this concrete statement needs only to be proven in a special case (viz., for all $c \in \mathbb{N}$ rather than for all $c \in K$), because it is an equality between two polynomials with rational coefficients.

• To prove this concrete statement in this special case, we translated it back into the abstract language of FPSs, and realized that in this special case it is already known (as a consequence of Theorem 3.3.10).

Thus, by strategically switching between the abstract and the concrete, we have managed to use the advantages of both sides.

Now that Theorem 3.8.3 is proved, Example 2 from Section 3.1 is fully justified (since we can obtain (12) by applying Theorem 3.8.3 to $K = \mathbb{Q}$ and $c = 1/2$).

3.8.3. Another application

Let us show yet another application of powers with non-integer exponents and the generalized Newton formula. We shall show the following binomial identity:

Proposition 3.8.4. Let $n \in \mathbb{C}$ and $k \in \mathbb{N}$. Then,
$$\sum_{i=0}^{k} \binom{n+i-1}{i} \binom{n}{k-2i} = \binom{n+k-1}{k}.$$

Proposition 3.8.4 can be proved in various ways. For example, a mostly combinatorial proof is found in [19fco, Exercise 2.10.7 and Exercise 2.10.8]²⁹. We shall give a proof using generating functions instead.

Proof of Proposition 3.8.4. Define two FPSs $f, g \in \mathbb{C}[[x]]$ by
$$f = \sum_{i \in \mathbb{N}} \binom{n+i-1}{i} x^{2i} \tag{84}$$
and
$$g = \sum_{j \in \mathbb{N}} \binom{n}{j} x^j. \tag{85}$$

(We will soon see why we chose to define them this way.) Multiplying the two

²⁹Specifically, [19fco, Exercise 2.10.7] proves Proposition 3.8.4 in the particular case when $n \in \mathbb{N}$; then, [19fco, Exercise 2.10.8] extends it to the case when $n \in \mathbb{R}$. However, the latter argument can just as well be used to extend it to arbitrary $n \in \mathbb{C}$.

equalities (84) and (85), we find
$$fg = \left( \sum_{i \in \mathbb{N}} \binom{n+i-1}{i} x^{2i} \right) \left( \sum_{j \in \mathbb{N}} \binom{n}{j} x^j \right) = \sum_{k \in \mathbb{N}} \left( \sum_{\substack{i \in \mathbb{N}; \\ 2i \leq k}} \binom{n+i-1}{i} \binom{n}{k-2i} \right) x^k.$$
Hence, the $x^k$-coefficient of this FPS $fg$ is
$$\left\langle x^k \right\rangle (fg) = \sum_{\substack{i \in \mathbb{N}; \\ 2i \leq k}} \binom{n+i-1}{i} \binom{n}{k-2i} = \sum_{i=0}^{k} \binom{n+i-1}{i} \binom{n}{k-2i} \tag{87}$$
(here, we have replaced the condition "$2i \leq k$" under the summation sign by the weaker condition "$i \leq k$", thus extending the range of the sum; but this did not change the sum, since all the newly introduced addends are 0 because of the vanishing binomial coefficient $\binom{n}{k-2i} = 0$ for $k - 2i < 0$).

Note that the right hand side here is precisely the left hand side of the identity we are trying to prove. This is why we defined f and g as we did. With a bit of experience, the computation above can easily be reverse-engineered, and the definitions of f and g are essentially forced by the goal of making (87) hold.

Anyway, it is now clear that a simple expression for $fg$ would move us forward. So let us try to simplify $f$ and $g$. For $g$, the answer is easiest: We have
$$g = \sum_{j \in \mathbb{N}} \binom{n}{j} x^j = (1+x)^n$$
(by Theorem 3.8.3, applied to $K = \mathbb{C}$ and $c = n$). For $f$, we need a few more steps. Proposition 3.3.12, combined with Theorem 3.8.3 (applied to $K = \mathbb{C}$ and $c = -n$), yields
$$(1+x)^{-n} = \sum_{i \in \mathbb{N}} \binom{-n}{i} x^i = \sum_{i \in \mathbb{N}} (-1)^i \binom{n+i-1}{i} x^i. \tag{88}$$
Substituting $-x^2$ for $x$ on both sides of this equality, we obtain
$$\left(1-x^2\right)^{-n} = \sum_{i \in \mathbb{N}} (-1)^i \binom{n+i-1}{i} \left(-x^2\right)^i = \sum_{i \in \mathbb{N}} \binom{n+i-1}{i} x^{2i} = f.$$
Hence, the rules of exponents (Theorem 3.8.2) yield
$$fg = \left(1-x^2\right)^{-n} (1+x)^n = \left((1-x)(1+x)\right)^{-n} (1+x)^n = (1-x)^{-n} \underbrace{(1+x)^{-n} (1+x)^n}_{=(1+x)^0 = 1} = (1-x)^{-n} = \sum_{k \in \mathbb{N}} \binom{n+k-1}{k} x^k$$
(the last equality follows by substituting $-x$ for $x$ on both sides of (88)). Hence,
$$\left\langle x^k \right\rangle (fg) = \binom{n+k-1}{k} \qquad \text{for each } k \in \mathbb{N}.$$
Comparing this with (87), we obtain
$$\sum_{i=0}^{k} \binom{n+i-1}{i} \binom{n}{k-2i} = \binom{n+k-1}{k} \qquad \text{for each } k \in \mathbb{N}.$$
Thus, Proposition 3.8.4 is proved.
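For skeptics, here is a quick floating-point check of Proposition 3.8.4 (plain Python; the particular complex value of n, the tolerance, and the helper binom are arbitrary choices made for this illustration):

```python
from math import factorial

def binom(n, k):
    """Generalized binomial coefficient n(n-1)...(n-k+1)/k!  (k an integer; 0 if k < 0)."""
    if k < 0:
        return 0
    result = 1
    for j in range(k):
        result *= n - j
    return result / factorial(k)

n = 2.5 - 1.3j   # an arbitrary complex test value
for k in range(8):
    lhs = sum(binom(n + i - 1, i) * binom(n, k - 2 * i) for i in range(k + 1))
    rhs = binom(n + k - 1, k)
    assert abs(lhs - rhs) < 1e-9
print("Proposition 3.8.4 checked numerically for k = 0, ..., 7")
```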
