
Introduction to the paper of Gordan-Noether in Math. Annalen, Bd. 10, 1876

Ueber die algebraischen Formen, deren Hesse’sche

Determinante identisch verschwindet

“Homogeneous polynomials whose Hessian determinant

identically vanishes”

by Junzo Watanabe

Department of Mathematics, Tokai University

September 11, 2012, Workshop in Hawaii

1

1 Why are we interested in homogeneous polynomials with zero Hessian?

(f such that ∣ ∂²f/∂xk∂xl ∣ = 0)

R = K[x1, x2, · · · , xn], a polynomial ring.

A = R/I = ⊕_{i=0}^d A_i, a graded Artinian Gorenstein algebra.

∃F ∈ R with deg F = d such that

I = AnnR(F) := {p(x1, · · · , xn) ∈ R | p(∂1, · · · , ∂n)F = 0}.

2
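To make the definition of AnnR(F) concrete, here is a small pure-Python sketch of the action p(∂1, · · · , ∂n)F (the dict-based polynomial encoding and the example F = x1^3 + x2^3 are illustrative choices of mine, not from the talk):

```python
# Polynomials encoded as {exponent_tuple: coefficient}.
def diff(poly, var):
    # partial derivative with respect to variable index `var`
    out = {}
    for exps, c in poly.items():
        if exps[var] > 0:
            e = list(exps)
            e[var] -= 1
            key = tuple(e)
            out[key] = out.get(key, 0) + c * exps[var]
    return {k: v for k, v in out.items() if v}

def apply_op(p, F):
    # p(∂1, ..., ∂n) applied to F: each monomial of p acts as a mixed partial
    out = {}
    for exps, c in p.items():
        g = F
        for var, k in enumerate(exps):
            for _ in range(k):
                g = diff(g, var)
        for m, v in g.items():
            out[m] = out.get(m, 0) + c * v
    return {k: v for k, v in out.items() if v}

# F = x1^3 + x2^3; then x1*x2 and x1^3 - x2^3 lie in Ann_R(F), but x1^2 does not.
F = {(3, 0): 1, (0, 3): 1}
print(apply_op({(1, 1): 1}, F))               # {}  (annihilates F)
print(apply_op({(3, 0): 1, (0, 3): -1}, F))   # {}  (annihilates F)
print(apply_op({(2, 0): 1}, F))               # {(1, 0): 6}  (does not)
```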

Remark

I contains no linear forms. ⇔ F is a form in n variables properly. (No variable can be eliminated by a linear change of variables.)

Definition

A has the strong Lefschetz property if ∃L = ξ1x1 + · · · + ξnxn (ξi ∈ K) such that

×L^{d−2i} : A_i → A_{d−i}

is bijective for i = 0, 1, 2, · · · .

3

Proposition

Assume that F is a form in n variables properly. FCAE (the following conditions are equivalent):

1. ×L^{d−2} : A_1 → A_{d−1} is not a bijection for any linear form L.

2. The Hessian determinant of F is identically zero.

This is why we are interested in homogeneous

polynomials with zero Hessian.

4

2 A history of Hessian

In 1851, 1856

Otto Hesse (1811 - 1874) wrote two papers in Crelle’s

Journal and “proved” that if the Hessian determinant

identically vanishes, then a variable can be eliminated

by means of a linear transformation of the variables.

Hesse’s claim is not true in general. In fact his “proof”

was weird and the validity of the proof was doubted

from the beginning. On the other hand it must have

been easy to see that Hesse’s claim is true for binary

forms as well as quadrics.

− − − − − − − − − − − − − − − − − − −−

5

1875

Moritz Pasch proved that Hesse’s claim is true for

ternary and quaternary cubics.

− − − − − − − − − − − − − − − − − − −−

6

1876

P. Gordan- M. Noether, Math. Annalen bd. 10

reached a correct statement.

Gordan-Noether’s results (among other things):

1. If the Hessian determinant identically vanishes, a

variable can be eliminated by means of a birational

transformation. Moreover they proved:

2.Hesse’s claim is true if the number of variables is at

most four. And    

3. In C[x1, · · · , x5], they determined all homogeneous

forms with zero Hessian.

7

It seems that all these results and the method used to prove them had been forgotten completely. This is because, in my opinion, they had no applications and did not relate to other things.

Now we need to understand their paper and to rewrite their proofs from the viewpoint of contemporary algebra.

− − − − − − − − − − − − − − − − − − −−

1985

Professor Hiroshi Yamada (1932-2009) wrote a paper in

which he tried to give proofs for these facts. He did not

quite succeed, and the paper was not published. I call

8

the unfinished paper “Yamada’s Notes.”

− − − − − − − − − − − − − − − − − − −−

1995

J. Watanabe gave a 15-minute talk at Northeastern

University: AMS Regional meeting

Let A = ⊕di=0Ai be a Gorenstein ring.

×L^{d−2} : A_1 → A_{d−1} is not a bijection for any L ⇔ the form corresponding to A (in Macaulay’s inverse system) has zero Hessian. And I said that Gordan-Noether’s result could be used.

D. Eisenbud commented: “Are you sure of their

results?”

9

− − − − − − − − − − − − − − − − − − −−

2000

A. Geramita suggested to me that I should write about

the result in his preprint series. So I wrote a paper in

Queen’s Papers in Pure and Appl. Math., Vol. 119,

pp. 171-178. What I wrote is:

×L^{d−2} : A_1 → A_{d−1} is not bijective for any L ⇔ F has zero Hessian.

− − − − − − − − − − − − − − − − − − −−

10

2003

T. Harima, J. Migliore, U. Nagel, and J. Watanabe

wrote a paper in J. Algebra that quoted Gordan-Noether’s results. We had to say “we have not confirmed them.”

− − − − − − − − − − − − − − − − − − −−

I learned of the existence of the following paper only two weeks ago.

2004

Christoph Lossen:

“When does the Hessian determinant vanish

identically?”

Bull Braz Math Soc, New Series 35(1), 71-82.

11

− − − − − − − − − − − − − − − − − − −−

I learned of the existence of the following paper only a week ago.

2008

Alice Garbagnati and Flavia Repetto:

“A geometric approach to Gordan-Noether’s and Franchetta’s contributions to a question posed by Hesse”

arXiv:0802.0905v1 [math.AG].

− − − − − − − − − − − − − − − − − − −−

2009

12

T. Maeno and J. Watanabe (Illinois J. Math.) defined

“higher Hessians,” and proved that:

A zero-dimensional graded Gorenstein algebra A has

the strong Lefschetz property ⇔ All higher Hessians

of F do not vanish identically, where F is the algebraic

form corresponding to A.

{Lefschetz elements} = ∩_{j=0}^{[d/2]} {j-th Hessian ≠ 0}.

− − − − − − − − − − − − − − − − − − −−

2012

We are writing a preprint “On the theory of Gordan-Noether,” where we give proofs for most of

their results. Probably there are new things that are

not contained in the above cited papers. We could not

have written it without Yamada’s Notes.

− − − − − − − − − − − − − − − − − − −−

14

In Part I of this lecture I want to give an outline of

proof for the first statement of Gordan-Noether’s

results.

Theorem

If the Hessian determinant is identically zero, then a variable can be eliminated by means of a birational transformation.

To understand the paper of Gordan-Noether, it is helpful to know some examples:

15

Example 1

f = ∏_{1≤i<j≤n} (xi − xj)

has zero Hessian, but R/I has the strong Lefschetz property. This is not a contradiction, because a variable can be eliminated by means of a linear transformation:

y1 = x1 − x2, y2 = x2 − x3, · · · , yn−1 = xn−1 − xn, yn = x1 + · · · + xn.

The partials satisfy the relation

f1 + · · · + fn = 0.

AnnR(f) is generated by the elementary symmetric functions.
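The relation f1 + · · · + fn = 0 can be sanity-checked numerically for n = 3 (a sketch with finite-difference partials; the test point is an arbitrary choice of mine):

```python
def f(x):
    # f = (x1 - x2)(x1 - x3)(x2 - x3), the n = 3 case
    return (x[0] - x[1]) * (x[0] - x[2]) * (x[1] - x[2])

def partial(func, x, i, h=1e-6):
    # central finite-difference approximation of the i-th partial derivative
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (func(xp) - func(xm)) / (2 * h)

x = [0.3, 1.7, -2.2]
s = sum(partial(f, x, i) for i in range(3))
print(abs(s) < 1e-6)  # True: f depends only on differences, so f1 + f2 + f3 = 0
```

The identity holds because f is invariant under translating all variables by the same amount, so its directional derivative along (1, . . . , 1) vanishes.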

16

Example 2

f = u2x + uvy + v2z

is the simplest example of a homogeneous form with zero Hessian which does not reduce to a form in fewer variables. The Gorenstein algebra that corresponds to it is the trivial extension of

K[u, v]/(u, v)^3

by the canonical module. It does not have the SLP.
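One can verify the vanishing of the Hessian of f = u^2x + uvy + v^2z directly: writing the second partials by hand, rows 3-5 of the Hessian matrix lie in a 2-dimensional space, so the 5×5 determinant is identically zero. A pure-Python sketch (the matrix entries below are hand-computed and can be checked by differentiating):

```python
import random

def hessian(u, v, x, y, z):
    # second partials of f = u^2*x + u*v*y + v^2*z, variables ordered (u, v, x, y, z)
    return [
        [2 * x, y, 2 * u, v, 0],
        [y, 2 * z, 0, u, 2 * v],
        [2 * u, 0, 0, 0, 0],
        [v, u, 0, 0, 0],
        [0, 2 * v, 0, 0, 0],
    ]

def det(M):
    # determinant by cofactor expansion along the first row (fine for a 5x5 example)
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

for _ in range(5):
    pt = [random.randint(-9, 9) for _ in range(5)]
    assert det(hessian(*pt)) == 0  # the Hessian determinant vanishes identically
```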

Replace u ↦ u, v ↦ v, x ↦ x − (v^2/u^2)z, y ↦ y + (2v/u)z, z ↦ 0; this rests on the identity

f = u^2 (x − (v^2/u^2)z) + uv (y + (2v/u)z).

17

In other words, z has disappeared and f has been re-

duced to

u2x′ + uvy′.

It seems that Hesse did not know this example. It is clear that Gordan and Noether knew this was the simplest counterexample to Hesse’s claim.

18

Geometric meaning of u2x + uvy + v2z

H. Nasu showed me: V (u2x + uvy + v2z) ⊂ P4 is the

generic projection of the image of the Segre embedding

P1 × P2 → P5.

19

Example 3

Let m1, · · · ,m10 be the 10 monomials of degree 3 in

K[u, v, w]. Let xi be 10 new variables and let

f = m1x1 + m2x2 + · · · + m10x10.

Then the Gorenstein algebra that corresponds to it is

the trivial extension of

K[u, v, w]/(u, v, w)^4

by the canonical module. Clearly it does not have the SLP, because the Hilbert function is

1 13 12 13 1.

This is also an example of a non-unimodal Gorenstein Hilbert series.

20
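The Hilbert function can be recomputed: dim B_i = C(i+2, 2) for B = K[u, v, w]/(u, v, w)^4, and the trivial extension A = B ⋉ B^∨ with socle degree 4 has dim A_i = dim B_i + dim B_{4−i} (a sketch):

```python
from math import comb

def dim_b(i):
    # Hilbert function of B = K[u, v, w]/(u, v, w)^4: C(i + 2, 2) in degrees 0..3
    return comb(i + 2, 2) if 0 <= i <= 3 else 0

# trivial extension by the canonical module: dim A_i = dim B_i + dim B_{4 - i}
hf = [dim_b(i) + dim_b(4 - i) for i in range(5)]
print(hf)  # [1, 13, 12, 13, 1] -- not unimodal
```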

(As I said, )

u2x + uvy + v2z

is the simplest counterexample to Hesse’s claim.

u2x + uvy + v2z + w3

is another such example in SIX variables.

(u2x + uvy + v2z)w

is another such example in SIX variables.

21

In the first part I am going to give a proof only for (A),

but we should be thinking of (B) and (C) also.

THEOREM (Gordan-Noether, 1876)

(A) If the Hessian determinant identically vanishes,

then a variable can be eliminated by means of a

birational transformation of the variables.

(B) Hesse’s claim is true if n ≤ 4.

(C) For n = 5, all forms with zero Hessian can be determined.

22

Gordan-Noether discovered that a form with zero Hessian satisfies a linear partial differential equation which has striking properties.

It helps to know:

There exists one particular partial diff equation which yields more partial diff equations. So in fact a form with zero Hessian satisfies a system of partial diff equations. For (A) and (B) the first one is enough. For (C) more diff equations are necessary.

There are at least TWO topics:

1. How does the diff equation arise from a form with zero Hessian?

23

2. What are the solutions of the system of partial diff equations? In other words, how and what are the polynomials with zero Hessian?

Let me start with a partial differential equation without

saying anything about Hessians.

24

3 Partial differential equations

In the polynomial ring R = K[x1, x2, . . . , xn], we

consider the partial differential equation:

h1(x) ∂f/∂x1 + h2(x) ∂f/∂x2 + · · · + hn(x) ∂f/∂xn = 0,

where hi = hi(x) ∈ R are polynomials.

Put h = (h1, · · · , hn). The set of solutions in R is denoted by Sol(h;R). It is a subring of R.

We will always assume that hi are homogeneous of the

same degree. Even in this case Sol(h;R) may not be

finitely generated. So we do not treat it generally, but

we consider certain special cases.

25

Consider the case where the hi(x) are constants: hi(x) = ai ∈ K, a = (a1, · · · , an) ≠ 0. Then the differential equation

a1 ∂f/∂x1 + a2 ∂f/∂x2 + · · · + an ∂f/∂xn = 0   (1)

is essentially

∂f/∂xn = 0.

Then in this case Sol(a;R) = K[x1, · · · , xn−1].

We want to describe it without making a linear change

of variables.

The above observation shows that the set of solutions

of (1) is a subring of R generated by (n−1) linear forms.

26

In fact the algebra Sol(a;R) can be described as follows:

Sol(a;R) = K[∆ij | 1 ≤ i < j ≤ n],

where

∆ij = ∣ ai aj ∣
      ∣ xi xj ∣ = ai xj − aj xi.

If an ≠ 0, then

∆1n = a1xn − anx1
∆2n = a2xn − anx2
...
∆n−1,n = an−1xn − anxn−1

is a basis of Sol(a;R).

27

Any ∆ij is a linear combination of ∆in and ∆jn:

∆ij = (1/an)(aj ∆in − ai ∆jn), 1 ≤ i < j ≤ n − 1.

In short, we may regard An as a union of lines and

Sol(a;R) is the set of functions which take the same

values on the lines.
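Invariance along the lines x + ta is easy to check numerically: each ∆ij = ai xj − aj xi satisfies ∆ij(x + ta) = ∆ij(x) exactly, hence so does any polynomial in the ∆ij (a sketch; the vector a and the polynomial f below are arbitrary choices of mine):

```python
import random

a = (2.0, -1.0, 3.0)  # an arbitrary nonzero constant vector (n = 3)

def delta(i, j, x):
    # ∆ij = a_i x_j - a_j x_i
    return a[i] * x[j] - a[j] * x[i]

def f(x):
    # any polynomial in the ∆ij lies in Sol(a;R)
    return delta(0, 1, x) ** 2 + delta(0, 2, x) * delta(1, 2, x)

x = [random.uniform(-1, 1) for _ in range(3)]
t = 0.37
xt = [xi + t * ai for xi, ai in zip(x, a)]
assert abs(f(xt) - f(x)) < 1e-9  # f is constant along the line x + t*a
```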

28

Theorem

Suppose that f(x) ∈ Sol(a;R). Then

1. Sol(a;R) = K[{∆ij | 1 ≤ i < j ≤ n}]. (In any case this is a polynomial ring over K in n − 1 variables.)

2. If we assume that an ≠ 0, then Sol(a;R) is generated by ∆1n, ∆2n, . . . , ∆n−1,n as an algebra over K.

3. f(x) = f(x + ta), ∀t ∈ K′, where K′ is any extension field.

4. If f is not a constant, f(a) = 0. (Moreover, if deg f > 1, then (∂f/∂xj)(a) = 0 ∀j.)

29

We consider a system of linear differential equations:

a(1)1 ∂f/∂x1 + a(1)2 ∂f/∂x2 + · · · + a(1)n ∂f/∂xn = 0
a(2)1 ∂f/∂x1 + a(2)2 ∂f/∂x2 + · · · + a(2)n ∂f/∂xn = 0   (2)

We may assume that the matrix (a(i)j), i = 1, 2; j = 1, · · · , n, has rank two. Thus the system is essentially

∂f/∂xn−1 = ∂f/∂xn = 0.

Then the set of solutions in R is:

Sol(a(1), a(2);R) = K[x1, x2, · · · , xn−2].

If we want to describe the subring without making a

30

linear change of variables, then we can say

Sol(a(1), a(2);R) = K[∆ijk | 1 ≤ i < j < k ≤ n],

where

∆ijk = ∣ a(1)i a(1)j a(1)k ∣
       ∣ a(2)i a(2)j a(2)k ∣
       ∣  xi    xj    xk   ∣.

If ∣ a(1)1 a(1)2 ∣
   ∣ a(2)1 a(2)2 ∣ ≠ 0, for example, we can choose

{∆12λ | λ = 3, 4, · · · , n}

as a minimal set of generators for the subalgebra.

Now we will treat an arbitrary number of equations.

31

Consider the system of linear equations:

a(1)1 ∂f/∂x1 + a(1)2 ∂f/∂x2 + · · · + a(1)n ∂f/∂xn = 0
a(2)1 ∂f/∂x1 + a(2)2 ∂f/∂x2 + · · · + a(2)n ∂f/∂xn = 0
...
a(r)1 ∂f/∂x1 + a(r)2 ∂f/∂x2 + · · · + a(r)n ∂f/∂xn = 0   (3)

(The rank of the coefficient matrix is r.)

The set of solutions is:

Sol(a(1), · · · , a(r);R) = K[∆j1j2···jr+1 | 1 ≤ j1 < j2 < · · · < jr+1 ≤ n],

32

where

∆j1j2···jr+1 = ∣ a(1)j1 a(1)j2 · · · a(1)jr+1 ∣
               ∣ a(2)j1 a(2)j2 · · · a(2)jr+1 ∣
               ∣  ...    ...           ...   ∣
               ∣ a(r)j1 a(r)j2 · · · a(r)jr+1 ∣
               ∣  xj1    xj2   · · ·  xjr+1   ∣.

In any case we can find n − r linearly independent elements such that they generate the subring Sol(a(1), · · · , a(r);R).

33

In the above argument all coefficients are constants. We would have the same result if we treat the coefficients as indeterminates INDEPENDENT of x1, · · · , xn:

u(1)1 ∂f/∂x1 + u(1)2 ∂f/∂x2 + · · · + u(1)n ∂f/∂xn = 0
u(2)1 ∂f/∂x1 + u(2)2 ∂f/∂x2 + · · · + u(2)n ∂f/∂xn = 0
...
u(r)1 ∂f/∂x1 + u(r)2 ∂f/∂x2 + · · · + u(r)n ∂f/∂xn = 0   (4)

Then we can describe the set of solutions in the ring

R = K[{u(i)j}][x].

To give a strict proof we need the theory of determinantal ideals.

34

We want to introduce notation.

If L ⊂ K^n is such that L = ⟨a(1), · · · , a(r)⟩, we write Sol(L;R) for Sol(a(1), · · · , a(r);R).

If a = (a1 : · · · : an) ∈ P^{n−1}, then the value

a1 ∂f/∂x1 + · · · + an ∂f/∂xn

is determined up to a scalar multiple. So it makes sense to speak of solutions of the differential equation. If L ⊂ P^{n−1} is a linear subspace, we use the same notation

Sol(L;R) = Sol(a(1), · · · , a(r);R).

35

This is the end of this section. We will define a self-vanishing system.

Now we consider

h1 ∂F/∂x1 + · · · + hn ∂F/∂xn = 0,

where hj = hj(x).

36

4 Self-vanishing system and some properties

Caution: In this context “system” is a “vector.”

Definition

Suppose that h = (h1, · · · , hn) is a polynomial vector (hj = hj(x) ∈ K[x1, . . . , xn]). Then h is a “self-vanishing system (SVS)” if hj ∈ Sol(h;R) ∀j. In other words, h is an SVS if each hj is a solution of the differential equation:

h1 ∂F/∂x1 + · · · + hn ∂F/∂xn = 0.

37

Example 1

A constant vector h = (a1, a2, . . . , an) ∈ Kn is obvi-

ously a self-vanishing system.

Example 2

Let hj ∈ K[x] be homogeneous polynomials (of the

same degree). Suppose that h = (h1, . . . , hn) satisfy

the following conditions.

1. h1 = · · · = hr = 0, for some integer r; 1 ≤ r < n.

2. The polynomials hr+1, . . . , hn are functions only in

x1, . . . , xr.

Then h is a self-vanishing system of forms.

38

This should be called “Gordan-Noether type.”

See how it is if r = 1, 2 or n − 1, n − 2.

As you will see later, an SVS arises from a form with

zero Hessian.

I wish I knew whether there are other types of SVS among those that come from forms with zero Hessian. If the SVS’s which come from Hessians were all of Gordan-Noether type, then one could determine all forms with zero Hessian.

39

A self-vanishing system behaves like a constant vector.

Theorem

Suppose that h = (h1(x), · · · , hn(x)) is an SVS.

FCAE.

1. f(x) = f(x + th), ∀t ∈ K′, where K′ is any extension field.

2. f(x) ∈ Sol(h;R).

In particular ∆ij = ∣ hi hj ∣
                    ∣ xi xj ∣ ∈ Sol(h;R).

Proof. Let y = (y1, · · · , yn) be a set of variables. Define

40

the operator Dyx by

Dyx = y1 ∂/∂x1 + · · · + yn ∂/∂xn.

Then we have

f(x + yt) = f(x) + (1/1!) Dyx f(x) t + · · · + (1/d!) Dyx^d f(x) t^d.

Define the operator Dhx by

Dhx = h1 ∂/∂x1 + · · · + hn ∂/∂xn.

If h is self-vanishing, then we have

f(x + ht) = f(x) + (1/1!) Dhx f(x) t + · · · + (1/d!) Dhx^d f(x) t^d.

QED.

41

Corollary

Suppose that h = (h1(x), · · · , hn(x)) is a self-

vanishing system. Let f(x) ∈ Sol(h;R)(= the set of

solutions). Then

1. f(h1, · · · , hn) = 0.

2. In particular hj(h1, · · · , hn) = 0 ∀j.

3. f(x) = f(s1, s2, · · · , sn−1, 0), where

si(x) = xi − (hi(x)/hn(x)) xn, (1 ≤ i ≤ n).

(Assumed that hn ≠ 0.)

4. f(x)g(x) ∈ Sol(h;R) ⇒ f(x), g(x) ∈ Sol(h;R).

42

Proof. 1. Look at the d-fold polarization of f.

3. In the expression f(x) = f(x + th) substitute

t = −xn/hn.

Then, since the ith component of (x + th) is

xi − (hi/hn) xn,

we get (x + th) = (s1, s2, · · · , sn−1, 0).

4. This follows from f(x) = f(x+ th) ⇔ f ∈ Sol(h;R).

QED.

43

5 Forms with zero Hessian

I want to show that if f is a form with zero Hessian,

then it satisfies a partial differential equation whose co-

efficients are an SVS.

44

Let f ∈ K[x1, . . . , xn] be a homogeneous polynomial. Let fj = ∂f/∂xj. Assume that the Hessian

∣ ∂²f/∂xk∂xl ∣ = 0.

Then there exists a vector (h1, · · · , hn) such that

(h1, · · · , hn) (∂²f/∂xk∂xl) = 0.   (5)

(We will assume that GCD(h1, · · · , hn) = 1.)

Multiplying by the column vector (x1, . . . , xn)^T,

(h1, · · · , hn) (∂²f/∂xk∂xl) (x1, . . . , xn)^T = 0.

By Euler’s relation, this shows

(h1, · · · , hn) (f1, . . . , fn)^T = 0.

45

Hence

f(x) ∈ Sol(h;R).

We already knew that, for each k,

(h1, · · · , hn) (f1k, . . . , fnk)^T = 0.

(Just look at the kth column of (5).)

Thus

fk(x) ∈ Sol(h;R).

We have to prove that hj(x) ∈ Sol(h;R) ∀j.

46

This can be done by showing that there exists a polynomial c(x) such that c(x)hj(x) is a polynomial in f1(x), · · · , fn(x).

In fact suppose that c(x)hj(x) is a polynomial in f1, · · · , fn. Then

c(x)hj(x) ∈ K[f1, · · · , fn] ⊂ Sol(h;R).

Hence hj(x) ∈ Sol(h;R).

It remains to prove that c(x)hj(x) is a polynomial in f1, · · · , fn for some c(x).

47

−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−

Another way to obtain the vector (h1, · · · , hn).

−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−

Note that the Hessian ∣ ∂²f/∂xk∂xl ∣ is the Jacobian of the partials f1, · · · , fn of f.

So there is an algebraic relation among the partials: there exists g(y1, · · · , yn) ∈ K[y1, · · · , yn] such that g(f1, · · · , fn) = 0. We may take g homogeneous. So we have

y1 ∂g/∂y1 + · · · + yn ∂g/∂yn = (deg g) g.

(Choose g so that the degree is the smallest.)

48

In this formula substitute fj for yj. Then we obtain

f1 h′1(x) + · · · + fn h′n(x) = 0,

where

h′j(x) = (∂g/∂yj)(f1, · · · , fn).

Moreover it is not difficult to see

fk1 h′1(x) + · · · + fkn h′n(x) = 0 ∀k.

In other words,

(h′1, · · · , h′n) (∂²f/∂xk∂xl) = 0.

49

Recall that

(h1, · · · , hn) (∂²f/∂xk∂xl) = 0.

If we assume that (∂2f/∂xk∂xl) has corank 1, we are

done!! Namely, h′j(x) = c(x)hj(x) is a function of f1, · · · , fn.

(“Corank 1” is not essential.)

50

To summarize the observation of Gordan-Noether:

If f(x) is a form with zero Hessian, then there exists a self-vanishing system

h = (h1, · · · , hn)

such that f(x) ∈ Sol(h;R) and moreover f(x) ∈ Sol(∂h/∂xj;R), j = 1, 2, · · · , n. (Shortly we prove the second part.)

We state it as a theorem:

Theorem

Let f(x) be a form with zero Hessian. Then there exists an SVS h = (h1, · · · , hn) such that f ∈ Sol(h;R) and f ∈ Sol(∂h/∂xj;R), j = 1, 2, · · · , n.

51

Proof. We know the first assertion: f(x) ∈ Sol(h;R).

(Sorry, I have to use an odd notation f = (f1, · · · , fn).)

This says that the dot product h · f is zero, i.e.,

h1f1 + · · · + hnfn = 0.

Apply the differential operator ∂/∂xk to

h · f = 0.

Then we get

(∂h/∂xk) · f + h · (∂f/∂xk) = 0.

We know that h · (∂f/∂xk) = 0. So

(∂h/∂xk) · f = 0.

52

Now we can prove Gordan-Noether’s result.

Theorem of Gordan-Noether

Suppose that f(x) is a form with zero Hessian. Then

a variable can be eliminated in f(x) by a birational

transformation.

Proof. Let h = (h1, · · · , hn) be a self-vanishing system such that f(x) ∈ Sol(h;R). Then

f(x) = f(s1, s2, · · · , sn−1, 0),

where si = xi − (hi/hn) xn, i = 1, 2, · · · , n − 1. This shows that f is a polynomial in n − 1 rational functions. We have to show that

K(s1, · · · , sn−1, xn) = K(x1, · · · , xn).

(s1, s2, . . . , sn−1, xn)^T =

∣ 1 0 · · · 0 −h1/hn   ∣
∣ 0 1 · · · 0 −h2/hn   ∣
∣         . . .        ∣
∣ 0 0 · · · 1 −hn−1/hn ∣
∣ 0 0 · · · 0    1     ∣

(x1, x2, . . . , xn−1, xn)^T.

(x1, x2, . . . , xn−1, xn)^T =

∣ 1 0 · · · 0 h1/hn   ∣
∣ 0 1 · · · 0 h2/hn   ∣
∣        . . .        ∣
∣ 0 0 · · · 1 hn−1/hn ∣
∣ 0 0 · · · 0    1    ∣

(s1, s2, . . . , sn−1, xn)^T.

54

We already knew that

hi(x1, · · · , xn−1, xn) = hi(s1, · · · , sn−1, 0), i = 1, 2, · · · .

QED.
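For the five-variable example f = u^3x + u^2vy + v^3z with h = (0, 0, −2v^3, 3uv^2, −u^3), the elimination f(x) = f(s1, · · · , s4, 0) can be checked by exact rational arithmetic (a sketch; the restriction to u > 0 just keeps h5 = −u^3 nonzero):

```python
import random
from fractions import Fraction as Fr

def f(u, v, x, y, z):
    return u ** 3 * x + u ** 2 * v * y + v ** 3 * z

def h(u, v, x, y, z):
    # the self-vanishing system associated to f
    return (0, 0, -2 * v ** 3, 3 * u * v ** 2, -u ** 3)

for _ in range(5):
    p = [Fr(random.randint(1, 7)) for _ in range(5)]  # u > 0, so h5 != 0
    hv = h(*p)
    s = [p[i] - hv[i] * p[4] / hv[4] for i in range(4)]  # s_i = x_i - (h_i/h_5) x_5
    assert f(*s, Fr(0)) == f(*p)  # the variable x_5 = z has been eliminated
```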

Remark

Suppose that f(x) is a form with zero Hessian. Then f(x) does not degenerate if we set xn = 0, provided that hn(x) ≠ 0.

This is the end of Part I.

55

This is the start of Part II.

If we write f = (f1, · · · , fn), it is a polynomial vector. If we write f = f(x), it is a polynomial.

When we write f = (f1, · · · , fn), most of the time the fi are partials of a polynomial f(x). At times the fi are just homogeneous forms of the same degree. These should be distinguished from context.

The difference is that a linear transformation of the variables induces a linear transformation of the partials in the first case, but not in the other case.

56

Proposition

• Let fj = ∂f(x)/∂xj. FCAE:

1. f1, · · · , fn are linearly dependent.

2. A variable can be eliminated from f(x) by means of a linear change of the variables.

• FCAE:

1. f1, · · · , fn are algebraically dependent.

2. The Jacobian ∂(f1, . . . , fn)/∂(x1, . . . , xn) vanishes identically.

(In this statement, the fj do not have to be the partials of a polynomial.)

57

Proposition

Let R = K[x1, . . . , xn]. Let (f1, f2, . . . , fn) be a vector of forms in R. Then

rank_{K(x)} (∂fi/∂xj) = tr.deg_K K(f1, f2, . . . , fn).

In particular the following conditions are equivalent.

1. f1, . . . , fr are algebraically dependent.

2. The rank of the Jacobian matrix (∂fi/∂xj) is < r.

3. tr.deg_K K(f1, f2, . . . , fr) < r.

58

I think Gordan and Noether took this theorem for granted. I am not certain what definition they had in mind when they spoke of the “dimension” of a variety.

− − − − − − − − − − − − − − − − − − − − − − −−

Even without referring to “Hessian,” there were many

new things to me in their paper.

− − − − − − − − − − − − − − − − − − − − − − −−

59

6 The observation of Gordan and Noether Revisited

Assume f1, . . . , fn are polynomials of the same degree. (They do not have to be the partials of an f(x).) Assume that these are algebraically dependent. Let

ϕ : K[y1, · · · , yn] → K[x1, . . . , xn]

be the homomorphism defined by yj ↦ fj. Let g(y) be a homogeneous polynomial in the kernel of ϕ of the smallest degree.

Let

h′j(x1, x2, . . . , xn) = (∂g/∂yj)(f1, . . . , fn),

hj(x1, x2, . . . , xn) = h′j(x1, . . . , xn) / GCD(h′1, h′2, . . . , h′n).

60

We call the vector (h′1, h′2, . . . , h′n) the system of polynomials associated to g(y), and (h1, . . . , hn) the reduced system of polynomials associated to g(y).

Theorem

1. (h1, h2, · · · , hn) is a syzygy of (f1, f2, · · · , fn).

2. (h1, h2, · · · , hn) is a syzygy of (∂f1/∂xj, ∂f2/∂xj, · · · , ∂fn/∂xj), for all j = 1, 2, · · · .

3. (∂h1/∂xj, ∂h2/∂xj, · · · , ∂hn/∂xj) is a syzygy of (f1, f2, · · · , fn), for all j = 1, 2, · · · .

Did you know this? This is easy to prove, but I did not know about this fact until I saw their paper.

61

62

Example

In K[u, v, w], let f = (f1, f2, f3) = (u^4, u^2vw, v^2w^2).

Let ϕ : K[y1, y2, y3] → K[u, v, w], yj ↦ fj. Then ker ϕ = (g), where g(y1, y2, y3) = y2^2 − y1y3.

1. (h1, h2, h3) = (−v^2w^2, 2u^2vw, −u^4) is a syzygy of (u^4, u^2vw, v^2w^2).

2. (h1, h2, h3) is a syzygy of

(4u^3, 2uvw, 0), and
(0, u^2w, 2vw^2), and
(0, u^2v, 2v^2w).
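These syzygy claims are simple polynomial identities and can be checked at random integer points (a sketch):

```python
import random

def fvec(u, v, w):
    # the vector (u^4, u^2*v*w, v^2*w^2)
    return (u ** 4, u * u * v * w, v * v * w * w)

def hvec(u, v, w):
    # h'_j = (dg/dy_j)(f) for g = y2^2 - y1*y3
    return (-v * v * w * w, 2 * u * u * v * w, -u ** 4)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

for _ in range(5):
    u, v, w = (random.randint(-5, 5) for _ in range(3))
    hv = hvec(u, v, w)
    assert dot(hv, fvec(u, v, w)) == 0                   # syzygy of f
    assert dot(hv, (4 * u ** 3, 2 * u * v * w, 0)) == 0  # syzygy of the u-partials
    assert dot(hv, (0, u * u * w, 2 * v * w * w)) == 0   # syzygy of the v-partials
    assert dot(hv, (0, u * u * v, 2 * v * v * w)) == 0   # syzygy of the w-partials
```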

63

− − − − − − − − − − − − − − − − − − − − − − −

Gordan-Noether applied this to the partials of a form

with zero Hessian. In the next page I show such an

example.

− − − − − − − − − − − − − − − − − − − − − − −

64

Example

f = u^3x + u^2vy + v^3z. (We assume x1 = u, x2 = v, x3 = x, x4 = y, x5 = z.)

(f1, f2, f3, f4, f5) = (3u^2x + 2uvy, u^2y + 3v^2z, u^3, u^2v, v^3)

g(y1, y2, y3, y4, y5) = y4^3 − y3^2 y5

(h′1, h′2, h′3, h′4, h′5) = (0, 0, −2u^3v^3, 3u^4v^2, −u^6)

(h1, h2, h3, h4, h5) = (0, 0, −2v^3, 3uv^2, −u^3)

The differential equation is

0·∂F/∂x1 + 0·∂F/∂x2 − 2v^3 ∂F/∂x3 + 3uv^2 ∂F/∂x4 − u^3 ∂F/∂x5 = 0.   (6)

65

f satisfies not only this diff equation; it also satisfies:

0·∂F/∂x1 + 0·∂F/∂x2 − 6v^2 ∂F/∂x3 + 6uv ∂F/∂x4 − 0·∂F/∂x5 = 0

and

0·∂F/∂x1 + 0·∂F/∂x2 − 0·∂F/∂x3 + 3v^2 ∂F/∂x4 − 3u^2 ∂F/∂x5 = 0.

Gordan-Noether called the class of functions (6) “die Functionen Φ.”

Yamada called (h1, . . . , hn) a self-vanishing system.

The vector (h3, h4, h5) is a syzygy of (f3, f4, f5), i.e.,

h3f3 + h4f4 + h5f5 = 0.
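Both properties of h = (0, 0, −2v^3, 3uv^2, −u^3) — that it is a syzygy of the partials of f, and that it is self-vanishing (h composed with itself gives the zero vector) — can be checked at random points (a sketch; the partials are hand-computed):

```python
import random

def partials(u, v, x, y, z):
    # partials of f = u^3*x + u^2*v*y + v^3*z
    return (3 * u * u * x + 2 * u * v * y,  # f_u
            u * u * y + 3 * v * v * z,      # f_v
            u ** 3, u * u * v, v ** 3)      # f_x, f_y, f_z

def h(u, v, x, y, z):
    return (0, 0, -2 * v ** 3, 3 * u * v ** 2, -u ** 3)

for _ in range(5):
    p = [random.randint(-5, 5) for _ in range(5)]
    hv = h(*p)
    assert sum(a * b for a, b in zip(hv, partials(*p))) == 0  # syzygy of the partials
    assert h(*hv) == (0, 0, 0, 0, 0)  # self-vanishing: h1 = h2 = 0, and h3, h4, h5 depend only on u, v
```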

66

Now we consider forms with zero Hessian where n ≤ 5.

7 Rational map defined by a self-vanishing system h.

Let f(x) = f(x1, . . . , xn) be a form with zero Hessian.

Let fj = ∂f/∂xj and let h = (h1, . . . , hn) be a self-vanishing system of forms associated to f. Let

Z : Pn−1(x) → Pn−1(y)

be the rational map defined by the correspondence x =

(x1 : · · · : xn) 7→ (h1 : · · · : hn). Let W be the image

of Z and T the fundamental locus of Z in Pn−1(x). So

T is defined by the equations h1(x) = h2(x) = · · · =

hn(x) = 0. The algebraic set W ⊂ Pn−1(y) is defined

67

by the kernel of

ϕ : K[y] → K[x], yj → hj, j = 1, 2, · · ·

Lemma

Let h = (h1, . . . , hn) be a self-vanishing system of

forms in K[x1, . . . , xn]. Then rank (∂hi/∂xj) ≤ n/2.

In particular Krull.dim K[h1, . . . , hn] ≤ n/2.

(This lemma is independent of forms with zero Hessian.)

Proof. Consider the morphism of the affine space

ϕ : A^n → A^n

defined by xj ↦ hj(x). Since h is self-vanishing, the composition is zero: ϕ ◦ ϕ = 0. Let Jϕ = (∂hi/∂xj) be the

68

Jacobian matrix. Since we have Jϕ^2 = 0, the rank cannot exceed n/2. QED.
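For the Gordan-Noether example h = (0, 0, −2v^3, 3uv^2, −u^3), the square of the Jacobian is zero, illustrating the lemma (the Jacobian entries below are hand-computed; a sketch):

```python
def jac(u, v):
    # Jacobian of h = (0, 0, -2v^3, 3uv^2, -u^3) with respect to (u, v, x, y, z)
    return [
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, -6 * v * v, 0, 0, 0],
        [3 * v * v, 6 * u * v, 0, 0, 0],
        [-3 * u * u, 0, 0, 0, 0],
    ]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

J = jac(2, 3)
assert matmul(J, J) == [[0] * 5 for _ in range(5)]  # J^2 = 0, so rank J <= 5/2
```

The square vanishes because the nonzero columns of J (the u- and v-columns) meet only the zero rows of J, reflecting that h takes values in the locus u = v = 0.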

If n ≤ 5, then dimW ≤ 1.

−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−

Recall the definition of T,W , etc.

Z : Pn−1(x) → Pn−1(y)

is the rational map defined by h = (h1, · · · , hn).

W = Proj(K[h1, · · · , hn]), the image of Z.

T = Proj(K[x1, · · · , xn]/(h1, · · · , hn)), the fundamental locus of Z.

69

Proposition

FCAE.

(a) deg (hj) = 0, i.e., h is a constant vector.

(b) dim W = 0, i.e., W is a one-point set.

(c) T is empty, i.e., Z is a morphism.

Suppose (a) is not true. Then we have hj(h) = 0. This

shows T is not empty. Thus (c) ⇒ (a).

(a) ⇒ (b) ⇒ (c) are trivial.

In this case a variable can be eliminated from f(x) by

70

a linear transformation of the variables. Indeed we have

h1f1 + · · · + hnfn = 0.

In other words the partials of f(x) have a linear relation.

71

Proposition

If dim W ≥ 1, then

n/2 ≤ Krull.dim K[x]/(h1, . . . , hn) ≤ n − 2,

or equivalently,

n/2 − 1 ≤ dim T ≤ n − 3.

Proof. Since height (h1, · · · , hn) ≥ 2, we have “Krull dimension ≤ n − 2.” Consider the ring extension

S := K[h1, · · · , hn] → R := K[x1, · · · , xn].

We have shown that dim S ≤ n/2. (This is a consequence of “hj(h1, · · · , hn) = 0, ∀j,” a very telling argument!)

72

So the dimension of the fiber is ≥ n/2. QED.

If n = 4, then dimT = 1.

If n = 5, then 1 ≤ dimT ≤ 2.

−−−−−−−−−−−−−−−−−−−−−−−−−

Before we go to the next theorems, recall that W is the

image of the rational map:

Pn−1(x) → Pn−1(y)

(x) 7→ (h)

When dimW = 1, we have very satisfying theorems.

73

8 Proof for n = 4 and n = 5

Theorem

Assume that dim W = 1. Let

i : P^{n−1}(y) → P^{n−1}(x)

be the natural map yj ↦ xj. Then L(i(W)) ⊂ T, where L(i(W)) is the linear closure of i(W) in P^{n−1}(x).

Theorem

With the same notation and assumption, we have

hj(x) ∈ Sol(L(i(W ));R) for all j = 1, 2, . . . , n.

74

As long as dimW > 0, we have

i(W ) ⊂ T.

It is simply that

hj(h1, · · · , hn) = 0 ∀j.

The first theorem says that we can prove that the linear closure of i(W) is contained in T (under the assumption dim W = 1).

Problem: Is L(i(W)) ⊂ T not true if we drop “dim W = 1”? (We have to assume dim W ≠ 0.)

The second theorem says that h is of Gordan-Noether type.

Recall the example I showed you before:

75

Example 2

Let hj ∈ K[x] be homogeneous polynomials (of the

same degree). Suppose that h = (h1, . . . , hn) satisfy

the following conditions.

1. h1 = · · · = hr = 0, for some integer r; 1 ≤ r < n.

2. The polynomials hr+1, . . . , hn are functions only in

x1, . . . , xr.

Then h is a self-vanishing system of forms.

This should be called “Gordan-Noether type.”

Now we deal with forms with zero Hessian for n = 4, 5.

76

9 Meaning of Sol(L(i(W));R)

Assume n = 4. If W is a point, the partials of f have a linear relation. (So Hesse’s claim holds.) Assume dim W > 0. Then from n/2 − 1 ≤ dim T ≤ n − 3 we have dim T = 1. Hence the linear space L(i(W)) ⊂ T ⊂ P^3 is one dimensional. This means dim_K (Kh1 + Kh2 + Kh3 + Kh4) = 2, and we may assume that h1 = h2 = 0.

The linear space L(W) can be assumed to be {(0, 0, ∗, ∗)}. Recall that we used g(y1, y2, y3, y4) such that g(f1, f2, f3, f4) = 0 to define

h = (h1, h2, h3, h4).

Now h1 = h2 = 0 means that g is a form in two variables.

77

Reason: (∂g/∂y1)(f1, · · · , f4) = 0 means that (∂g/∂y1)(y1, · · · , y4) = 0, because we chose g to have the smallest degree. So g does not contain the variable y1. For the same reason, it does not contain y2.

Thus g involves only two variables. So it is a linear form, because it has to be an irreducible polynomial. Hence the partials of f have a linear relation.

Now assume n = 5. In this case we also have dim W = 1, but this time dim T = 1 or 2. This means that dim L(W) = 1 or 2, since L(i(W)) ⊂ T. If dim L(W) = 1, we may assume h1 = h2 = h3 = 0 and L(W) = {(0, 0, 0, ∗, ∗)}. Since hj ∈ Sol(L(i(W));R), h4 and h5 are polynomials only in x1, x2, x3. Since h1 = h2 = h3 = 0, as in the case

78

n = 4 we get ∂g/∂y1 = ∂g/∂y2 = ∂g/∂y3 = 0. So g is a polynomial in only two variables. It has to be a linear form, and it is a linear relation among the partials of f because g involves only two variables.

We are left with the case dim L(W) = 2; we may assume h1 = h2 = 0, L(W) = {(0, 0, ∗, ∗, ∗)}. So we may assume that h3, h4, h5 are polynomials only in x1, x2.

Recall the facts:

(1) f ∈ Sol(h;R)

(2) f ∈ Sol(∂h/∂xj;R), j = 1, · · · , n.

Note (1) follows from (2). Since h1 = h2 = 0, and since h3, h4, h5 are functions only in x1, x2, (2) may be

79

rewritten as

∣ ∂h3/∂x1 ∂h4/∂x1 ∂h5/∂x1 ∣ (f3, f4, f5)^T = (0, 0)^T.
∣ ∂h3/∂x2 ∂h4/∂x2 ∂h5/∂x2 ∣

Let A = (aij(x)) be the matrix:

A = ∣ 1/g1  0   ∣ ∣ ∂h3/∂x1 ∂h4/∂x1 ∂h5/∂x1 ∣
    ∣  0   1/g2 ∣ ∣ ∂h3/∂x2 ∂h4/∂x2 ∂h5/∂x2 ∣,

where g1 is the GCD of the 1st row and g2 is the GCD of the 2nd row.

We still have

A (f3, f4, f5)^T = (0, 0)^T.

80

We claim that if f ∈ K[x1, · · · , x5] is homogeneous of degree d with respect to x3, x4, x5, then

f ∈ K[x1, x2] ∆^d,

where

∆ = ∣ a13 a14 a15 ∣
    ∣ a23 a24 a25 ∣
    ∣ x3  x4  x5  ∣.

To prove it, note that the syzygy (f3, f4, f5) of the rows of A is determined by A up to a multiple by an element of K[x1, x2, x3, x4, x5]. In other words, (f3, f4, f5) = M (δ1, δ2, δ3), where M ∈ K[x1, · · · , x5] and (δ1, δ2, δ3) is the vector of signed 2 × 2 minors of A.

Take the dot product with (x3, x4, x5). Then, since f

81

is homogeneous with respect to x3, x4, x5, we have

(deg f) f = x3f3 + x4f4 + x5f5 = M∆.

Notice that ∆ ∈ Sol(∂h/∂xj;R), j = 1, 2.

Hence M ∈ Sol(∂h/∂xj;R), j = 1, 2.

By induction we have M = M′∆^{d−1}, M′ ∈ K[x1, x2]. Thus f = M′∆^d.

Even if we do not assume that f is homogeneous w.r.t. x3, x4, x5, we have shown that

f ∈ Sol(∂h/∂xj;R), j = 1, 2 ⇒ f ∈ K[x1, x2][∆].

QED. In the above proof we used the fact

rank A = tr.deg_K K(h3, h4, h5) = 2.

82

Even if we do not assume dim W = 1, it is possible that the condition

hj(x) ∈ Sol(L(i(W));R) for all j = 1, 2, . . . , n

is satisfied.

Gordan-Noether (in §6, the “f-Problem,” of their paper) describe all possible forms F with zero Hessian under the assumption

hj(x) ∈ Sol(L(i(W));R) for all j = 1, 2, . . . , n.

(I do not think they are quite right in the description of these forms.)

The condition

hj(x) ∈ Sol(L(i(W));R) for all j = 1, 2, . . . , n

is exactly what I called “Gordan-Noether type.”

83

Example 2

Let hj ∈ K[x] be homogeneous polynomials (of the

same degree). Suppose that h = (h1, . . . , hn) satisfy

the following conditions.

1. h1 = · · · = hr = 0, for some integer r; 1 ≤ r < n.

2. The polynomials hr+1, . . . , hn are functions only in

x1, . . . , xr.

Then h is a self-vanishing system of forms.

84

Questions

1. What are the other types of SVS’s arising from forms with zero Hessian?

2. Provide an integrity basis for the subring

∩_{j=1}^{µ} Sol(∂h/∂xj;R), where µ = dim W,

for h of Gordan-Noether type.

3. Prove that if f is a form with zero Hessian properly containing n variables, then the Gorenstein algebra that corresponds to it is not a complete intersection.

4. If f is a symmetric function properly containing n

85

variables, then prove that the Hessian of f does not vanish.

86

Thank you for listening.

87

References

[1] P. Gordan and M. Noether, Ueber die algebraischen Formen, deren Hesse’sche Determinante identisch verschwindet, Math. Ann. 10 (1876), 547-568.

[2] T. Maeno and J. Watanabe, Lefschetz elements of Artinian Gorenstein algebras and Hessians of homogeneous polynomials, Illinois J. Math. 53 (2009), no. 2, 591-603.

[3] J. Watanabe, A remark on the Hessian of homogeneous polynomials, in The Curves Seminar at Queen’s, Volume XIII, Queen’s Papers in Pure and Appl. Math., Vol. 119, 2000, 171-178.

[4] H. Yamada, On a theorem of Hesse — P. Gordan and M. Noether’s theory —, unpublished.

88