
CHAPTER VII

HYPERQUADRICS

Conic sections have played an important role in projective geometry almost since the beginning of the subject. In this chapter we shall begin by defining suitable projective versions of conics in the plane, quadrics in 3-space, and more generally hyperquadrics in n-space. We shall also discuss tangents to such figures from several different viewpoints, prove a geometric classification for conics similar to familiar classifications for ordinary conics and quadrics in R² and R³, and we shall derive an enhanced duality principle for projective spaces and hyperquadrics. Finally, we shall use a mixture of synthetic and analytic methods to prove a famous classical theorem due to B. Pascal (1623–1662)¹ on hexagons inscribed in plane conics, a dual theorem due to C. Brianchon (1783–1864),² and several other closely related results.

1. Definitions

The three familiar curves which we call the “conic sections” have a long history ... It seems that they will always hold a place in the curriculum. The beginner in analytic geometry will take up these curves after he has studied the circle. Whoever looks at a circle will continue to see an ellipse, unless his eye is on the axis of the curve. The earth will continue to follow a nearly elliptical orbit around the sun, projectiles will approximate parabolic orbits, [and] a shaded light will illuminate a hyperbolic arch. — J. L. Coolidge (1873–1954)

In classical Greek geometry, conic sections were first described synthetically as intersections of a plane and a cone. On the other hand, today such curves are usually viewed as sets of points (x, y) in the Cartesian plane which satisfy a nontrivial quadratic equation of the form

$$A x^2 + 2Bxy + Cy^2 + 2Dx + 2Ey + F = 0$$

where at least one of A, B, C is nonzero. In these notes we shall generally think of conics and quadrics in such terms. Here are some online references which relate the classical and modern approaches to these objects. The first contains some historical remarks, the second is a fairly detailed treatment which shows the equivalence of the classical and modern definitions only using material from elementary geometry, and the third contains a different proof that the definitions are equivalent using standard results from trigonometry.

http://xahlee.org/SpecialPlaneCurves_dir/ConicSections_dir/conicSections.html

http://mathdl.maa.org/convergence/1/?pa=content&sa=viewDocument&nodeId=196&bodyId=60

http://math.ucr.edu/~res/math153/history04Y.pdf

¹ Incidentally, he proved this result when he was 16 years old.

² This result was originally discovered without using duality.

The corresponding notion of quadric surface in R³ is generally defined to be the set of zeros of a nontrivial quadratic polynomial p(x, y, z) in three variables (nontriviality means that at least one term of degree two has a nonzero coefficient). One can similarly define a hyperquadric in Rⁿ to be the set of zeros of a nonzero quadratic polynomial p(x₁, ⋯, xₙ). Such an equation has the form

$$\sum_{i,j} a_{i,j}\,x_i x_j \;+\; 2\sum_{k} b_k x_k \;+\; c \;=\; 0$$

where at least one of the coefficients a_{i,j} is nonzero.

One obvious question about our definitions is to give a concise but useful description of all the different types of conics, quadrics or hyperquadrics that exist in Rⁿ. Using linear algebra, in each dimension it is possible to separate or classify such objects into finitely many types such that

if Σ₁ and Σ₂ are hyperquadrics that are affinely equivalent (so that there is an affine transformation T of Rⁿ such that T[Σ₁] = Σ₂), then Σ₁ and Σ₂ have the same type. In fact, one can choose the affine transformation to have the form T₁ ∘ T₀, where T₀ is a linear transformation and T₁ is given by a diagonal invertible linear transformation; in other words, there are nonzero scalars dᵢ such that for each i we have T₁(eᵢ) = dᵢeᵢ, where eᵢ is the i-th standard unit vector in Rⁿ.

For n = 2 and 3, the details of this classification are described explicitly in Section V.2 of the following online document:

http://math.ucr.edu/~res/math132/linalgnotes.pdf

The case of conics in R² is summarized in the table on page 82 of this document, and the case of quadrics in R³ is summarized in the table on page 83 of the same document. In particular, there are fewer than 10 different types of possible nonempty figures in R² (including degenerate cases of sets with one point or no points) and fewer than 20 different types of possible nonempty figures in R³ (also including an assortment of degenerate cases). Later in this chapter we shall describe the analogous classification for Rⁿ (with n ≥ 3 arbitrary) in one of the exercises.

Projective extensions of hyperquadrics

We are now faced with an obvious question:

How does one define a hyperquadric in projective space?

Let us consider the analogous situation in degree one. The sets of solutions to nontrivial linear equations p(x₁, ⋯, xₙ) = 0 are merely hyperplanes. If p(x₁, ⋯, xₙ) = Σᵢ aᵢxᵢ + b, then this hyperplane is just the set of ordinary points in RPⁿ whose homogeneous coordinates satisfy the homogeneous linear equation

$$\sum_{i=1}^{n} a_i x_i \;+\; b\,x_{n+1} \;=\; 0\,.$$

This suggests the following: Consider the quadratic polynomial

$$p(x_1, \cdots, x_n) \;=\; \sum_{i,j} a_{i,j}\,x_i x_j \;+\; 2\sum_{k} b_k x_k \;+\; c$$

and turn it into a homogeneous quadratic polynomial by multiplying each degree 1 monomial in the summation by x_{n+1} and multiplying the constant term by x_{n+1}². We then obtain the modified quadratic polynomial

$$\overline{p}(x_1, \cdots, x_{n+1}) \;=\; \sum_{i,j} a_{i,j}\,x_i x_j \;+\; 2\sum_{k} b_k x_k x_{n+1} \;+\; c\,x_{n+1}^2$$

which is homogeneous and has the following compatibility properties:

Theorem VII.1. (i) If X is a point in RPⁿ and ξ and ξ′ are homogeneous coordinates for X, then p̄(ξ) = 0 if and only if p̄(ξ′) = 0.

(ii) The image under J of the set of zeros of p is the set of ordinary points in RPⁿ whose homogeneous coordinates are zeros of p̄.

Proof. We shall prove the two parts separately.

PROOF OF (i). Observe that p̄(kξ) = k²·p̄(ξ) by direct computation. Therefore ξ′ = kξ for some k ≠ 0 implies that p̄(ξ′) = 0 if and only if p̄(ξ) = 0.

PROOF OF (ii). If x ∈ R^{n,1}, then the transpose of (x₁, ⋯, xₙ, 1) is a set of homogeneous coordinates for J(x) ∈ RPⁿ, and it is elementary to check that the solutions to the equation p = 0 are carried by J into the intersection of the set of ordinary points and the set of points in RPⁿ whose homogeneous coordinates are solutions to the equation p̄ = 0; in particular, we have p(x₁, ⋯, xₙ) = p̄(x₁, ⋯, xₙ, 1). Conversely, if p̄(x₁, ⋯, xₙ, x_{n+1}) = 0 where x_{n+1} ≠ 0, then we also have

$$0 \;=\; \frac{1}{x_{n+1}^2}\,\overline{p}(x_1, \cdots, x_n, x_{n+1}) \;=\; \overline{p}\!\left(\frac{x_1}{x_{n+1}}, \cdots, \frac{x_n}{x_{n+1}},\, 1\right) \;=\; p\!\left(\frac{x_1}{x_{n+1}}, \cdots, \frac{x_n}{x_{n+1}}\right)$$

and hence the solutions to p̄ = 0 which lie in the image of J all correspond to ordinary points that are solutions to p = 0.
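The homogenization step is easy to experiment with symbolically. The following sketch is my own illustration (it assumes the sympy library; the helper name homogenize and the sample polynomial are hypothetical choices, not from the notes) and checks the two properties established in Theorem VII.1 for one concrete quadratic polynomial.

```python
# A small sketch (assumes sympy) of the homogenization used above: each
# degree-1 term is multiplied by x_{n+1} and the constant term by x_{n+1}^2,
# so that p(x_1, ..., x_n) = pbar(x_1, ..., x_n, 1).
import sympy as sp

def homogenize(p, xs, w):
    """Return the degree-2 homogenization pbar of a quadratic polynomial p."""
    terms = []
    for monom, coeff in sp.Poly(sp.expand(p), *xs).terms():
        deg = sum(monom)
        term = coeff * sp.Mul(*[v**e for v, e in zip(xs, monom)])
        terms.append(term * w**(2 - deg))      # pad each term up to total degree 2
    return sp.expand(sum(terms))

x, y, w, k = sp.symbols('x y w k')
p = x**2 + 2*x*y - 3*y**2 + 4*x - 6*y + 5      # a sample affine conic (my own choice)
pbar = homogenize(p, (x, y), w)

# pbar agrees with p on ordinary points (w = 1) ...
assert sp.simplify(pbar.subs(w, 1) - p) == 0
# ... and is homogeneous of degree 2, as in part (i) of the theorem.
assert sp.simplify(pbar.subs({x: k*x, y: k*y, w: k*w}) - k**2 * pbar) == 0
print(pbar)   # e.g.  x**2 + 2*x*y + 4*w*x - 3*y**2 - 6*w*y + 5*w**2
```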

All of the preceding discussion makes at least formal sense over an arbitrary field F; of course, the mathematical value of the quadrics considered depends strongly upon the solvability of quadratic equations within the given field.³ Define a hyperquadric Σ in FPⁿ to be the set of zeros of a homogeneous quadratic equation:

$$\sum_{i,j=1}^{n+1} a_{i,j}\,x_i x_j \;=\; 0$$

In the study of hyperquadrics we generally assume that 1 + 1 ≠ 0 in F. This condition allows us to choose the coefficients a_{i,j} so that a_{i,j} = a_{j,i}; for if we are given an arbitrary homogeneous quadratic equation as above and set b_{i,j} = ½(a_{j,i} + a_{i,j}), then it is easy to see that

$$\sum_{i,j=1}^{n+1} a_{i,j}\,x_i x_j = 0 \quad\text{if and only if}\quad \sum_{i,j=1}^{n+1} b_{i,j}\,x_i x_j = 0$$

because we have

$$\sum_{i,j=1}^{n+1} b_{i,j}\,x_i x_j \;=\; \frac{1}{2}\left(\,\sum_{i,j=1}^{n+1} a_{i,j}\,x_i x_j \;+\; \sum_{i,j=1}^{n+1} a_{j,i}\,x_i x_j\right).$$

³ All fields in this chapter are assumed to have commutative multiplications.


For these reasons, we shall henceforth assume that 1 + 1 ≠ 0 in F and that a_{i,j} = a_{j,i} for all i and j.
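Since this symmetrization step is used constantly in what follows, here is a minimal numerical check, my own illustration assuming numpy, that replacing a coefficient matrix A by ½(A + ᵀA) does not change any value of the quadratic form, and hence does not change the hyperquadric.

```python
# Minimal check (assumes numpy): the symmetrized matrix (A + A^T)/2 defines
# the same quadratic form, hence the same hyperquadric, as A itself.
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(4, 4)).astype(float)   # an arbitrary (not symmetric) coefficient matrix
B = (A + A.T) / 2.0                                   # b_{i,j} = (a_{i,j} + a_{j,i}) / 2

for _ in range(100):
    xi = rng.standard_normal(4)
    assert np.isclose(xi @ A @ xi, xi @ B @ xi)       # same value of the quadratic form
```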

It is natural to view the coefficients a_{i,j} as the entries of a symmetric (n+1)×(n+1) matrix A. If we do so and Σ is the hyperquadric in FPⁿ defined by the equation Σ_{i,j} a_{i,j}xᵢxⱼ = 0, then we may rewrite the defining equation for Σ as follows: A point X lies on Σ if and only if for some (equivalently, for all) homogeneous coordinates ξ representing X we have

$${}^{\mathsf T}\xi\, A\,\xi \;=\; 0\,.$$

If we have an affine quadric in Fⁿ defined by a polynomial p as above, then an (n+1)×(n+1) matrix defining its projective extension is given in block form by

$$\begin{pmatrix} A & {}^{\mathsf T}b \\ b & c \end{pmatrix}$$

where the symmetric matrix A = (a_{i,j}) gives the second degree terms of p, the row vector 2·b gives the first degree terms bₖ (note the coefficient!), and c gives the constant term.
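To make the block description concrete, here is a sketch (my own illustration, assuming numpy; the helper name projective_extension is hypothetical) which assembles the 3×3 matrix for the affine parabola y = x², whose degree-one part is −y = 2·(−½)y, so that b = (0, −½) and c = 0.

```python
# Sketch (assumes numpy) of the block construction above for an affine
# quadric written as  sum a_ij x_i x_j + 2 sum b_k x_k + c = 0.
# Example: the parabola y = x^2, i.e. x^2 - y = 0, so that
#   A = [[1,0],[0,0]],  2*b = (0,-1)  =>  b = (0,-1/2),  c = 0.
import numpy as np

def projective_extension(A, b, c):
    """Return the symmetric (n+1)x(n+1) block matrix [[A, b^T], [b, c]]."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(1, -1)
    top = np.hstack([A, b.T])
    bottom = np.hstack([b, [[float(c)]]])
    return np.vstack([top, bottom])

A = [[1.0, 0.0], [0.0, 0.0]]
b = [0.0, -0.5]                        # half of the degree-one coefficients (0, -1)
c = 0.0
M = projective_extension(A, b, c)
print(M)
# [[ 1.   0.   0. ]
#  [ 0.   0.  -0.5]
#  [ 0.  -0.5  0. ]]
# In homogeneous coordinates this is x_1^2 - x_2 x_3 = 0, the projective
# extension of the parabola that reappears in the examples of Section 2.
xi = np.array([2.0, 4.0, 1.0])         # (x, y, 1) with y = x^2
assert np.isclose(xi @ M @ xi, 0.0)
```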

Hypersurfaces of higher degree

The reader should be able to define projective hypercubics, hyperquartics, etc., as well as the projective hyper—ic associated to an affine hyper—ic. Subsets of these types are generally called projective algebraic varieties; they have been studied extensively over the past 300 years and have many interesting and important properties. The mathematical study of such objects has remained an important topic in mathematics ever since the development of projective geometry during the 19th century, but it very quickly gets into issues far beyond the scope of these notes.

In particular, the theory involves a very substantial amount of input from multivariable calculus, and the usual approaches also require considerably more sophisticated algebraic machinery than we introduce in these notes. The rudiments of the theory appear in Sections V.4–V.6 of the book by Bumcrot, and a more complete treatment at an advanced undergraduate level is given in Seidenberg, Elements of the Theory of Algebraic Curves, as well as numerous other introductory books on algebraic geometry.

Projective algebraic varieties also turn out to have important applications in various directions, including issues in theoretical physics, the theory of encryption, and even the proof of Fermat’s Last Theorem during the 1990s which was mainly due to Andrew Wiles (the word “mainly” is included because the first complete proof required some joint work of Wiles with R. Taylor, and Wiles’ work starts with some important earlier results by others). A reader who wishes to learn more about some of these matters may do so by going to the final part of Section IV.5 in the online document

http://math.ucr.edu/~res/math133/coursenotes4b.pdf and checking the traditional and electronic references cited there.

EXERCISES

1. Consider the conics in R² defined by the following equations:

(i) The circle defined by x² + y² − 1 = 0.

(ii) The hyperbola defined by xy − 1 = 2.

(iii) The parabola defined by y − x² = 0.


Show that the associated projective conics have 0, 2 and 1 points at infinity respectively, and give homogeneous coordinates for these points.

2. Find which points (if any) at infinity belong to the projective conics associated to the conics in R² defined by the following equations.

(i) x² − 2y² − 2xy = 0

(ii) 3x² + 4y² − 4x + 2 = 0

(iii) x² + y² − 4y = 4

(iv) x² − 4xy − 4y² − 2y = 4

3. Find the points at infinity on the projective quadrics associated to the quadrics in R³ defined by the following equations.

(i) x² + y² − z² = 1

(ii) x² + y² − z² − 6x − 8y = 0

(iii) x² + y² = 2z

(iv) x² − y² − z² = 1

(v) x² + y² = z

(vi) x² + y² = z²

4. For each of the following affine quadrics Σ in R³, find a symmetric 4×4 matrix A such that the projective extension P(Σ) of Σ is defined by the equation ᵀξAξ = 0.

(i) Σ is defined by the affine equation 4x² + 3y² − z² + 2x + y + 2z − 1 = 0.

(ii) Σ is defined by the affine equation 3x² + y² + 2z² + 3x + 3y + 4z = 0.

(iii) Σ is defined by the affine equation 2x² + 4z² − 4x − y − 24z + 36 = 0.

(iv) Σ is defined by the affine equation 4x² + 9y² + 5z² − 4xy + 8yz + 12xz + 9z − 3 = 0.


2. Tangents

Tangent lines to circles play an important role in classical Euclidean geometry, and their generalizations to other conics were also known to classical Greek mathematicians such as Archimedes⁴ (287 B. C. E. – 212 B. C. E.) and Apollonius of Perga⁵ (c. 262 B. C. E. – c. 190 B. C. E.). In modern mathematics they are generally defined using concepts and results from single variable or multivariable differential calculus. Of course, the latter is designed to work primarily in situations where the coordinates are real or complex numbers, and since we want to consider more general coordinates we need to develop an approach that is at least somewhat closer to the classical viewpoint.

In these notes we shall concentrate on the following two ways of viewing tangents to conics in R² or quadrics in R³.

1. SYNTHETIC APPROACH. A line is tangent to a hyperquadric if and only if it lies wholly in the hyperquadric or has precisely one point of intersection with the hyperquadric.

2. ANALYTIC APPROACH. Let X ∈ Σ ∩ L, where Σ is a hyperquadric and L is a line. Then L is tangent to Σ if and only if there is a differentiable curve γ : (a, b) → Rⁿ lying totally in Σ such that γ(t₀) = X for some t₀ ∈ (a, b) and L is the line X + R·γ′(t₀).

For our purposes the first viewpoint will be more convenient; in Appendix E we shall show that the analytic approach is consistent with the synthetic viewpoint, at least in all the most important cases. Actually, the viewpoint of calculus is the better one for generalizing tangents to cubics, quartics, etc., but a correct formulation is too complicated to be given in these notes.
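For one concrete case the two viewpoints are easy to compare. The sketch below, my own illustration assuming sympy, parametrizes the unit circle, forms the calculus tangent line at a point, and checks that this line meets the circle only at that point, as the synthetic definition requires.

```python
# Sketch (assumes sympy): for the unit circle x^2 + y^2 = 1, the calculus
# tangent line at gamma(t0) meets the circle in exactly one point, so the
# analytic and synthetic notions of tangency agree in this example.
import sympy as sp

t, s, t0 = sp.symbols('t s t0', real=True)
gamma = sp.Matrix([sp.cos(t), sp.sin(t)])        # a differentiable curve lying in the circle
point = gamma.subs(t, t0)                        # gamma(t0)
velocity = gamma.diff(t).subs(t, t0)             # gamma'(t0)

line = point + s * velocity                      # the analytic tangent line gamma(t0) + s*gamma'(t0)
restriction = sp.simplify(line[0]**2 + line[1]**2 - 1)

print(restriction)                               # s**2
assert sp.solve(sp.Eq(restriction, 0), s) == [0] # the only intersection is at s = 0
```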

We begin with a result on solutions to homogeneous quadratic equations in two variables:

Theorem VII.2. Suppose that F is a field in which 1 + 1 ≠ 0, and (x₁, y₁), (x₂, y₂), (x₃, y₃) are solutions to the homogeneous quadratic equation

$$a x^2 \;+\; b xy \;+\; c y^2 \;=\; 0\,.$$

Then either a = b = c = 0 or else one of (x₁, y₁), (x₂, y₂), (x₃, y₃) is a nonzero multiple of another.

Proof. If the hypothesis holds, then in matrix terminology we have

$$\begin{pmatrix} x_1^2 & x_1 y_1 & y_1^2 \\ x_2^2 & x_2 y_2 & y_2^2 \\ x_3^2 & x_3 y_3 & y_3^2 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} \;=\; \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.$$

⁴ Archimedes of Syracuse is well known to be one of the most important figures in Greek mathematics; his contributions to physics and engineering innovations are also well known.

⁵ Apollonius of Perga is particularly known for his extensive study of conic sections, which goes far beyond anything previously written. He is also known for other contributions to mathematics and astronomy.


Suppose that a, b, c are not all zero. Then the given 3×3 matrix is not invertible and hence has a zero determinant. But the determinant of such a matrix may be computed directly, and up to a sign factor it is equal to

$$\begin{vmatrix} x_1 & y_1 \\ x_2 & y_2 \end{vmatrix}\cdot \begin{vmatrix} x_1 & y_1 \\ x_3 & y_3 \end{vmatrix}\cdot \begin{vmatrix} x_2 & y_2 \\ x_3 & y_3 \end{vmatrix}$$

The vanishing of this determinant implies that one of the 2×2 determinants in the factorization must be zero, and the latter implies that the rows of the associated 2×2 matrix are proportional to each other.

The preceding result has the following important geometric application:

Theorem VII.3. Let Σ be a hyperquadric in FPⁿ, let X ∈ Σ, and let L be a line containing X. Then Σ ∩ L is either {X}, two points, or all of L.

Proof. Let Y ≠ X where Y ∈ L, let ξ and η denote homogeneous coordinates for X and Y respectively, and suppose that Σ is defined by the equation ᵀωAω = 0, where A is a symmetric (n+1)×(n+1) matrix and ω represents W ∈ FPⁿ.

If Z ∈ L and is represented by the homogeneous coordinates ζ, then ζ = uξ + vη for some u, v ∈ F that are not both zero. By construction, Z ∈ Σ if and only if

$$0 \;=\; {}^{\mathsf T}\zeta A\zeta \;=\; {}^{\mathsf T}(u\xi + v\eta)\,A\,(u\xi + v\eta) \;=\; u^2\,{}^{\mathsf T}\xi A\xi \;+\; 2uv\,{}^{\mathsf T}\eta A\xi \;+\; v^2\,{}^{\mathsf T}\eta A\eta \;=\; u^2 p \;+\; 2uv\,q \;+\; v^2 r$$

for suitable constants p, q, r. We claim that Σ ∩ L has at least three points if and only if L ⊂ Σ. The "if" implication is trivial, so we shall focus on the "only if" direction. Suppose that Z₁, Z₂, Z₃ are points of Σ ∩ L, and take homogeneous coordinates ζᵢ = uᵢξ + vᵢη for Zᵢ. By Theorem 2, either p = q = r = 0 (in which case L ⊂ Σ) or else one of the pairs (uᵢ, vᵢ) is a nonzero multiple of another, say (uⱼ, vⱼ) = m(uₖ, vₖ) for some m ≠ 0. In the latter case we have Zⱼ = Zₖ, and hence Z₁, Z₂, Z₃ are not distinct.
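The binary quadratic in (u, v) appearing in this proof is also a practical computational device: over R its real projective roots tell us how a line meets a conic. The sketch below is my own illustration (assuming numpy; the helper name classify_line is hypothetical). The third example line does not pass through a point of Σ, which is why a "no intersection" case can occur there even though it does not appear in the statement of Theorem VII.3.

```python
# Sketch (assumes numpy): intersections of the line XY with Sigma_A via the
# quadratic  u^2 p + 2 u v q + v^2 r = 0  from the proof above, where
# p = xi^T A xi,  q = eta^T A xi,  r = eta^T A eta.
import numpy as np

A = np.diag([1.0, 1.0, -1.0])          # projective unit circle x1^2 + x2^2 - x3^2 = 0

def classify_line(xi, eta, A, tol=1e-12):
    xi, eta = np.asarray(xi, float), np.asarray(eta, float)
    p, q, r = xi @ A @ xi, eta @ A @ xi, eta @ A @ eta
    if abs(p) < tol and abs(q) < tol and abs(r) < tol:
        return "line contained in the conic"
    disc = q * q - p * r               # discriminant of the binary quadratic in (u, v)
    if disc > tol:
        return "secant: two intersection points"
    if disc < -tol:
        return "no real intersection points"
    return "tangent: exactly one intersection point"

print(classify_line([1, 0, 1], [0, 0, 1], A))   # secant: two intersection points
print(classify_line([1, 0, 1], [0, 1, 0], A))   # tangent: exactly one intersection point
print(classify_line([1, 0, 0], [0, 1, 0], A))   # no real intersection points (line at infinity)
```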

Definition. Let Σ be a hyperquadric, let X ∈ Σ, and let L be a line containing X. We shall say that L is a tangent line to Σ at X if either Σ ∩ L = {X} or L ⊂ Σ. In the remaining case, where Σ ∩ L consists of two points, we shall say that L is a secant line through X. The tangent space to Σ at X is equal to the union of all tangent lines to Σ at X.

Singular and nonsingular points

If we consider the conic in R² defined by the equation x² − y² = 0, we see that the structure of the conic at the origin is different from its structure at other points, for the conic is given by a pair of lines which intersect at the origin. Some words which may be used to describe this difference are exceptional, special or singular. A concise but informative overview of singular points for plane curves appears in the following online reference:

http://mathworld.wolfram.com/SingularPoint.html

There are corresponding theories of singularities for surfaces in R³, and more generally for hypersurfaces in Rⁿ. Not surprisingly, if one is only interested in hyperquadrics as in these notes, then everything simplifies considerably. We shall explain the relationship between the theory of singular and nonsingular points for hyperquadrics and the general case in Appendix E.

We have given a purely synthetic definition of the tangent space to a hyperquadric Σ ⊂ FPⁿ at a point X ∈ Σ. The first step is to give an algebraic description of the tangent space in terms of homogeneous coordinates.

Theorem VII.4. Let F and Σ ⊂ FPⁿ be as above, and let X ∈ Σ. Then the tangent space to Σ at X is either a hyperplane in FPⁿ or all of FPⁿ. In the former case, X is said to be a nonsingular point, and in the latter case X is said to be a singular point. Furthermore, if Σ is defined by the symmetric matrix A and ξ is a set of homogeneous coordinates for X, then in the nonsingular case ᵀξA is a (nonzero) set of homogeneous coordinates for the tangent hyperplane, but in the singular case we have ᵀξA = 0.

EXAMPLES. Suppose we consider the projectivizations of the circle x² + y² = 1, the hyperbola x² − y² = 1, the parabola y = x², and the pair of intersecting lines x² = y². Then the corresponding projective conics are defined by the following homogeneous quadratic equations:

$$x_1^2 + x_2^2 - x_3^2 = 0\,, \qquad x_1^2 - x_2^2 - x_3^2 = 0\,, \qquad x_1^2 - x_2 x_3 = 0\,, \qquad x_1^2 - x_2^2 = 0$$

In the first three cases the associated 3×3 symmetric matrix A is invertible, and hence ᵀξA ≠ 0 for all nonzero ξ, so that every point of these projective conics is a nonsingular point. On the other hand, in the fourth example the symmetric matrix A is not invertible, and in fact its kernel (either on the left or the right side!) consists of all vectors whose first and second coordinates are equal to zero. This implies that all points on the conic except J(0) are nonsingular but J(0) is singular. These examples are all consistent with our intuition that the first three curves behave regularly (or are nonsingular) at all points and the fourth curve behaves regularly at all points except the origin.
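Before turning to the proof, these example computations are easy to automate; the sketch below (my own illustration, assuming numpy) writes down the four symmetric matrices, checks which are invertible, and evaluates ᵀξA at J(0) for the pair of intersecting lines.

```python
# Sketch (assumes numpy): the symmetric matrices of the four example conics.
# A point with homogeneous coordinates xi is singular exactly when xi^T A = 0.
import numpy as np

conics = {
    "circle    x1^2 + x2^2 - x3^2": np.diag([1.0, 1.0, -1.0]),
    "hyperbola x1^2 - x2^2 - x3^2": np.diag([1.0, -1.0, -1.0]),
    "parabola  x1^2 - x2 x3":
        np.array([[1.0, 0.0, 0.0], [0.0, 0.0, -0.5], [0.0, -0.5, 0.0]]),
    "line pair x1^2 - x2^2": np.diag([1.0, -1.0, 0.0]),
}

for name, A in conics.items():
    det = np.linalg.det(A)
    print(f"{name}: det = {det:+.3f}, invertible = {abs(det) > 1e-12}")

# The only non-invertible matrix is the line pair; its single singular point
# is J(0), with homogeneous coordinates (0, 0, 1):
xi = np.array([0.0, 0.0, 1.0])
print(xi @ conics["line pair x1^2 - x2^2"])     # [0. 0. 0.]  =>  singular point
```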

Proof. In the proof of the preceding theorem we noted that if Y ∈ FPⁿ has homogeneous coordinates η and Z ∈ XY has homogeneous coordinates ζ = uξ + vη, then Z ∈ Σ if and only if

$$u^2\left({}^{\mathsf T}\xi A\xi\right) \;+\; 2uv\left({}^{\mathsf T}\xi A\eta\right) \;+\; v^2\left({}^{\mathsf T}\eta A\eta\right) \;=\; 0$$

and the number of points of XY ∩ Σ depends upon the equivalence classes of solutions to this equation, which we shall call the INTERSECTION EQUATION.

CLAIM: The line XY is tangent to Σ if and only if ᵀξAη = ᵀηAξ = 0.

Suppose first that XY is tangent to Σ. If XY is contained in Σ, then we have

$${}^{\mathsf T}\xi A\xi \;=\; {}^{\mathsf T}\eta A\eta \;=\; {}^{\mathsf T}(\xi+\eta)\,A\,(\xi+\eta) \;=\; 0$$

and elementary manipulations of these equations show that 2·ᵀηAξ = 0. On the other hand, if XY ∩ Σ = {X}, then ᵀηAη ≠ 0 and the only solutions to the Intersection Equation in the first paragraph of the proof are pairs (u, v) which are nonzero scalar multiples of (1, 0). Therefore, the Intersection Equation evaluated at (1, t) is equal to zero if and only if t = 0. However, it is easy to check that the ordered pair

$$\left(1,\; -\,\frac{{}^{\mathsf T}\xi A\eta}{{}^{\mathsf T}\eta A\eta}\right)$$

solves the Intersection Equation because ᵀξAξ = 0, and therefore we must have ᵀξAη = ᵀηAξ = 0.

Conversely, suppose that ᵀξAη = ᵀηAξ = 0. Since ᵀξAξ = 0, the Intersection Equation reduces to

$$v^2\left({}^{\mathsf T}\eta A\eta\right) \;=\; 0\,.$$

This equation means that either ᵀηAη = 0, in which case we have XY ⊂ Σ, or else v = 0, in which case every solution (u, v) of the Intersection Equation is proportional to the known solution (1, 0), so that Σ ∩ XY = {X}.

To conclude the proof, we have shown that the tangent space at X is the set of all points Y such that ᵀξAη = 0. If ᵀξA = 0, this is all of FPⁿ, and if ᵀξA ≠ 0, this is the hyperplane with homogeneous coordinates ᵀξA.

We shall say that a hyperquadric Σ is nonsingular if for each X ∈ Σ the tangent space at X is a hyperplane (algebraically, this means that if ξ represents X then ᵀξA ≠ 0).

Theorem VII.5. If Σ is a hyperquadric defined by the symmetric matrix A, then Σ is nonsingular if and only if A is invertible.

Proof. Suppose first that A is invertible. Then ξ ≠ 0 implies that ᵀξA is nonzero, and by the preceding result it follows that the tangent space at every point must be a hyperplane.

Conversely, suppose that A is not invertible. Then there is some ξ ≠ 0 such that ᵀξA = 0, and if ξ represents X it follows that X ∈ Σ and X is a singular point of Σ.

By definition, each symmetric matrix A determines a hyperquadric Σ_A. This is not a 1–1 correspondence, for if c is a nonzero scalar then clearly Σ_A = Σ_{cA}. We shall now use the notion of tangent hyperplane to show that, in many cases, this is the only condition under which two matrices can define the same hyperquadric. Further discussion of this question is given in Section 2 of Appendix E.

Theorem VII.6. Let A and B be symmetric (n+1)×(n+1) matrices over the field F in which 1 + 1 ≠ 0, and suppose that they define the same nonempty hyperquadric Σ in FPⁿ. Assume that Σ has at least one nonsingular point. Then B is a scalar multiple of A.

Proof. We are given that Σ has a nonsingular point X; let ξ be a set of homogeneous coordinates for X. Then both ᵀξA and ᵀξB define the same hyperplane, and hence ᵀξA = k·ᵀξB for some nonzero scalar k.

Suppose now that Y does not lie on this tangent hyperplane, and let η be a set of homogeneous coordinates for Y. Then the line XY meets Σ in a second point which has homogeneous coordinates of the form uξ + η for some u ∈ F. This scalar satisfies the following equations:

$$2u\,{}^{\mathsf T}\xi A\eta \;+\; {}^{\mathsf T}\eta A\eta \;=\; 0\,, \qquad 2u\,{}^{\mathsf T}\xi B\eta \;+\; {}^{\mathsf T}\eta B\eta \;=\; 0$$

Since ᵀξA = k·ᵀξB, the equations above imply that

$${}^{\mathsf T}\eta A\eta \;=\; k\cdot{}^{\mathsf T}\eta B\eta$$

for all Y whose homogeneous coordinates satisfy ᵀξAη ≠ 0 (i.e., all vectors in F^{n+1,1} except those in the n-dimensional subspace determined by the tangent hyperplane to Σ at X).

To prove that ᵀηAη = k·ᵀηBη if Y lies in the tangent hyperplane at X, let Z be a point which is not on the tangent hyperplane, and let ζ be a set of homogeneous coordinates for Z. Then

$${}^{\mathsf T}\omega A\omega \;=\; k\cdot{}^{\mathsf T}\omega B\omega$$

for ω = ζ, η + ζ, η − ζ. Let C = A or B, and write Ψ_C(γ, δ) = ᵀγCδ. We then have the following:

$$\Psi_C(\eta, \zeta) \;=\; \tfrac{1}{4}\,\Psi_C(\eta+\zeta,\,\eta+\zeta) \;-\; \tfrac{1}{4}\,\Psi_C(\eta-\zeta,\,\eta-\zeta)\,, \qquad \Psi_C(\eta, \eta) \;=\; \Psi_C\big((\eta+\zeta)-\zeta,\;(\eta+\zeta)-\zeta\big)$$

By the first of these and the preceding paragraph, we have Ψ_A(η, ζ) = k·Ψ_B(η, ζ). Using this, the second equation above and the preceding paragraph, we see that Ψ_A(η, η) = k·Ψ_B(η, η) if η represents a point Y in the tangent hyperplane to Σ at X. Applying this and the first displayed equation to arbitrary nonzero vectors η, ζ ∈ F^{n+1,1}, we see that Ψ_A(η, ζ) = k·Ψ_B(η, ζ). Since c_{i,j} is the value of Ψ_C(eᵢ, eⱼ), where eᵢ and eⱼ are standard unit vectors (the m-th coordinate of eₘ is 1 and the rest are 0), we see that a_{i,j} = k·b_{i,j} for all i and j, and hence A = k·B, so that B is a scalar multiple of A.

Intersections of two conics

Earlier in this section we noted that a line and a quadric intersect in at most two points unless the line is contained in the quadric. This may be viewed as a generalization of a standard fact from elementary algebra; namely, if we are given a system of two equations in two unknowns, with one linear and the other quadratic, then the system has at most two solutions. There is a similar principle which states that two quadratic equations in two unknowns have at most four solutions. In terms of analytic geometry, this means that two conics have at most four points in common. A geometrical derivation of the latter result for projective conics appears in Exercise II.6.11, and in the note following this exercise the statement about solutions to systems of quadratic equations is also discussed.

EXERCISES

In all these exercises, F denotes a (commutative) field in which 1 + 1 ≠ 0.

1. Find the singular points (if any) of the projective conics given in Exercise 3 of the previous section.

2. Find the equations of the tangent lines to the following conics in RP2 at the indicated points:

(i) The conic defined by x₁² + 2x₁x₂ + 4x₁x₃ + 3x₂² − 12x₁x₃ + 2x₃² = 0 at the points (1, 1, 1) and (1, 1, 3).


(ii) The conic defined by x₁² − 2x₁x₂ + 4x₂² − 4x₃² = 0 at the points (2, 2, −1) and (2, 0, 1).

Definition. Let Σ be a hyperquadric in FPⁿ defined by the (n+1)×(n+1) matrix A such that Σ has at least one nonsingular point. Two points X and Y in FPⁿ are said to be conjugate with respect to Σ if they have homogeneous coordinates ξ and η respectively such that ᵀξAη = 0. By Theorem 6, this definition does not depend upon any of the choices (including A). Moreover, a point is self-conjugate if and only if it lies on Σ.

3. In the setting above, assume that X ∉ Σ and that Y is conjugate to X with respect to Σ. Suppose that XY ∩ Σ consists of two points, say A and B. Prove that XR(X, Y, A, B) = −1.

Note. If Σ is nonsingular and nonempty (hence A is invertible by Theorem 5) and X ∈ FPⁿ, then the hyperplane with homogeneous coordinates ᵀξA is called the polar hyperplane of X with respect to Σ. The map P sending X to its polar hyperplane is a collineation from FPⁿ to its dual (FPⁿ)*; it is called a polarity, and it has the property that the composite

$$\mathbb{F}P^{n}\;\xrightarrow{\;P\;}\;(\mathbb{F}P^{n})^{\ast}\;\xrightarrow{\;P\;}\;(\mathbb{F}P^{n})^{\ast\ast}\;=\;\mathbb{F}P^{n}$$

is the identity.

4. Let Σ be an affine hyperquadric in Fⁿ, where n ≥ 3, and suppose that L is a line in Fⁿ such that L ⊂ Σ. Denote the projective extension of Σ by Σ̄. Prove that the ideal point of L, and in fact the entire projective line consisting of J[L] together with this ideal point, is contained in Σ̄. [Hint: The field F contains at least three elements. What does this imply about the number of points on L, and how does this lead to the desired conclusion?]

5. Prove the determinant identity stated in the proof of Theorem 2 (when each yᵢ = 1 this is the classical Vandermonde determinant).⁶

6. Consider the conic in FP² defined by the equation x₁x₂ − x₃² = 0. What are the tangent lines to this curve at its points of intersection with the line at infinity defined by x₃ = 0? [Hint: If F = R then this conic is the projective extension of the hyperbola with equation xy = 1.]

7. Answer the corresponding question for the conic defined by the equation x₁² − x₂x₃ = 0. [Hint: If F = R then this conic is the projective extension of the parabola with equation y = x².]

⁶ See http://www.math.duke.edu/johnt/math107/vandermonde.pdf for more on this topic. Alexandre Théophile Vandermonde (1735–1796) is known for his work on the roots of polynomial equations and his fundamental results on determinants.


3. Bilinear forms

At this point it is convenient to discuss a topic in linear algebra which is generally not covered in first courses on the subject. For the time being, F will be a (commutative) field with no assumption on whether 1 + 1 = 0 or 1 + 1 ≠ 0.

Definition. Let V be a vector space over F. A bilinear form on V is a function Φ : V × V → F with the following properties:

(Bi–1) Φ(v + v′, w) = Φ(v, w) + Φ(v′, w) for all v, v′, w ∈ V.

(Bi–2) Φ(v, w + w′) = Φ(v, w) + Φ(v, w′) for all v, w, w′ ∈ V.

(Bi–3) Φ(c·v, w) = c·Φ(v, w) = Φ(v, c·w) for all v, w ∈ V and c ∈ F.

The reader will notice the similarities between the identities for Φ and the identities defining the dot product on Rⁿ. Both are scalar valued, distributive in both variables, and homogeneous (of degree 1) with respect to scalars. However, we are not assuming that Φ is commutative — in other words, we make no assumption about the difference between Φ(v, w) and Φ(w, v) — and we can have Φ(x, x) = 0 even if x is nonzero.

EXAMPLES. 1. Let F = R and V = R², and let Φ(x, y) = x₁y₂ − x₂y₁, where by convention a ∈ R² can be written in coordinate form as (a₁, a₂). Then Φ(y, x) = −Φ(x, y) for all x and y, and we also have Φ(z, z) = 0 for all z ∈ R².

2. Let F and V be as above, and Φ(x, y) = x₁y₁ − x₂y₂. In this case we have the commutativity identity Φ(y, x) = Φ(x, y) for all x and y, but if z = (1, 1), or any multiple of the latter, then Φ(z, z) = 0.

3. Let A be an n×n matrix over F, and let V be the vector space of all n×1 column matrices. Define a bilinear form Φ_A on V by the formula

$$\Phi_A(x, y) \;=\; {}^{\mathsf T}x\,A\,y\,.$$

Examples of this sort appeared frequently in the preceding section (see also Appendix E). Actually, the first two examples are special cases of this construction in which A is given as follows:

$$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\,, \qquad\qquad \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
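As a quick numerical confirmation, my own illustration assuming numpy, that these two matrices reproduce Examples 1 and 2 under the rule Φ_A(x, y) = ᵀxAy:

```python
# Sketch (assumes numpy): the forms of Examples 1 and 2 as Phi_A(x, y) = x^T A y.
import numpy as np

A1 = np.array([[0.0, 1.0], [-1.0, 0.0]])    # Example 1: x1*y2 - x2*y1 (alternating)
A2 = np.array([[1.0, 0.0], [0.0, -1.0]])    # Example 2: x1*y1 - x2*y2 (symmetric)

rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    assert np.isclose(x @ A1 @ y, x[0]*y[1] - x[1]*y[0])
    assert np.isclose(x @ A2 @ y, x[0]*y[0] - x[1]*y[1])
    assert np.isclose(x @ A1 @ x, 0.0)               # Phi_{A1}(z, z) = 0 for every z
z = np.array([1.0, 1.0])
assert np.isclose(z @ A2 @ z, 0.0)                   # and Phi_{A2}((1,1), (1,1)) = 0
```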

In fact, the following theorem shows that, in principle, the preceding construction gives all possible bilinear forms on finite-dimensional vector spaces.

Theorem VII.7. Let V be an n-dimensional vector space over F, and let A = {a₁, ⋯, aₙ} be an ordered basis for V. If Φ is a bilinear form on V, let [Φ]_A be the n×n matrix whose (i, j) entry is equal to Φ(aᵢ, aⱼ). Then the map sending Φ to [Φ]_A defines a 1–1 correspondence between bilinear forms on V and n×n matrices over F.


The matrix [Φ]_A is called the matrix of Φ with respect to the ordered basis A.

Proof. The mapping is 1–1. Suppose that we are given two bilinear forms Φ and Ψ such that Φ(aᵢ, aⱼ) = Ψ(aᵢ, aⱼ) for all i and j (this is the condition for [Φ]_A and [Ψ]_A to be equal).

If v, w ∈ V, express these vectors as linear combinations of the basis vectors as follows:

$$v \;=\; \sum_i x_i\,a_i\,, \qquad\qquad w \;=\; \sum_j y_j\,a_j$$

Then by (Bi–1)–(Bi–3) we have

$$\Phi(v, w) \;=\; \sum_{i,j} x_i y_j\,\Phi(a_i, a_j) \;=\; \sum_{i,j} x_i y_j\,\Psi(a_i, a_j) \;=\; \Psi(v, w)$$

and since v and w are arbitrary we have Φ = Ψ.

The mapping is onto. If B is an n×n matrix and v, w ∈ V are as in the preceding paragraph, define

$$f_{B,A}(v, w) \;=\; \sum_{i,j} x_i y_j\,b_{i,j}\,.$$

This is well-defined because the coefficients of v and w with respect to A are uniquely determined. The proof that f_{B,A} satisfies (Bi–1)–(Bi–3) is a sequence of routine but slightly messy calculations, and it is left as an exercise. Given this, it follows immediately that B is equal to [f_{B,A}]_A.

CHANGE OF BASIS FORMULA. Suppose we are given a bilinear form Φ on an n-dimensional vector space V over F, and let A and B be ordered bases for V. In several contexts it is useful to understand the relationship between the matrices [Φ]_A and [Φ]_B. The equation relating these matrices is given by the following result:

Theorem VII.8. Given two ordered bases A and B, define a transition matrix P by the formula

$$b_j \;=\; \sum_i p_{i,j}\,a_i\,.$$

If Φ is a bilinear form on V as above, then we have

$$[\Phi]_{B} \;=\; {}^{\mathsf T}P\,[\Phi]_{A}\,P\,.$$

Proof. We only need to calculate Φ(bᵢ, bⱼ); by the equations above, we have

$$\Phi(b_i, b_j) \;=\; \Phi\!\left(\sum_k p_{k,i}\,a_k,\; \sum_m p_{m,j}\,a_m\right) \;=\; \sum_k p_{k,i}\left(\sum_m p_{m,j}\,\Phi(a_k, a_m)\right).$$

However, the coefficient of p_{k,i} is just the (k, j) entry of [Φ]_A P, and hence the entire summation is just the (i, j) entry of ᵀP[Φ]_A P, as claimed.
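The change of basis formula is convenient to verify numerically. In the sketch below, my own illustration assuming numpy, the j-th column of P holds the coordinates of bⱼ with respect to the basis A (taken here to be the standard basis of R³), exactly as in the defining relation bⱼ = Σᵢ pᵢⱼaᵢ.

```python
# Sketch (assumes numpy) verifying [Phi]_B = P^T [Phi]_A P for a random
# bilinear form on R^3; the basis A is the standard basis, so [Phi]_A is
# just the defining matrix of the form.
import numpy as np

rng = np.random.default_rng(2)
Phi_A = rng.standard_normal((3, 3))          # matrix of Phi in the basis A (no symmetry needed)
P = rng.standard_normal((3, 3))              # column j = coordinates of b_j in the basis A
while abs(np.linalg.det(P)) < 1e-6:          # make sure the b_j really form a basis
    P = rng.standard_normal((3, 3))

# Compute [Phi]_B entry by entry:  Phi(b_i, b_j) = (P e_i)^T Phi_A (P e_j).
Phi_B = np.array([[P[:, i] @ Phi_A @ P[:, j] for j in range(3)] for i in range(3)])

assert np.allclose(Phi_B, P.T @ Phi_A @ P)
print("change of basis formula verified")
```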

Definition. A bilinear form Φ is symmetric if Φ(x, y) = Φ(y, x) for all x and y.

Theorem VII.9. Let Φ and A be as in Theorem 7. Then Φ is symmetric if and only if [Φ]_A is a symmetric matrix.


Proof. Suppose that Φ is symmetric. Then Φ(ai,aj) = Φ(aj,ai) for all i and j, and this implies that [Φ]A is a symmetric matrix.

Conversely, if [Φ]_A is symmetric and v, w ∈ V (with the same notation as in Theorem 7), then by Theorem 7 we have

$$\Phi(v, w) \;=\; \sum_{i,j}\big([\Phi]_{A}\big)_{i,j}\,x_i y_j\,, \qquad \Phi(w, v) \;=\; \sum_{i,j}\big([\Phi]_{A}\big)_{j,i}\,x_i y_j\,.$$

Since [Φ]_A is symmetric, the two summations are equal, and therefore we must have Φ(v, w) = Φ(w, v) for all v and w.

We have introduced all of the preceding algebraic machinery in order to prove the following result:

Theorem VII.10. Let F be a field in which 1 + 1 ≠ 0, and let A be a symmetric n×n matrix over F. Then there is an invertible matrix P such that ᵀPAP is a diagonal matrix.

This will be a consequence of the next result:

Theorem VII.11. Let Φ be a symmetric bilinear form on an n-dimensional vector space V over a field F for which 1 + 1 ≠ 0. Then there is an ordered basis v₁, ⋯, vₙ of V such that Φ(vᵢ, vⱼ) = 0 if i ≠ j and Φ(vᵢ, vᵢ) = dᵢ for suitable scalars dᵢ ∈ F.

Proof that Theorem 11 implies Theorem 10. Define a bilinear form Φ_A as in Example 3 above. By construction [Φ_A]_U = A, where U is the ordered basis of standard unit vectors. On the other hand, if V is the ordered basis obtained from Theorem 11, then [Φ_A]_V is a diagonal matrix. Apply Theorem 8 with Φ = Φ_A, A = U, and B = V.

Proof of Theorem 11. If dim V = 1, the result is trivial. Assume by induction that the result holds for vector spaces of dimension n − 1.

CASE 1. Suppose that Φ(x, x) = 0 for all x. Then Φ(x, y) = 0 for all x and y because we have

$$\Phi(x, y) \;=\; \tfrac{1}{2}\,\big(\,\Phi(x+y,\,x+y) \;-\; \Phi(x, x) \;-\; \Phi(y, y)\,\big)$$

and consequently [Φ]_A = 0 for every ordered basis A.

CASE 2. Suppose that Φ(v, v) ≠ 0 for some v. Let W be the set of all x ∈ V such that Φ(x, v) = 0.⁷ We claim that W + F·v = V and W ∩ F·v = {0}. The second assertion is trivial because Φ(v, c·v) = 0 implies that c·Φ(v, v) = 0; since Φ(v, v) ≠ 0, this can only happen if c = 0, so that c·v = 0. To prove the first assertion, we observe that for arbitrary x ∈ V the vector

$$\Pi(x) \;=\; x \;-\; \frac{\Phi(x, v)}{\Phi(v, v)}\,v$$

lies in W (to verify this, compute Φ(Π(x), v) explicitly).⁸ The conditions on W and F·v together with the dimension formulas imply that dim W = n − 1.

⁷ If Φ is the usual dot product on Rⁿ, then W is the hyperplane through 0 that is perpendicular to the line 0v.

Consider the form Ψ obtained by restricting Φ to W; it follows immediately that Ψ is also symmetric. By the induction hypothesis there is a basis w₁, ⋯, wₙ₋₁ for W such that Φ(wᵢ, wⱼ) = 0 if i ≠ j. If we adjoin v to this set, then by the conditions on W and F·v we obtain a basis for V. Since Φ(v, wⱼ) is zero for all j by the definition of W, it follows that the basis for V given by v together with w₁, ⋯, wₙ₋₁ will have the desired properties.

The proof above actually gives an explicit method for finding a basis with the required properties. Specifically, start with a basis v₁, ⋯, vₙ for V. If some vᵢ has the property Φ(vᵢ, vᵢ) ≠ 0, rearrange the vectors so that the first one has this property. If Φ(vᵢ, vᵢ) = 0 for all i, then either Φ = 0 or else some value Φ(vᵢ, vⱼ) is nonzero (otherwise Φ = 0 by Theorem 7). Rearrange the basis so that Φ(v₁, v₂) ≠ 0, and take a new basis {vᵢ′} with v₁′ = v₁ + v₂ and vᵢ′ = vᵢ otherwise. Then Φ(v₁′, v₁′) ≠ 0, and thus in all cases we have modified the original basis to one having this property.

Now we modify the vᵢ′ so that v₁″ = v₁′ and Φ(vᵢ″, v₁″) = 0 if i > 1. Specifically, if i ≥ 2 let

$$v_i'' \;=\; v_i' \;-\; \frac{\Phi(v_i', v_1')}{\Phi(v_1', v_1')}\,v_1'\,.$$

Having done this, we repeat the construction for w₁, ⋯, wₙ₋₁ for W with wᵢ = vᵢ₊₁″. When computing explicit numerical examples, it is often convenient to "clear denominators" and multiply vᵢ″ by Φ(v₁′, v₁′); this is particularly true when the entries Φ(vᵢ, vⱼ) of the matrix are integers (as in Exercise 2 below).
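Here is a direct implementation sketch of this procedure, my own illustration using sympy for exact arithmetic (the function name congruence_diagonalize and the sample matrix are hypothetical); it returns an invertible P with ᵀPAP diagonal, which is exactly the content of Theorem VII.10.

```python
# Sketch (assumes sympy) of the diagonalization procedure described above:
# column operations on P (i.e. changes of basis) are applied until P^T A P
# is diagonal.  This is a constructive version of Theorem VII.10.
import sympy as sp

def congruence_diagonalize(A):
    """Return an invertible P with P.T * A * P diagonal (A symmetric, 1 + 1 != 0)."""
    A = sp.Matrix(A)
    n = A.shape[0]
    P = sp.eye(n)
    for k in range(n):
        M = P.T * A * P                       # the form in the current basis
        # Step 1: arrange for a nonzero diagonal entry in position k.
        if M[k, k] == 0:
            pivot = next((j for j in range(k + 1, n) if M[j, j] != 0), None)
            if pivot is not None:
                P.col_swap(k, pivot)
            else:
                off = next(((i, j) for i in range(k, n) for j in range(i + 1, n)
                            if M[i, j] != 0), None)
                if off is None:
                    continue                  # remaining block is zero; nothing to do
                i, j = off
                P[:, i] = P[:, i] + P[:, j]   # v_i' = v_i + v_j gives Phi(v_i', v_i') = 2 Phi(v_i, v_j) != 0
                P.col_swap(k, i)
            M = P.T * A * P
        # Step 2: clear the rest of row/column k (the Pi(x) construction above).
        for i in range(k + 1, n):
            P[:, i] = P[:, i] - (M[k, i] / M[k, k]) * P[:, k]
            M = P.T * A * P
    return P

A = sp.Matrix([[0, 1, 2], [1, 0, 3], [2, 3, 0]])
P = congruence_diagonalize(A)
print(P.T * A * P)                            # a diagonal matrix congruent to A
assert (P.T * A * P).is_diagonal() and P.det() != 0
```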

EXERCISES

1. Prove that the map sending bilinear forms to matrices in Theorem 7 is surjective.

2. Find an invertible matrix P such that ᵀPAP is diagonal, where A is each of the following matrices with real entries:

$$\begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & 1 & 1 \end{pmatrix}\,, \qquad \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 2 \end{pmatrix}\,, \qquad \begin{pmatrix} 2 & 1 & 3 \\ 1 & 0 & 1 \\ 3 & 1 & 1 \end{pmatrix}\,, \qquad \begin{pmatrix} 0 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 1 \end{pmatrix}\,.$$

3. A symmetric bilinear form Φ on an n-dimensional vector space V over a field F is said to be nondegenerate if for each nonzero x ∈ V there is some y ∈ V such that Φ(x, y) ≠ 0. Given an ordered basis A for V, show that Φ is nondegenerate if and only if the matrix [Φ]_A is invertible. [Hint: Suppose that x satisfies Bx = 0, where B is the matrix in the previous sentence, and let v = Σᵢ xᵢaᵢ. If w = Σⱼ yⱼaⱼ, explain why ᵀyBx = Φ(v, w) and how this is relevant.]

⁸ If Φ is the ordinary dot product, then Π(x) is the foot of the perpendicular dropped from x to the plane determined by W, and hence x − Π(x) is perpendicular to W.


4. Projective classification of hyperquadrics

A standard exercise in plane and solid analytic geometry is the classification of conics and quadrics up to changes of coordinates given by rotations, reflections and translations. Stated differently, the preceding is the classification of such figures up to finding a rigid motion sending one to the other.

An account of the classification for arbitrary dimensions appears on pages 257–262 of Birkhoff and MacLane, Survey of Modern Algebra (3rd Ed.). A related classification (up to finding an affine transformation instead of merely a rigid motion) is discussed in Exercise 4 below.

In this section we are interested in the corresponding projective problem involving projective hyperquadrics and (projective) collineations.

Throughout this section we assume that F is a field in which 1 + 1 ≠ 0. Furthermore, if Σ ⊂ FPⁿ is a hyperquadric, then we shall use SingSet(Σ) to denote its subset of singular points.

We shall begin with an important observation.

Theorem VII.12. Let T be a projective collineation of FPⁿ. Then a subset Σ ⊂ FPⁿ is a hyperquadric if and only if T[Σ] is. Furthermore, the singular sets of these hyperquadrics satisfy

$$T\big[\,\mathrm{SingSet}(\Sigma)\,\big] \;=\; \mathrm{SingSet}\big(\,T[\Sigma]\,\big)$$

and if Tang_X(Σ) denotes the tangent hyperplane to Σ at a nonsingular point X, then

$$T\big[\,\mathrm{Tang}_X(\Sigma)\,\big] \;=\; \mathrm{Tang}_{T(X)}\big(\,T[\Sigma]\,\big)\,.$$

Proof. Let A be a symmetric (n+1)×(n+1) matrix which defines the hyperquadric Σ. According to Theorem VI.14, there is an invertible linear transformation C of F^{n+1,1} such that T(F·ξ) = F·C(ξ) for all nonzero vectors ξ ∈ F^{n+1,1}. Let B be the matrix of C in the standard basis. Then X lies in T[Σ] if and only if T⁻¹(X) lies in Σ. If ξ is a set of homogeneous coordinates for X, then the conditions in the preceding sentence are equivalent to

$${}^{\mathsf T}\xi\;\big({}^{\mathsf T}B^{-1}\,A\,B^{-1}\big)\,\xi \;=\; 0$$

and the displayed equation is equivalent to saying that X lies on the hyperquadric associated to the (symmetric) matrix ᵀB⁻¹AB⁻¹.

To check the statement about singular points, note that a point X lies in SingSet(Σ) if and only if X has homogeneous coordinates ξ such that ᵀξA = 0, and the latter is equivalent to

$${}^{\mathsf T}\xi\;{}^{\mathsf T}B\;{}^{\mathsf T}B^{-1}\,A\,B^{-1} \;=\; 0$$

which in turn is equivalent to

$${}^{\mathsf T}(B\xi)\cdot\big({}^{\mathsf T}B^{-1}\,A\,B^{-1}\big) \;=\; 0\,.$$

To check the statement on tangent hyperplanes, note that Y lies on the tangent hyperplane to Σ at X if and only if there are homogeneous coordinates ξ for X and η for Y such that ᵀξAη = 0, and the latter is equivalent to

$${}^{\mathsf T}\xi\;{}^{\mathsf T}B\;{}^{\mathsf T}B^{-1}\,A\,B^{-1}\,B\eta \;=\; 0$$

which in turn is equivalent to

$${}^{\mathsf T}(B\xi)\cdot\big({}^{\mathsf T}B^{-1}\,A\,B^{-1}\big)\,(B\eta) \;=\; 0\,.$$

The latter is equivalent to saying that T(Y) is in the tangent hyperplane to T[Σ] at T(X).
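The matrix bookkeeping in this proof can be checked numerically: if T is induced by the invertible matrix B, then T[Σ_A] is the hyperquadric of ᵀB⁻¹AB⁻¹, and points of Σ_A are carried onto it. A small sketch, my own illustration assuming numpy:

```python
# Sketch (assumes numpy): if T is the collineation induced by the invertible
# matrix B, then T[Sigma_A] is the hyperquadric defined by  B^{-T} A B^{-1}.
import numpy as np

A = np.diag([1.0, 1.0, -1.0])                      # the projective unit circle
rng = np.random.default_rng(3)
B = rng.standard_normal((3, 3))                    # with probability 1 this is invertible
Binv = np.linalg.inv(B)
A_transformed = Binv.T @ A @ Binv                  # matrix of T[Sigma_A]

# Take points on Sigma_A (the circle at several angles) and push them through T.
for theta in np.linspace(0.0, 2 * np.pi, 7):
    xi = np.array([np.cos(theta), np.sin(theta), 1.0])     # xi^T A xi = 0
    assert abs(xi @ A @ xi) < 1e-12
    eta = B @ xi                                            # homogeneous coordinates of T(X)
    assert abs(eta @ A_transformed @ eta) < 1e-9            # T(X) lies on T[Sigma_A]
```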

Definition. Two hyperquadrics Σ and Σ′ are projectively equivalent if there is a projective collineation T such that T[Σ] = Σ′. We sometimes write this relation as Σ ∼ Σ′. It is clearly an equivalence relation, and the main goal of this section is to understand this relation when F is the real or complex numbers.

We shall first describe some necessary and sufficient conditions for the projective equivalence of hyperquadrics.

Theorem VII.13. Let Σ be a hyperquadric in FPⁿ, and let T be a projective collineation of FPⁿ. Then the following hold:

(i) The dimensions of the geometrical subspaces of singular points of Σ and T[Σ] must be equal.

(ii) If Σ contains no geometrical subspace of dimension r, then neither does T[Σ].

Proof. (i) By definition, SingSet(Σ) is the set of all X whose homogeneous coordinates ξ satisfy ᵀξA = 0, and hence SingSet(Σ) is a geometrical subspace. Now Theorem 12 implies that T[SingSet(Σ)] = SingSet(T[Σ]), and hence

$$\dim \mathrm{SingSet}(\Sigma) \;=\; \dim T\big[\mathrm{SingSet}(\Sigma)\big] \;=\; \dim \mathrm{SingSet}\big(T[\Sigma]\big)\,.$$

(ii) Suppose Q ⊂ T[Σ] is an r-dimensional geometrical subspace. Since T⁻¹ is also a projective collineation, the set

$$T^{-1}[Q] \;\subset\; T^{-1}\big[\,T[\Sigma]\,\big] \;=\; \Sigma$$

is also an r-plane.

Theorem VII.14. Suppose that Σ and Σ′ are hyperquadrics which are defined by the symmetric matrices A and B respectively. Assume that there is an invertible matrix C and a nonzero constant k such that B = k·ᵀCAC. Then Σ and Σ′ are projectively equivalent.

Proof. Let T be the projective collineation defined by C⁻¹, and if X ∈ FPⁿ let ξ be a set of homogeneous coordinates for X. Then by Theorem 12 we have

$$T[\Sigma] \;=\; \{\,X \mid {}^{\mathsf T}\xi\,{}^{\mathsf T}C A C\,\xi = 0\,\} \;=\; \{\,X \mid {}^{\mathsf T}\xi\,(k^{-1}B)\,\xi = 0\,\} \;=\; \{\,X \mid k^{-1}\big({}^{\mathsf T}\xi B\xi\big) = 0\,\} \;=\; \Sigma'\,.$$

NOTATION. Let D_r be the n×n diagonal matrix (n ≥ r) with ones in the first r entries and zeros elsewhere, and let D_{p,q} denote the n×n diagonal matrix (n ≥ p+q) with ones in the first p entries, (−1)'s in the next q entries, and zeros elsewhere.

REMARKS. 1. If A is a symmetric matrix over the complex numbers, then for some invertible matrix P the product ᵀPAP is D_r for some r. For Theorem 10 guarantees the existence of an invertible matrix P₀ such that A₁ = ᵀP₀AP₀ is diagonal. Let P₁ be the diagonal matrix whose entries are square roots of the corresponding nonzero diagonal entries of A₁, and ones in the places where A₁ has zero diagonal entries. Then the product P = P₀P₁⁻¹ has the desired properties. This uses the fact that every element of the complex numbers C has a square root in C, and in fact the same argument works in every field F which is closed under taking square roots.


2. If A is a symmetric matrix over the real numbers, then for some invertible matrix P the product ᵀPAP is D_{p,q} for some p and q. As in the preceding example, choose an invertible matrix P₀ such that A₁ = ᵀP₀AP₀ is diagonal. Let P₁ be the diagonal matrix whose entries are square roots of the absolute values of the corresponding nonzero diagonal entries of A₁, and ones in the places where A₁ has zero diagonal entries. Then the product P = P₀P₁⁻¹ has the desired properties. The need for more complicated matrices arises because over R one only has square roots of nonnegative numbers, and if x ∈ R then either x or −x is nonnegative.
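The rescaling step in these remarks is short enough to spell out. The sketch below is my own illustration (assuming numpy, and assuming a diagonalizing P₀ has already been found, for instance by the procedure described at the end of Section 3); it builds P₁ from square roots of absolute values and checks that ᵀPAP has only the entries +1, −1 and 0.

```python
# Sketch (assumes numpy) of Remark 2: given A1 = P0^T A P0 diagonal over R,
# rescale by P1 = diag(sqrt(|d_i|) or 1) so that P = P0 P1^{-1} yields a
# matrix P^T A P whose diagonal entries are only +1, -1 and 0.
import numpy as np

def rescale_to_signature_form(A, P0, tol=1e-12):
    d = np.diag(P0.T @ A @ P0)                       # diagonal entries of A1
    scale = np.where(np.abs(d) > tol, np.sqrt(np.abs(d)), 1.0)
    return P0 @ np.diag(1.0 / scale)                 # P = P0 * P1^{-1}

# Example: a diagonal A needs no P0, so take P0 = I.
A = np.diag([4.0, -9.0, 0.25, 0.0])
P = rescale_to_signature_form(A, np.eye(4))
print(np.round(P.T @ A @ P, 12))   # diag(1, -1, 1, 0); after permuting coordinates this is D_{2,1}
```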

The preceding remarks and Theorems 12–14 yield a complete classification of hyperquadrics in FPn up to projective equivalence if F is either R or C. We shall start with the complex case, which is easier.

Theorem VII.15. Let Γ_r ⊂ CPⁿ be the hyperquadric defined by the matrix D_r described above. Then every nonempty hyperquadric in CPⁿ is projectively equivalent to Γ_r for some uniquely determined value of r.

Proof. By Remark 1 above and Theorem 14, we know that an arbitrary nonempty hyperquadric Σ is projectively equivalent to Γ_r for some r. It suffices to show that if Γ_r and Γ_s are projectively equivalent then r = s.

By the preceding results we know that dim SingSet(Γ_r) is the dimension of the subspace of all X whose homogeneous coordinates ξ satisfy ᵀξD_r = 0, and the dimension of that subspace is equal to n − r + 1. Therefore, if Γ_r and Γ_s are projectively equivalent then we must have n − r + 1 = n − s + 1, which implies that r = s; thus there is only one hyperquadric of this type which can be equivalent to Σ, and uniqueness follows.

The preceding argument goes through if C is replaced by an arbitrary field F which is closed under taking square roots.

Over the real numbers, the classification is somewhat more complicated but still relatively simple.

Theorem VII.16. Let Γ_{p,q} ⊂ RPⁿ be the hyperquadric defined by the diagonal matrix D_{p,q} described above. Then every nonempty hyperquadric in RPⁿ is projectively equivalent to Γ_{p,q} for some uniquely determined values of p and q such that p ≥ q.

Proof. As in the proof of the preceding theorem, by Theorem 14 and Remark 2 we know that an arbitrary projective hyperquadric is projectively equivalent to Γ_{p,q} for some p and q. This hyperquadric is represented by D_{p,q}; if we permute the homogeneous coordinates, we see that Γ_{p,q} is projectively equivalent to the hyperquadric defined by the matrix −D_{q,p}, and since the negative of this matrix defines the same hyperquadric it follows that Γ_{p,q} is projectively equivalent to Γ_{q,p}. Since either p ≥ q or q ≥ p, it follows that every hyperquadric is projectively equivalent to Γ_{u,v} for some u ≥ v.

To complete the proof, it will suffice to show that if Γ_{p,q} is projectively equivalent to Γ_{u,v} where p ≥ q and u ≥ v, then p + q = u + v and p = u. To see the first equality, note that the dimension of SingSet(Γ_{a,b}) is equal to n − (a+b) + 1 by the argument in the preceding theorem, and as in that proof we conclude that p + q = u + v.

To see the second equality, we shall characterize the integer p in Γ_{p,q} as follows.


(‡) The hyperquadric Γ_{p,q} contains a geometric subspace of dimension n − p but no such subspace of higher dimension.

This and the second part of Theorem 13 will combine to prove that if Γ_{p,q} is projectively equivalent to Γ_{u,v} where p ≥ q and u ≥ v, then we also have p = u.

An explicit geometrical subspace S of dimension n − p is given by the equations

$$x_i - x_{p+i} = 0 \quad (1 \le i \le q)\,, \qquad\qquad x_i = 0 \quad (q < i \le p)\,.$$

Consider the geometrical subspace T defined by

$$x_{p+1} \;=\; x_{p+2} \;=\; \cdots \;=\; x_{n+1} \;=\; 0\,.$$

This geometrical subspace is (p−1)-dimensional. Furthermore, if X ∈ T ∩ Σ (where Σ denotes Γ_{p,q}) has homogeneous coordinates (x₁, ⋯, xₙ₊₁), we have xᵢ = 0 for i > p, so that

$$\sum_{i \le p} x_i^2 \;=\; 0\,.$$

The latter implies that xᵢ = 0 for i ≤ p, and hence it follows that xᵢ = 0 for all i; this means that the intersection T ∩ Σ is the empty set.

Suppose now that S′ ⊂ Σ is a geometrical subspace of dimension ≥ n − p + 1. Then the addition law for dimensions combined with dim(S′ ⋆ T) ≤ n shows that S′ ∩ T ≠ ∅, and since S′ ⊂ Σ we would also have Σ ∩ T ≠ ∅. But we have shown that the latter intersection is empty, and hence it follows that Σ cannot contain a geometrical subspace of dimension greater than n − p, which is what we needed to show in order to complete the proof.

COMPUTATIONAL TECHNIQUES. Over the real numbers, there is another standard method for finding an equivalent hyperquadric defined by a diagonal matrix. Specifically, one can use the following diagonalization theorem for symmetric matrices to help find a projective collineation which takes a given hyperquadric to one of the given type:

Let A be a symmetric matrix over the real numbers. Then there is an orthogonal matrix P (one for which ᵀP = P⁻¹) such that ᵀPAP is a diagonal matrix. Furthermore, if λᵢ is the i-th entry of the diagonal matrix, then the i-th column of P is an eigenvector of A whose associated eigenvalue is equal to λᵢ.

This statement is often called the Fundamental Theorem on Real Symmetric Matrices, and further discussion appears on pages 51–52 of the following online notes:

http://math.ucr.edu/~res/math132/linalgnotes.pdf

If we combine the Fundamental Theorem on Real Symmetric Matrices with other material from this section, we see that the construction of a projective collineation taking the hyperquadric Σ_A defined by A to a hyperquadric defined by an equation of the form

$$\sum_i d_i\,x_i^2 \;=\; 0$$

reduces to finding the eigenvalues and eigenvectors of A. This approach is probably the most effective general method for solving problems like those in Exercise 3 below.
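In floating point the whole reduction can be carried out with one call to an eigenvalue routine. The sketch below, my own illustration assuming numpy, orthogonally diagonalizes a symmetric matrix and reads off the pair (p, q); since Σ_A = Σ_{−A}, the type is normalized so that p ≥ q. For the projective extension of the hyperbola xy = 1 it reports Γ_{2,1}, the class of the projective unit circle.

```python
# Sketch (assumes numpy): classify a real projective hyperquadric Sigma_A by
# orthogonally diagonalizing A (Fundamental Theorem on Real Symmetric
# Matrices) and counting positive / negative eigenvalues.  Since Sigma_A and
# Sigma_{-A} coincide, the type is Gamma_{p,q} with p >= q.
import numpy as np

def projective_type(A, tol=1e-9):
    eigenvalues, _ = np.linalg.eigh(A)               # A is assumed symmetric
    pos = int(np.sum(eigenvalues > tol))
    neg = int(np.sum(eigenvalues < -tol))
    return max(pos, neg), min(pos, neg)              # (p, q) with p >= q

# Projective extension of the hyperbola xy = 1 (affine polynomial xy - 1):
A_hyperbola = np.array([[0.0, 0.5, 0.0],
                        [0.5, 0.0, 0.0],
                        [0.0, 0.0, -1.0]])
print(projective_type(A_hyperbola))                  # (2, 1), i.e. Gamma_{2,1}

# Pair of intersecting lines x^2 - y^2 = 0 (a singular conic):
print(projective_type(np.diag([1.0, -1.0, 0.0])))    # (1, 1), i.e. Gamma_{1,1}
```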

SPECIALIZATION TO THE REAL PROJECTIVE PLANE. We shall conclude this section by restating a special case of Theorem 16 that plays a crucial role in Section 6.


Theorem VII.17. All nonempty nonsingular conics in RP² are projectively equivalent. In fact, they are equivalent to the projective extension of the affine unit circle, which is defined by the homogeneous coordinate equation x₁² + x₂² − x₃² = 0.

Proof. We must consider all Γ_{p,q} with p ≥ q and p + q = 3 (this is the condition for the singular set to be empty). The only possibilities for (p, q) are (2, 1) and (3, 0). However, Γ_{3,0} — the set of points whose homogeneous coordinates satisfy x₁² + x₂² + x₃² = 0 — is empty, so there is a unique possibility, and it is given by Γ_{2,1}, which is the projective extension of the affine unit circle.

EXERCISES

1. For each projective quadric in Exercise VII.1.3, determine the quadric in RP³ to which it is projectively equivalent.

2. Show that the number of projective equivalence classes of hyperquadrics in RPⁿ is equal to ¼(n+2)(n+4) if n is even and ¼(n+3)² if n is odd.

3. For each example below, find a projective collineation of RP² that takes the projectivizations of the following affine conics into the unit circle (with affine equation x² + y² = 1).

(i) The hyperbola xy = 4.

(ii) The parabola y = x².

(iii) The ellipse 4x² + 9y² = 36.

(iv) The hyperbola 4x² − 9y² = 36.

4. (a) What should it mean for two affine hyperquadrics in Rⁿ to be affinely equivalent?

(b) Prove that every affine hyperquadric in Rn is equivalent to one defined by an equation from the following list:

$$x_1^2 + \cdots + x_p^2 - x_{p+1}^2 - \cdots - x_r^2 \;=\; 0 \qquad (r \le n)$$
$$x_1^2 + \cdots + x_p^2 - x_{p+1}^2 - \cdots - x_r^2 + 1 \;=\; 0 \qquad (r \le n)$$
$$x_1^2 + \cdots + x_p^2 - x_{p+1}^2 - \cdots - x_r^2 + x_{r+1} \;=\; 0 \qquad (r < n)$$

See Birkhoff and MacLane, pp. 261–264, or Section V.2 of the online notes http://math.ucr.edu/~res/math132/linalgnotes.pdf for further information on the classification of affine hyperquadrics.

5. Let Γ be a nonempty nonsingular conic in RP². Prove that there are infinitely many points X such that no line through X is tangent to Γ, but if a point X ∉ Γ lies on a tangent line to Γ then there is also a second tangent line to Γ through X, and there is also a line through X which does not meet Γ. [Hint: Why does it suffice to prove these for the unit circle, and why are the statements true in that case? Note that for each point at infinity there is a tangent line to the unit circle.]
