This article is a summary of my book *A Numerical Approach to Real Algebraic Curves with the Wolfram Language* [1].

### 1. Introduction

The nineteenth century saw great progress in geometric (real) and analytic (complex) algebraic plane curves. Lacking the ability to carry out the large number of computations a concrete theory requires, the twentieth century abstracted this material into algebraic geometry. Ideas of ideals, rings, fields, varieties, divisors, characters, sheaves, schemes and many types of homology and cohomology arose. An added benefit of this approach is that it became possible to apply geometric techniques to other fields. Probably the most striking accomplishment of this abstract approach was the solution of Fermat's problem by Wiles and Taylor at the end of the century.

The plane geometric curve theory of the nineteenth century was collateral damage. All modern books on the subject want to follow the abstract approach, which raises the bar for those who want to know this theory. In addition, little attention was given to the concrete geometric theory. One goal of my book is to rectify this problem; substituting software for the abstract theory, we can give the theory in terms the non-mathematician can follow.

Since most algebraic curves have only finitely many rational points, I work numerically. The methods are constructive, heuristic and visual rather than the traditional theorem-proof style of contemporary mathematics. In fact, there is a fundamental oxymoron at the heart of my approach: a numerical algebraic curve is the solution set of an equation f(x, y) = 0, where f is a polynomial with integer or machine-number coefficients. Evaluating this polynomial at a point with machine-number coordinates gives a machine number on the left-hand side, while the right-hand side is a symbolic number, so actual equality is impossible. So my book is not an algebraic geometry book. Having worked during my career as a mathematician in both the abstract and numerical realms, I believe that while these approaches are incompatible, they can and should coexist within mathematics.

### 2. Lines

We will generally describe an algebraic plane curve by giving a polynomial in two variables with integer or real machine-number coefficients.

From an operational point of view, with an exception noted later, for a given curve f we accept the output of `NSolve[{f, g}, {x, y}]` and `FindRoot[{f, g}, {x, x0}, {y, y0}]`, where g is another such bivariate polynomial and x0, y0 are real machine numbers, as points on the curve f. In numerical work, we do not accept points as solutions to f = 0 based on the value of f at the point, known as the *residue*. A detailed explanation is in [1].
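The residue idea can be illustrated outside the Wolfram Language as well. The following is a hypothetical Python sketch, not the book's code: the elliptic curve, the line, and the Newton iteration are my own choices. We compute an intersection of a curve and a line numerically, then inspect the value of f at the computed point.

```python
# Hypothetical Python illustration of the "residue" discussed above.
# The curve f, the line, and the starting guess are my own example choices.

def f(x, y):
    # an elliptic curve, f(x, y) = 0
    return y**2 - x**3 + x

def line(x):
    # a line y = 0.3 x + 0.2 cutting the curve
    return 0.3 * x + 0.2

def h(x):
    # restriction of f to the line
    return f(x, line(x))

def hp(x):
    # derivative of h, for Newton's method
    return 2 * 0.3 * line(x) - 3 * x**2 + 1

x = 1.0
for _ in range(50):          # Newton iteration on the restricted equation
    x = x - h(x) / hp(x)

residue = f(x, line(x))      # value of f at the computed point
print(residue)               # tiny, but in general not exactly 0
```

The printed residue is at machine-epsilon scale; the point is accepted because it came from the root finder, not because the residue is literally zero.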

For example, suppose some calculation claims a certain point lies on a given curve. (If you have set values in your session for x, y and so on, now is the time to store them if needed and clear those symbols.)

We find a random line containing this point and use `FindRoot` to check the point of intersection of the line with the curve.

We see the residue is not zero.

But the point can be reconstructed from this intersection.

It checks.

The simplest example of an algebraic plane curve is a line. The first problem for lines is to find the equation of a line through two given points. We give our solution, found at the beginning of Chapter 1 of [1], as it will give the flavor of our approach to this subject.

Let (x₁, y₁), (x₂, y₂) be the given points. The desired equation is of the form

a x + b y + c = 0.

We thus consider the coefficients a, b, c as unknowns, but x₁, y₁, x₂, y₂ as known coordinates of the given points. So we have two equations in the three variables a, b, c.

But this system is underdetermined. It is also not symmetric in the unknowns, so we add a third equation with random coefficients to get the system

a x₁ + b y₁ + c = 0
a x₂ + b y₂ + c = 0        (1)
r a + s b + t c = 1

where r, s, t are random real numbers.
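The book solves this system in the Wolfram Language; as a rough Python sketch of the same linear algebra (the sample points (1, 2) and (3, 1) and all names here are my own), one can assemble the 3×3 system and solve it directly:

```python
import numpy as np

rng = np.random.default_rng(7)

def line_through(p1, p2):
    """Coefficients (a, b, c) of a x + b y + c = 0 through p1 and p2,
    using a random third equation r a + s b + t c = 1 as in system (1)."""
    (x1, y1), (x2, y2) = p1, p2
    r, s, t = rng.random(3)
    M = np.array([[x1, y1, 1.0],
                  [x2, y2, 1.0],
                  [r,  s,  t ]])
    # right-hand side (0, 0, 1) matches the three equations of system (1)
    return np.linalg.solve(M, np.array([0.0, 0.0, 1.0]))

a, b, c = line_through((1.0, 2.0), (3.0, 1.0))
print(a * 1.0 + b * 2.0 + c)   # essentially zero: the line passes through (1, 2)
```

Different random r, s, t give different scalings of the same line, which is exactly the behavior described below.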

Suppose the points are and . Here are the random reals.

Then the line is constructed as follows.

Perhaps this is not what you expected. But we are working with machine numbers so, particularly if this is not our final answer, we should not mind. If this still bothers us, we can always look for integers.

But system (1) gives more options. Suppose instead we were given the point and slope 2. We can change the second equation accordingly.

Since our original already had slope 2, we are now not surprised to get the same result. Now consider the possibility that our line was given parametrically.

This time we replace the second equation in (1) with the corresponding parametric data. We solve the new system.

This is the same answer, because again we have the same line.

We can put all of this into one program if we simply make the convention that a slope or direction vector is denoted by a triple with third coordinate 0. So here is our universal code for creating a line.

Our results will differ from the previous ones because we are now choosing the random numbers each run but normalizing the output. The advantage is that each run gives the same answer up to sign.

Computing a point far away by taking a large parameter value in our parametric equation, we get approximately a scalar multiple of a triple with third coordinate 0.

So we can consider this triple to be the *infinite point* on the line. But putting a scalar multiple into our function gave us the same thing, so these infinite points are *homogeneous*; that is, they can be multiplied by a nonzero scalar, giving the same infinite point. Note also that appending a third coordinate 1 to a coordinate pair *homogenizes* a Cartesian point of the plane.

In Chapter 5 of [1], we find that we have invented the *projective plane*. So that we do not get confused, we will henceforth call points (pairs) of our standard Cartesian plane *affine points* and the triples *projective points*.

The method for finding equations of lines can be generalized to find curves of degree n through sufficiently many general points. See [2] for the code (the a in the function's name stands for affine).

We do define two families of curves that are used extensively as examples in [1]. The first are *Gaussian curves*. We start with a single-variable polynomial p, typically with integer coefficients but possibly complex (Gaussian) integer coefficients. Replace x by x + i y; after expanding, the formal real part defines a curve. Gauss used this construction in his first and fourth proofs of the fundamental theorem of algebra, published 50 years apart.
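The construction is easy to sketch with exact arithmetic. The following Python/SymPy fragment is my own illustration (the book works in the Wolfram Language, and the sample polynomial t² − 1 is mine): substitute x + i y and keep the formal real part.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def gaussian_curve(p, var):
    """Gaussian curve of p: the formal real part of p(x + i y)."""
    w = sp.expand(p.subs(var, x + sp.I * y))
    return sp.expand(sp.re(w))

t = sp.Symbol('t')
g = gaussian_curve(t**2 - 1, t)   # from the sample polynomial p(t) = t^2 - 1
print(g)                           # x**2 - y**2 - 1, a hyperbola
```

Each complex zero of p becomes a real point where the Gaussian curve meets the corresponding imaginary-part curve, which is the idea behind Gauss's proofs.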

For example, the following is said to be Gauss’s original example for the fourth proof. Note that it has a singular point!

A second family of curves I call *Newton's hyperbolas*, indexed by a positive integer parameter.

### 3. Important Definitions

The *total degree* of a plane curve is an important invariant, but it is not quite as simple in the numerical case as it may seem. Small coefficients of the highest degrees matter little near the origin but strongly affect the asymptotic and infinite behavior of the curve. Therefore, we approach this symbolically.

Sometimes a little care is necessary to make sure that coefficients that are purely the result of roundoff error are not allowed to increase the degree; a judicious use of `Chop` may be required.

Because we are often working numerically, we use a slightly stronger criterion for a plane curve f to be called regular at a point p on f. The determinant of the Jacobian matrix of f and a line a x + b y + c at p, namely b f_x(p) − a f_y(p), is known as the *Jacobian determinant* of the intersection of the curve and line at p. We say p *is a regular point of f* if the Jacobian is not numerically zero for almost all pairs a, b of machine numbers. In practice, this can be checked by letting a, b be random real numbers. For a regular point p, a *tangent line* is defined as follows.
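As a rough Python sketch of this randomized test (the curve x² − y² and all function names are my own illustration, not the book's Wolfram code), we evaluate the Jacobian determinant for a few random lines:

```python
import numpy as np

def is_regular(grad_f, p, trials=5, tol=1e-9, rng=np.random.default_rng(0)):
    """Randomized regularity test: p is declared regular if the Jacobian
    determinant f_x * b - f_y * a is non-negligible for some random a, b."""
    fx, fy = grad_f(*p)
    for _ in range(trials):
        a, b = rng.standard_normal(2)
        if abs(fx * b - fy * a) > tol:
            return True
    return False

# f = x^2 - y^2 (two crossing lines); gradient (2x, -2y)
grad = lambda x, y: (2.0 * x, -2.0 * y)
print(is_regular(grad, (1.0, 1.0)))   # True: a regular point of the curve
print(is_regular(grad, (0.0, 0.0)))   # False: the node at the origin
```

The determinant vanishes for all a, b exactly when the gradient of f vanishes, which is why random choices suffice in practice.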

On the other hand, a point p of f is called *singular* if the Jacobian is zero at p for all numbers a, b. Again, in practice it is enough to check a random pair a, b.

An alert reader may notice that since we are working constructively, *regular* and *singular* are not logical negations of each other, but a practical test does distinguish regular from singular points.

An important kind of point for [1] is a *critical point*. A point on the curve f is critical if it is also on the curve defined by x f_y − y f_x = 0, where f_x, f_y are the partial derivatives. All real critical points of a curve can be found easily in practice by the following.

Unlike the conditions regular and singular, which are invariant under transformations such as translation, being a critical point is a positional property. Among the critical points are local extrema of the distance from the origin to a point on the curve and, by our definition, singular points. The most important thing about critical points is that *every affine topological component of a plane curve contains at least one critical point*. This means that from our simple function for finding critical points, we will be able to locate all components of the curve, no matter how small, even one-point components.
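The search can be sketched with exact arithmetic in Python/SymPy. This is my own illustration of the idea, assuming the stationarity condition x f_y − y f_x = 0 for the distance to the origin; the ellipse is a sample curve, not one from the book:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def critical_points(f):
    """Real points on f = 0 where the distance to the origin is stationary,
    i.e. where x*f_y - y*f_x = 0 (condition assumed in this sketch)."""
    g = x * sp.diff(f, y) - y * sp.diff(f, x)
    sols = sp.solve([f, g], [x, y], dict=True)
    return [(s[x], s[y]) for s in sols if s[x].is_real and s[y].is_real]

ellipse = x**2 / 4 + y**2 - 1
pts = critical_points(ellipse)
print(pts)   # the four axis points (+-2, 0) and (0, +-1)
```

For the ellipse, these are exactly the endpoints of the major and minor axes, the extrema of the distance to the center.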

Consider the following contrived example of a numerical cubic curve, which has an isolated point.

The point with the indicated coordinates is a one-point component of the curve `h1`.

The same idea allows us to find the closest point on a curve to a given point in the plane.

In this case, the closest point may be one invisible on a plot.

We may also find the infinite points of a curve. Here is code that is slightly different from [1] but avoids subroutines. This uses a random variable so that different runs give the infinite points in possibly a different order.

Here is an example using Newton hyperbola 376.
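The computation behind such an infinite-point routine can be sketched in Python/SymPy (the book's code is in the Wolfram Language; the hyperbola x y − 1 and the helper names here are mine): homogenize, set the new variable to zero, and solve the resulting top-degree form.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def infinite_points(f):
    """Infinite points of the curve f(x, y) = 0, as projective triples:
    homogenize f, set z = 0, and solve the highest-degree form."""
    d = sp.Poly(f, x, y).total_degree()
    F = sp.expand(z**d * f.subs({x: x / z, y: y / z}))   # homogenization
    top = F.subs(z, 0)                                   # top-degree part of f
    pts = [(r, 1, 0) for r in sp.solve(top.subs(y, 1), x)]  # directions, y != 0
    if sp.simplify(top.subs(y, 0)) == 0:                 # horizontal direction
        pts.append((1, 0, 0))
    return pts

pts = infinite_points(x * y - 1)
print(pts)   # [(0, 1, 0), (1, 0, 0)]: the two asymptotic directions
```

For the hyperbola x y = 1 these are the directions of the two asymptotes, as expected.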

### 4. Topology and Tracing

We start with an idea Gauss used in his 1849 proof of the fundamental theorem of algebra to characterize real plane curves. Given a bivariate polynomial f, Gauss considered the semialgebraic set where f > 0. *The algebraic curve f = 0 is the complete topological boundary of this positive set in the plane.*

Among other things, this nicely solves our conundrum as to the precise meaning of the curve when f is a polynomial with machine-number coefficients, as the inequality f > 0 does make sense numerically.

Another consequence of this definition is that for each regular point of the curve, a line different from the tangent line intersecting the curve at this point travels from the positive set to the negative set at that point. We will see later in this section that a curve defined by a square-free f has only finitely many singular points, so a contour plot gives a reasonable picture of the curve in a bounded region with appropriate scaling. Contour plots may miss large parts or all of the curve if the polynomial has a factor repeated an even number of times. Fortunately, if f is a polynomial with integer coefficients, then a built-in factoring function finds the repeated factors, and one can produce a square-free polynomial with the same curve. For machine-coefficient polynomials, there is a function given in Appendix 1 of [1] and in [2] that can check whether f is square free and, if not, produce a square-free polynomial giving the same curve.
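For integer coefficients, the square-free reduction can be sketched exactly in Python/SymPy. This is my own illustration using exact factorization (the book's machine-coefficient version works differently, via approximate GCDs); the sample polynomial with a squared factor is mine:

```python
import sympy as sp

x, y = sp.symbols('x y')

def square_free_part(f):
    """Product of the distinct irreducible factors of f, each to the
    first power: the square-free polynomial with the same zero set."""
    _, factors = sp.factor_list(f)
    out = sp.Integer(1)
    for base, _exponent in factors:
        out *= base          # drop the multiplicity, keep the factor
    return sp.expand(out)

f = (x + y - 1)**2 * (x - y)   # the factor (x + y - 1) is repeated
sf = square_free_part(f)
print(sf)                      # expanded form of (x + y - 1)*(x - y)
```

The squared factor is even, so a contour plot of f would miss the line x + y = 1 entirely; the square-free part restores it.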

This last paragraph also tells us that the complement of a square-free curve is two-colored, with the curve separating the colors. In particular, an algebraic real plane curve cannot have bifurcations [3]. That is, the following cannot be a plot of an algebraic curve.

There are always an even number of *branches* going in and out of singular points, an essential idea we will use in the next section.

For now, the main use of the Gauss point of view is that a square-free curve is oriented; that is, we can specify a direction of travel along the curve. In his proof, Gauss proposed "walking along the curve" with the positive set on our right. Essentially, we are traveling around topological components clockwise. As an aside, the curve Gauss was using is our Gaussian curve of the particular complex univariate polynomial that he was proving has a zero. Thinking of points of the plane as complex numbers, Gauss showed the walker would always stumble over a zero of the polynomial.

We implement this by noting that for regular points, this right-hand direction is given by the vector (−f_y, f_x), so we can use the following code (`g` stands for Gauss, `T` for tangent, and `vec` for vector).

Example: Consider the curve . (In the PDF and HTML versions, the graphic is not interactive.)

This leads to path tracing. In [1], we consider various methods, including one based on a built-in differential equation solver. Here, we use a very common method given by the following.

This function traces the curve from a starting point to an ending point in the chosen direction with steps of a given size. By default, it stops after 40 steps, but that can be changed by an option. If the direction is wrong for the starting point, this fails with a warning. The direction can be changed by replacing the curve f by −f. If there is a singular point in the path between the endpoints, then this will likely get hung up there. The key is that one can trace into a singularity, but not out. Normally, we use critical points for the endpoints, but we may need to add points between singularities.
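A minimal Python sketch of such a tracer (not the book's Wolfram code; the circle, the step size, and the sign convention (−f_y, f_x) for "positive set on the right" are my assumptions) takes Euler steps along the direction field and pulls each point back onto the curve with one Newton correction:

```python
import numpy as np

def trace(f, grad, p, steps=200, h=0.02):
    """Euler path tracer: step along the unit vector (-f_y, f_x), which
    keeps the region f > 0 on the right, then correct back onto f = 0."""
    pts = [np.asarray(p, float)]
    for _ in range(steps):
        xx, yy = pts[-1]
        fx, fy = grad(xx, yy)
        d = np.array([-fy, fx])
        d /= np.linalg.norm(d)           # unit step direction
        q = pts[-1] + h * d
        gx, gy = grad(*q)                # one Newton correction toward f = 0
        q = q - f(*q) * np.array([gx, gy]) / (gx * gx + gy * gy)
        pts.append(q)
    return np.array(pts)

f = lambda x, y: 1.0 - x * x - y * y     # unit circle, f > 0 inside
grad = lambda x, y: (-2.0 * x, -2.0 * y)
path = trace(f, grad, (0.0, 1.0))
print(np.abs(f(path[:, 0], path[:, 1])).max())   # path stays on the curve
```

Starting at the top of the circle, the first step moves in the +x direction, i.e. clockwise around the positive (interior) region, matching Gauss's walking convention.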

The bow curve is a good example of using path tracing.

We proceed as follows with the positive direction clockwise around the positive region, but always into the singularity.

In [1], we develop a number of utility functions to make tracing easier and do many examples, particularly of Gaussian curves. But the main point we are making is that a square-free curve can be reasonably approximated by a piecewise linear curve, and the instructions to do so can be given by a graph (network) consisting of the endpoints of each trace as vertices with the direction traveled, not traced, as directed edges. Here is the graph for the previous example.

### 5. A Classical Interlude

In this section, we touch base with contemporary algebraic geometry. We operate in the *real and complex projective planes*.

Our construction follows our discussion of lines in Section 2. A point in the real (or complex) projective plane is a triple of real (or complex) numbers, not all zero. Two such triples that differ by a nonzero real (or complex) multiple are considered the same. For example, if the third coordinate is nonzero, then loosely speaking, dividing through by it gives an affine point. We called triples with third coordinate 0 infinite points; in the projective plane they are just points. Just as we added a variable for the third coefficient in the equation of a line, in the projective plane we again add a third variable for equations. We call this *homogenization*. Now we want all of our monomials to have the same total degree. The next function homogenizes a bivariate polynomial.

That is, if we are working with a polynomial of degree d, a monomial x^i y^j is converted to x^i y^j z^(d−i−j). There is a 1-1 correspondence between two-variable monomials of total degree less than or equal to d and three-variable monomials of degree exactly d.
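A one-line Python/SymPy version of this homogenization (my own sketch; the parabola y − x² is a sample input, not from the article) makes the monomial correspondence concrete:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def homogenize(f):
    """Convert each monomial x^i y^j of f to x^i y^j z^(d-i-j),
    where d is the total degree of f."""
    d = sp.Poly(f, x, y).total_degree()
    return sp.expand(z**d * f.subs({x: x / z, y: y / z}))

F = homogenize(y - x**2)
print(F)              # y*z - x**2, homogeneous of degree 2
print(F.subs(z, 1))   # specializing at z = 1 recovers y - x**2
```

Substituting z = 1 undoes the construction, which is the specialization operation described below.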

In particular, if a triple is a zero of a homogeneous polynomial, then so is every nonzero scalar multiple of it, so being a zero is a property of the projective point. Thus *projective curves* are the zero sets of homogeneous polynomials in three variables. Also, in the bow curve example, the infinite point found earlier is a zero of the homogenization, which means it is an infinite point of the bow curve.

The opposite of homogenization is *specialization*. We can substitute the number 1 for any of the three variables in a homogeneous polynomial and get a two-variable polynomial that is in general nonhomogeneous. For example, if we homogenize and then specialize the new third variable at 1, we get back the original. But specializing either of the other variables at 1 produces a new polynomial.

We say f is a singular curve if any complex projective singular point exists. So f may be a singular curve even though there are no affine singularities. We do this partly to be consistent with the algebraic geometers, but also because singular curves (even with only infinite or complex singularities) do behave differently from regular curves.

Likewise, a curve is *reducible* if its homogenization is reducible over the complex numbers. Because homogenization preserves polynomial multiplication, the homogeneous polynomial is reducible if and only if all its specializations are reducible. It is fairly rare that a bivariate real polynomial has complex factors, but an important class of examples is the homogeneous functions in two variables. These always factor into linear factors, but some factors may be complex. Consider the next example.

This polynomial seems to be irreducible, but the plot appears to be a straight line rather than a cubic. Furthermore, it is singular. Think of this curve as the homogenization of a single-variable polynomial and specialize.

So this gives a complex numerical factorization of the cubic; the two complex factors are invisible on the contour plot.

Related to singular points are intersection points. Here is an example.

We say the intersection of these curves at this point has *multiplicity* 8. To explain what this means, particularly in the case of numerical curves, we use the formulation given in [4], which has been implemented numerically by Z. Zeng and the author. The implementation in the plane curve case is given in Appendix 1 of [1], the code and examples are in [2] and further information can be found in [5].

Intersections and singularities are connected, in that if two curves f and g intersect at a point p, then the product curve f g has a singularity at p. However, there is an important difference. If we perturb a curve with a singularity by adding some terms with very small coefficients, the singularity often goes away. But if we perturb both of the curves intersecting at p, then locally we have the same total intersection multiplicity. Here is an example.

What this shows is that singularities are numerically unstable, but intersections are numerically stable. Thus in [1], which emphasizes the numerical point of view, we avoid getting deeply into singularities, but we can deal with intersections.

This leads to the most important theorem of complex projective plane algebraic geometry, Bézout’s theorem.

Given complex algebraic curves f and g of degrees m and n respectively, with no common nonconstant factor, there are exactly m n complex projective points on both curves, counting intersection multiplicity.

There are many proofs in the literature, and we will not give a complete proof here or in [1]. The complicating issue is when there are infinite or multiple intersection points. The typical proof involves use of the *resultant*. In the case of possibly infinite but not multiple points, one approach is to apply a random projective transformation. The resulting curves then, with high probability, will have no infinite intersection points, and moreover, each intersection point will have a unique x coordinate. We can find these by applying the resultant with respect to y, which will then give a polynomial of degree m n with distinct and hence non-multiple zeros. One can easily find the y coordinates of the transformed system by substituting each x in either equation and solving for y. Finally, transforming back gives the solutions of the original system. We will study these transformations and find infinite intersection points by transforming, solving the affine system and transforming back in the next section.
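The resultant computation is easy to demonstrate in Python/SymPy. The circle and parabola below are my own sample curves (chosen so that no infinite or multiple intersections occur, so no preliminary transformation is needed):

```python
import sympy as sp

x, y = sp.symbols('x y')

f = x**2 + y**2 - 4    # a circle, degree 2
g = y - x**2           # a parabola, degree 2

# Eliminate y: the resultant is a univariate polynomial in x whose
# roots are the x coordinates of the intersection points.
r = sp.resultant(f, g, y)
xs = sp.solve(r, x)
print(sp.degree(r, x))   # 4 = 2 * 2, as Bezout's theorem predicts
print(len(xs))           # 4 complex roots (two of them real here)
```

Substituting each root back into either equation and solving for y recovers the intersection points, exactly as described above.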

As an example, consider the following Gaussian cubic and quadratic. There is one infinite solution. Applying the random projective linear transformation with matrix

gives a system of equations that leads to polynomials with rational coefficients, no infinite solutions and unique coordinates for the affine solutions.

Pictured are the original system and the transformed system. The indicated point in the second plot corresponds to the infinite solution of the first plot. Even in this simple example with equations and transformation using one-digit integers, the resultant polynomial was a rational polynomial with numerators of 17 digits and a denominator of 21 digits!

Later in [1], Bézout’s theorem is used in the discussion of Cayley’s theorem and Harnack’s theorem. In this section, we use Bézout’s theorem to argue the *singularity theorem*:

An irreducible curve of degree d has at most (d − 1)(d − 2)/2 complex projective singular points.

In [1] I take a constructive point of view and show instead that a curve of degree d with more than (d − 1)(d − 2)/2 singular points is reducible. In the argument, we produce a polynomial of smaller degree that meets the given curve in too many points, so it has a common factor with the given curve. In fact, in Appendix 1 of [1] we implement this argument with a function that factors the defining polynomial of any curve with more than (d − 1)(d − 2)/2 singularities.

Going back to the cubic, we homogenize and then specialize at a different variable.

The resulting plot shows the infinite points of in the specialization where the dashed line is the original infinite line. The original infinite points are named , , . The first critical point becomes the infinite point in the - plane, and the other two go to the points , .

So in the projective plane, infinite points look just like affine points. We can trace projective paths just like affine paths. Thus, we can form graphs just like in the affine case; in particular, the projective graphs now have the property that every vertex is even. This gives my *fundamental theorem of real plane projective algebraic curves*, henceforth called just the *fundamental theorem*, which completely describes the topology of the projective curve.

Let C be a real plane projective algebraic curve. Then there is a finite set of points of C, called vertices, and a set of edges between pairs of vertices satisfying:

- *Each edge corresponds to a continuous arc (or path) in C connecting the two vertices.*
- *Every singular point of C is a vertex.*
- *The interiors of any two arcs corresponding to edges are disjoint; that is, arcs only meet at vertices.*
- *Every point of C is either a vertex or an interior point of an arc.*
- *The graph is an Euler graph; that is, every vertex is even.*

In the previous example, the graph can be rendered as follows, where the vertex names refer to the original affine specialization.

Several comments are in order. First, *critical points* are not a concept in the projective plane; they come from some affine specialization. They make good vertices, but in this context are somewhat arbitrary. The same is true of the *direction* of the curve, but these graphs can be given a directed Euler graph structure. The fact that these are Euler graphs implies they can be decomposed into (not necessarily disjoint) directed circuits.

Already in his 1799 proof of the fundamental theorem of algebra, Gauss essentially calculates the infinite points of Gaussian curves coming from a monic polynomial of degree n as the directions making angles (2k + 1)π/(2n), k = 0, …, 2n − 1, with the positive x axis.

Since the Gaussian curve already approximately intersects large circles about the origin in the affine points (and their antipodal points) given by the first two coordinates, one can infer that the graph will have edges pointing directly out from boundary points on a large circle to the appropriate infinite point. Thus by treating any two antipodal points of the curve on a large circle about the origin as the same infinite vertex, we convert the bounded graph to the projective graph.

A more interesting example of a Gaussian curve is Gauss’s example, which has two components and a singular point.

We find the critical and boundary points on a circle of radius 4 and put them in an association for labeling.

We show a contour plot and the bounded graph. Then, by treating boundary points as infinite points and identifying pairs of antipodal points, here is the projective graph.

We mention the Riemann–Roch theorems, whose main subject is the concept of *genus*. These theorems are the backbone of complex curve theory and even real space curve theory. However, for real plane curves the important invariant of a curve is the degree, not genus, so we do not dwell on these theorems.

### 6. Fractional Linear Transformations

An important tool in [1] is utilizing the projective linear transformations. We follow Abhyankar [6] by keeping the discussion mainly in the affine realm, where it is easier to compute, viewing these as *fractional linear transformations*.

A *fractional linear transformation* is a function T defined by

T(x, y) = ( (a x + b y + c) / (g x + h y + k), (d x + e y + f) / (g x + h y + k) ),

where the coefficients a, b, c, d, e, f, g, h, k are real (or sometimes complex) numbers in the form of integers or machine numbers. Setting the common denominator g x + h y + k to zero defines a line, so the domain of T is the affine plane minus this line.

The notation suggests describing the fractional linear transformation compactly by the matrix:

    a  b  c
    d  e  f
    g  h  k

This is more compact as well as useful, as the fractional linear transformation is actually given by a two-step procedure using matrix multiplication: first multiply the homogenized point, (u, v, w) = M.(x, y, 1); then dehomogenize, T(x, y) = (u/w, v/w).
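The two-step procedure can be sketched in a few lines of Python (my own illustration, with a sample invertible matrix; the book's implementation is in the Wolfram Language):

```python
import numpy as np

def flt(M, p):
    """Fractional linear transformation with matrix M applied to an affine
    point p: multiply the homogenized point by M, then dehomogenize."""
    v = np.asarray(M, float) @ np.array([p[0], p[1], 1.0])
    if abs(v[2]) < 1e-14:
        raise ZeroDivisionError("point lies on the line mapped to infinity")
    return v[:2] / v[2]

M = [[1, 2, 0],
     [0, 1, 1],
     [1, 0, 3]]            # a sample invertible matrix
q = flt(M, (1.0, 1.0))
print(q)                   # [0.75 0.5 ]
```

The denominator check makes the excluded line explicit: points with w = 0 are sent to the infinite line.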

In the Wolfram Language, this becomes a short function.

To the extent that we want to work completely in the affine domain, we note that the Wolfram Language also includes fractional linear transformation under the name *linear fractional transformation*. So one can also use the Wolfram Language to evaluate a fractional linear transformation.

Here is an example.

In [1], to keep things simple we assume the matrix is invertible. Matrix multiplication corresponds to composition of transformations; in particular, since our matrices are invertible, so are our fractional linear transformations.

Somewhat unique to [1], we have our transformations work on curves as well as points.

The fractional linear transformation takes the curve f (i.e. the bivariate polynomial f) to a curve g such that g(T(p)) = 0 whenever f(p) = 0.

Here is an example using the transformation defined before.

The relationship between the point transformation and the curve transformation is shown by the following example; one maps points to points and the other maps curves to curves. The image of a point on a curve under the point map is a point of the image of the curve.

In this case, the transformation takes the circle to a conic, a parabola. One can use the various transformations given by the Wolfram Language. We provide some additional ones in [2], such as a transformation taking a given line to another line and the reflection about a given line as Euclidean transformations, as well as an affine transformation. Here is an example.

More importantly, we have two fractional linear transformations that act on the projective plane. The first takes a given infinite point to the origin and the original infinite line to the y axis. The second specializes the projective plane by removing a given line from the affine plane and making it the infinite line; the new y axis is the original infinite line.

As an example, we are interested in the behavior of the infinite point of the preceding curve.

The transformation puts the infinite point at the origin of this plot, which shows the infinite line as the y axis. It appears that the parabola is actually tangent to the infinite line at the infinite point. To check, we can calculate the tangent line to the transformed curve at the origin and see that it is the y axis, that is, x = 0.

There are various alternate versions of these functions to handle working projectively. For example, one version accepts infinite points as input or returns them as output.

The main application is that we can now find all complex projective singular points or intersection points in one step by picking a random line that, with high probability, will not go through any of the finitely many singular or intersection points. For details, see [1].

### 7. Applications to Geometry

In [1], the theory so far is applied to recover known results in lower-dimensional geometry. First we consider nonsingular conics in the form

a x² + b x y + c y² + d x + e y + f = 0,

where the coefficients are integers or machine numbers. We identify them as to type (hyperbola, parabola, ellipse) and write them in standard form. We parameterize them by rational or trigonometric functions, find their foci and directrices or, conversely, construct them from arbitrary foci and another appropriate value such as the semilatus rectum.

We then discuss the numerical theory for nonsingular cubics. Unlike the number theory case, which is one of the most difficult subjects in mathematics, the numerical case is very simple. We give a function to find the numerical inflection points; then, with a choice of inflection point, we have a deterministic black-box function to calculate the Weierstrass normal form and j-invariant. The j-invariant almost completely classifies numerical cubics relative to fractional linear transformations, that is, relative to the real projective linear group: there are two conjugacy classes for each value of j. Under the complex projective linear group, the classification is complete.

We end with Cayley's theorem: an irreducible curve of degree d with (d − 1)(d − 2)/2 double points has a rational parameterization. This means that with parameter t, the parameterization components have the form of a polynomial in t divided by another polynomial in t. The coefficients are not, however, expected to be rational numbers; Cayley only promises algebraic numbers. Thus in practice, this parameterization works best with machine-number coefficients. We illustrate by parameterizing the hypocycloid.

The gap occurs because we only plotted the parameter on a closed interval; in theory it should run over the whole real line. Details of how the parametric functions were calculated are in [1].

### 8. The Möbius Band Model of the Real Projective Plane

Topologists often think of the real projective plane as a Möbius band where the entire outer boundary is squashed to the affine origin. Alternatively, the Möbius band can be viewed as the real projective plane with a tiny disk about the affine origin removed, the boundary of that disk being the boundary of the Möbius band. In either case, the center line of the band is the *infinite line*.

It is common to construct a Möbius band out of a strip of paper. Here is a slightly different but useful way, shown in Figure 1 by a physical deconstruction: cut from a boundary point to the center (infinite) line, then cut around the center line.

**Figure 1. **Constructing a Möbius band.

This gives a long skinny strip that we can identify with the real projective plane shown in Figure 2. The vertical yellow lines are the negative and positive axes, and the standard quadrants of the affine plane are numbered in Roman numerals.

**Figure 2. **The real projective plane.

We implement the mappings from the projective plane to this strip, called the *rectangular hemisphere* in [1] for reasons given there, and from the strip to the Möbius band by the following functions.

A simple example is a hyperbola; we give the construction. Unfortunately, even this simple example takes up a great deal of space, so we will just get started. We consider the part of this hyperbola in the second quadrant of the affine plane and plot it on the Möbius band. We start at one infinite point and trace to the other infinite point, which is an ambiguous point.

The affine part is well known, so we inspect the obvious infinite points.

A technicality is that the construction uses the line function, which could randomly differ by a constant multiple. We need a specific choice, so we fix its value.

Again, one axis represents the infinite line, and the origin the infinite point.

To connect this plot to the affine curve, find the points where the curve intersects the circle of radius 1.

Now map these back to the affine plane to see that it is the first point in the list that is related to a point in the second quadrant.

So the part of the hyperbola in the second quadrant can be traced using two parts: the part from the intercept to the first point and the image of the part from there to the infinite point.

We see no error messages, so we assume the tracing went correctly. Now apply .

Here we get some warning messages. We have to set the ambiguous points correctly and can then draw Figure 3.

So we have drawn this section of the hyperbola on the rectangular hemisphere.

**Figure 3.** The part of the hyperbola in the second quadrant on the rectangular hemisphere.

The reader may wish to attempt the other sections of the hyperbola; the one in the third quadrant is similar, and the parts in the first and fourth quadrants can be done together since the intercept does not give an ambiguity (Figure 4).

**Figure 4.** The full plot of the hyperbola on the hemisphere looks like this.

Finally, we lift to the actual Möbius band using the functions defined earlier (Figure 5).

**Figure 5.** The hyperbola on a Möbius band.

This last example was simple! From now on we just show the final output (Figure 6).

**Figure 6.** Here are two Möbius plots of lines, the first a line through the origin, and then a typical line.

Next, Figure 7 shows two affinely parallel lines meeting at an infinite point and three circles; the black one contains the origin in its interior.

**Figure 7.** Affinely parallel lines and three circles.

In Figure 8, we plot the rational function . This has two infinite points: a singular one at and a regular one at .

**Figure 8.** Plot of a rational function.

Experimenting with these plots, we see, as the fundamental theorem tells us, that these curves consist of loops, that is, simple closed curves. Draw these yourself using the pattern in our hyperbola example that stops at the rectangle, print it out (preferably in landscape orientation with a smaller aspect ratio), cut it out, then twist and tape together the two copies of the infinite line to make the Möbius band. Now cut out a loop. Two things can happen: either you get two pieces, one of them topologically a disk and the other not, or only one piece, as in the classic example of cutting a Möbius band along the center line.

In the first case, we call the loop an *oval*, and the complementary piece shaped like a disk is called its *interior*. In the other case, we call the loop a *pseudo-line*. Notice that both kinds of lines plotted in Figure 6 are pseudo-lines. One easy way to tell the two apart, without the trouble of constructing a physical Möbius band, is that an oval meets the infinite line (or any other line for that matter, since up to fractional linear transformations all lines are the same) in an even number of points (possibly zero), while a pseudo-line meets the infinite line in an odd number of points. Again, since in the projective plane all lines are equivalent, two pseudo-lines always meet in an odd number of points, in particular in at least one.

From Bézout’s theorem, a curve of even degree meets any line in an even number of points. A consequence is that *a nonsingular curve can contain at most one pseudo-line; further, if the degree is even, each loop of the curve must be an oval. On the other hand, each nonsingular curve of odd degree must contain exactly one pseudo-line and possibly some ovals.*
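The parity criterion is easy to check numerically. Here is a small sketch in Python (the book itself works in the Wolfram Language, and the function names below are only illustrative): it counts the real intersections of a curve with random affine lines, using the unit circle as an even-degree oval and an affine line as a pseudo-line. Exact tangencies and parallels, which occur with probability zero, are ignored.

```python
import numpy as np

def real_intersections(restrict, n_lines=5, seed=0):
    """Count real intersections of a plane curve with random affine lines.

    restrict(a, b) must return the coefficients (highest degree first)
    of the curve's equation restricted to the line y = a*x + b.
    """
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(n_lines):
        a, b = rng.normal(size=2)
        roots = np.roots(restrict(a, b))
        # a root is a real intersection when its imaginary part vanishes
        counts.append(int(np.sum(np.abs(roots.imag) < 1e-9)))
    return counts

# The unit circle x^2 + y^2 - 1 = 0 restricted to y = a*x + b:
circle = lambda a, b: [1 + a**2, 2 * a * b, b**2 - 1]

# The line x - y = 0 restricted to y = a*x + b:
line = lambda a, b: [1 - a, -b]

print(real_intersections(circle))  # every count is even: the circle is an oval
print(real_intersections(line))    # every count is odd: a line is a pseudo-line
```

The even/odd pattern persists for any choice of random lines, which is exactly the oval versus pseudo-line dichotomy.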

This last paragraph is in italics because it essentially tells us the topological structure of nonsingular plane curves.

### 9. Diamond Diagrams

We can now find the specific topological (and even some geometrical) structure of any particular real plane curve, at least up to degree six. We concentrate on the Newton’s hyperbola family of curves introduced in Section 2. These are not well conditioned, so they present interesting problems. It may be necessary to go to arbitrary-precision numbers to get further with these, although for well-conditioned curves I have used the methods of [1] for curves up to degree nine.

We first consider Harnack’s theorem [7] and related problems from Hilbert’s problem, Part 1 [8]. Harnack’s first theorem states that a nonsingular curve of degree can have at most topological components in . A rigorous proof requires advanced concepts in topology, but a heuristic proof is easy from Bézout’s theorem, especially given the ideas of ovals and pseudo-lines in the last section.
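For reference, Harnack's bound for a nonsingular curve of degree d is (d − 1)(d − 2)/2 + 1 components. A one-line helper (illustrative, not from the book's code) tabulates it for the degrees treated in this article:

```python
def harnack_bound(d):
    """Harnack's theorem: a nonsingular real projective plane curve of
    degree d has at most (d - 1)*(d - 2)/2 + 1 connected components."""
    return (d - 1) * (d - 2) // 2 + 1

for d in range(1, 7):
    print(d, harnack_bound(d))
# degrees 1..6 give bounds 1, 1, 2, 4, 7, 11
```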

As mentioned in the last section, an oval is a loop that cuts the Möbius band in two parts, one topologically a disk. That part is known as the *interior* of the oval. It is possible that the interior of an oval contains another oval. Consider the following example with fractional linear transformation given by matrix that cuts the axis out of the affine plane.

We say the smaller oval has *depth* 2. If there were another oval inside that oval, it would have depth 3, and so on. It is easy to prove that the maximal depth of an oval in an irreducible curve of degree is ; simply consider a line through a point in the interior of the deepest oval and apply Bézout’s theorem. The next example generalizes this.

Continuing this way, we can in principle construct an oval of depth using a curve of degree for even and a curve of depth for odd.
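The nesting can be verified numerically. The sketch below (in Python, with illustrative names; the example curve is a reducible product of concentric circles rather than an irreducible curve, used only to exhibit nesting) counts the sign changes of f along a ray from far away toward the origin. Each oval of a nest centered at the origin is crossed exactly once along the ray, so the count is the depth, consistent with the Bézout argument above: a line through the deepest interior meets each nested oval at least twice.

```python
import numpy as np

def nest_depth_along_ray(f, r_max=10.0, n=100000):
    """Count sign changes of f along the positive x axis from r_max down to
    the origin; for ovals nested around the origin, this equals the depth."""
    xs = np.linspace(r_max, 1e-6, n)
    vals = f(xs, 0.0)
    return int(np.sum(np.diff(np.sign(vals)) != 0))

# Product of k concentric circles: a degree-2k curve with an oval of depth k.
def nested_circles(k):
    return lambda x, y: np.prod(
        [x**2 + y**2 - r**2 for r in range(1, k + 1)], axis=0)

print(nest_depth_along_ray(nested_circles(3)))  # → 3 (degree-6 curve, depth 3)
```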

An -curve is a nonsingular curve with the maximum number of components. To best show the possible arrangements of the components of an -curve, we use *diamond diagrams*. We have two main types, first the *Descartes–Viro diagrams* (or more simply the *Viro diagrams*), which depend on the signs of coefficients of the equation of the curve [9]. These diagrams turn out to be in 1-1 correspondence with the Newton hyperbolas. We also use *Gauss diagrams*, which show the complementary positive and negative value sets of .

The code for drawing diamond diagrams is very long and explained in [1] and [2]; in this article we do not give code, only graphics.

For the Viro diagram in the first quadrant including the positive axes, the color of the dot at the point is green if the coefficient of is positive and red if it is negative. For a Viro diagram, we do not allow equations with any coefficient equal to 0. The curve then separates the red and green lattice points.
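The coloring rule is simple to state in code. Here is a minimal sketch, assuming the coefficients are supplied as a dictionary keyed by exponent pairs (the function name and the degree-two example polynomial are hypothetical, not from the book):

```python
from itertools import product

def viro_colors(coeffs, degree):
    """Color the lattice point (i, j) by the sign of the coefficient of
    x^i y^j: green for positive, red for negative, as in the text.
    Every term with i + j <= degree must be nonzero."""
    colors = {}
    for i, j in product(range(degree + 1), repeat=2):
        if i + j <= degree:
            c = coeffs[(i, j)]
            if c == 0:
                raise ValueError(f"zero coefficient at {(i, j)}")
            colors[(i, j)] = "green" if c > 0 else "red"
    return colors

# Hypothetical degree-2 example: x^2 + x*y - y^2 + x - y + 1
coeffs = {(0, 0): 1, (1, 0): 1, (0, 1): -1,
          (2, 0): 1, (1, 1): 1, (0, 2): -1}
print(viro_colors(coeffs, 2))
```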

As an example, consider Newton hyperbola 413 (Figure 9).

**Figure 9.** The Viro diagram, region plot, graph and diamond diagram for the function `nh413`.

In this case, the Viro diagram and Gauss diagram (not shown) are the same, other than the color of the lattice points; orange indicates where and brown where . A graph is given using only the infinite points, which are labeled , , , . The outer boundary of the diamond represents the infinite line. The diamond diagram indicates that: (1) on the positive axis the curve crosses three times; (2) it does not cross the negative axis; and (3) it crosses the positive axis once and the negative axis twice. The Viro diagram gives the maximal number of crossings according to Descartes’s theorem on each positive and negative , and axis, viewing as a single-variable polynomial restricted to these lines. In the projective plane, the axis is the line of infinite points where infinite points in the first/third quadrant are positive and those in the second/fourth quadrant are considered negative in this context.
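Descartes's rule of signs [9] is what produces these crossing bounds: restricting the curve's equation to an axis gives a polynomial in one variable, and the number of its positive roots is at most the number of sign changes in its coefficient sequence. A small Python illustration (the example polynomial is hypothetical, chosen only to show the count):

```python
def descartes_bound(coeffs):
    """Descartes's rule of signs: the number of positive real roots of a
    polynomial is at most the number of sign changes in its coefficient
    sequence (zeros skipped)."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(s != t for s, t in zip(signs, signs[1:]))

# Hypothetical restriction of a curve to the positive x axis:
# x^3 - 4x^2 + x + 6 has sign pattern + - + +, so at most 2 positive
# roots; it has exactly 2 (x = 2 and x = 3, with x = -1 the third root).
print(descartes_bound([1, -4, 1, 6]))  # → 2
```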

In this example, the crossing points are given as follows.

Let , , be the infinite points.

The Newton hyperbola 613 is more complicated (Figure 10).

**Figure 10.** The Viro diagram for the function `nh613`.

We see there are three ambiguous cells, that is, cells whose four lattice points have two of one color and two of the other. There are two different possible ways to connect the regions, shown by dashed curves in aqua and magenta. Without further investigation, there is no a priori way to determine the correct choice; a slight perturbation of the curve can affect it. A region plot suggests an answer.
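An ambiguous cell can be detected mechanically. The sketch below assumes the characterization that a cell is ambiguous when its corner colors form a checkerboard (diagonal corners match, adjacent corners differ), so the curve separating red from green can be drawn in two ways; the encoding of lattice-point colors as a dictionary and the function name are hypothetical, not the book's code.

```python
def ambiguous_cells(colors):
    """Locate unit lattice cells whose corner colors form a checkerboard;
    in such a cell the red/green-separating curve can be connected in
    two different ways, so the cell is ambiguous."""
    cells = []
    for (i, j) in colors:
        corners = [(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)]
        if not all(c in colors for c in corners):
            continue  # cell extends outside the colored region
        if (colors[(i, j)] == colors[(i + 1, j + 1)]
                and colors[(i + 1, j)] == colors[(i, j + 1)]
                and colors[(i, j)] != colors[(i + 1, j)]):
            cells.append((i, j))
    return cells

# Hypothetical patch with one checkerboard (ambiguous) cell:
colors = {(0, 0): "green", (1, 0): "red",
          (0, 1): "red",   (1, 1): "green"}
print(ambiguous_cells(colors))  # → [(0, 0)]
```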

Checking infinite points and critical points confirms that there is nothing unexpected going on outside the region plot, so we get the Gauss diagram and graph (Figure 11).

**Figure 11.** Gauss diagram and graph.

In this case, a tiny perturbation changes the geometry and the Gauss diagram (Figure 12).

**Figure 12.** The region plot and Gauss diagram for the perturbed `nh613` are different from `nh613`.

Originally, the negative complement was connected and the positive complement had three components; after the perturbation, it is the positive complement that is connected. Luckily, we did not need to change any values of the lattice points when changing from the Viro diagram to the Gauss diagram. In general, the user will need to do that. See [1] or some later examples.

Now that we have explained our diagrams, we can show some -curves. Hilbert described a series of -curves for each degree; these are specified by Viro diagrams and hence exist by the work of Viro. We simply give the diagrams here (Figure 13). For more information, see [1].

**Figure 13.** Viro diagrams of -curves of degree .

Hilbert suggested other possibilities with more nesting in degree six.

Many more details on these diagrams and Hilbert’s problem [8] are given in [1].

We now have all of our tools. In [1] we illustrate more complicated examples, two of them the curve and the Newton hyperbola 336941. Both of these curves have interesting behavior at or near the infinite line, so a contour plot, even with large scale, cannot show everything.

### 10. Conclusion

At present, we have shown how to analyze and plot curves of degree up to six in various ways. For well-conditioned curves, these machine-number methods often work in higher degree; the author has had success with curves of degree eight and nine. To deal adequately with Newton hyperbolas of degree greater than six, one would probably need to rewrite some of the code to use arbitrary precision.

Our forthcoming book is a first attempt to apply numerical methods to a formerly abstract subject. There is a lot more that can be done in this area. We hope the book will be a starting point.

### Acknowledgments

I want to thank the people at Wolfram Research for their help on the book project, especially Jeremy Sykes, Daniel Lichtblau and, for this article, George Beck.

### References

[1] | B. H. Dayton, A Numerical Approach to Real Algebraic Curves with the Wolfram Language, Champaign, IL: Wolfram Media, 2018. www.wolfram-media.com/products/dayton-algebraic-curves.html. |

[2] | Global Functions. (Jul 18, 2018) barryhdayton.space/curvebook/GlobalFunctionsTMJ.nb. |

[3] | E. W. Weisstein. “Bifurcation” from Wolfram MathWorld—A Wolfram Web Resource. mathworld.wolfram.com/Bifurcation.html. |

[4] | F. S. Macaulay, The Algebraic Theory of Modular Systems, Cambridge: Cambridge University Press, 1916. |

[5] | B. H. Dayton, T. Y. Li and Z. Zeng, “Multiple Zeros of Nonlinear Systems,” Mathematics of Computation, 80(276), 2011, pp. 2143–2168. www.ams.org/journals/mcom/2011-80-276/S0025-5718-2011-02462-2/S0025-5718-2011-02462-2.pdf. |

[6] | S. S. Abhyankar, Algebraic Geometry for Scientists and Engineers, Providence, RI: AMS, 1990. |

[7] | E. W. Weisstein. “Harnack’s Theorems” from Wolfram MathWorld—A Wolfram Web Resource. mathworld.wolfram.com/HarnacksTheorems.html. |

[8] | E. W. Weisstein. “Hilbert’s Problems” from Wolfram MathWorld—A Wolfram Web Resource. mathworld.wolfram.com/HilbertsProblems.html. |

[9] | E. W. Weisstein. “Descartes’ Sign Rule” from Wolfram MathWorld—A Wolfram Web Resource. mathworld.wolfram.com/DescartesSignRule.html. |

B. H. Dayton, “A Wolfram Language Approach to Real Numerical Algebraic Plane Curves,” The Mathematica Journal, 2018. dx.doi.org/doi:10.3888/tmj.20-7.

### About the Author

Barry H. Dayton is Professor Emeritus at Northeastern Illinois University, where he taught for 33 years. His Ph.D. was in the field of algebraic topology, but he has done research in a variety of fields, including algebraic geometry and numerical algebraic geometry.

**Barry H. Dayton**

*Department of Mathematics
Northeastern Illinois University
Chicago, IL 60625-4699*

*barry@barryhdayton.us*

*barryhdayton.space*