*Symbolic-Numeric Algebra for Polynomials (SNAP)* is a prototype package that includes basic functions for computing approximate algebraic properties, such as the approximate GCD of polynomials. Our aim is to show how the unified tolerance mechanism we introduce in the package works. Using this mechanism, we can carry out approximate calculations under certified tolerances. In this article, we demonstrate the functionality of the currently released package (Version 0.2), which is downloadable from wwwmain.h.kobe-u.ac.jp/~nagasaka/research/snap/index.phtml.en.

This article has not been updated for *Mathematica* 8.

### Introduction

Recently, there have been many results in the area of symbolic-numeric algorithms, especially for polynomials (for example, approximate GCD for univariate polynomials [1, 2, 3, 4] and approximate factorization for bivariate polynomials [5, 6, 7, 8]). We believe these results have practical value and should be combined and implemented in a single integrated computer algebra system such as *Mathematica*. In fact, *Maple* has such a special package, called *SNAP (Symbolic-Numeric Algorithms for Polynomials)*. Currently, ours is the only such package available for *Mathematica*.

We have been developing our *SNAP* package for *Mathematica*, where the name is an abbreviation for Symbolic-Numeric **Algebra** for Polynomials. We use *algebra* to mean the ability to chain approximate operations; for example, computing an approximate GCD between one empirical polynomial and the nearest singular polynomial of another empirical polynomial, itself computed by SNAP functions. This continuous applicability is more important for practical computations than the number of algorithms already implemented, especially for average users.

Our aim is to provide practical implementations of SNAP functions with a unified tolerance mechanism for polynomials, similar to *Mathematica*'s floating-point numbers or Kako and Sasaki's effective floating-point numbers [9]. Our idea is very simple: we only have to add new data structures representing polynomials with tolerances, together with basic calculation routines and SNAP functions operating on those structures. At this time, only simple tolerance representations (2-norm, 1-norm, and ∞-norm) for coefficient vectors of polynomials and a few SNAP functions are implemented, but the package is an ongoing project (Version 1.0 should be released in March 2007).

Features that are new in Version 0.2 [10, 11] include:

- SNAP structures for multivariate polynomials (more than one variable).

- Basic functions that can operate on multivariate polynomials.

- New compatibility with the built-in functions `PolynomialReduce` and `D`.

- New SNAP functions, such as `CoprimeQ`, `AbsolutelyIrreducibleQ`, `SeparationBound`, `Factor`, and `FactorList`.

#### Difference Between Numerical and Algebraic Computations

Let us consider a typical numerical computation—finding numerical roots:

We might assume that polynomials that are close have roots that are close.

If this assumption fails, the problem is called "numerically unstable." Hence, the "numerical stability" of the given problem and the "significant digits" of the calculated solutions are important in numerical computations. *Mathematica*'s accuracy and precision tracking system is very useful for this purpose and is based on the idea that the higher the precision used, the closer the computed solutions are to the true ones.
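The assumption can be illustrated numerically (a small Python sketch, not part of the package): perturbing a coefficient of a quadratic with well-separated roots moves the roots by a comparably small amount.

```python
import math

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c via the quadratic formula (real case)."""
    disc = math.sqrt(b * b - 4 * a * c)
    return sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])

# x^2 - 3x + 2 has roots 1 and 2.
r = quadratic_roots(1, -3, 2)
# Perturb one coefficient by 1e-8: the simple, well-separated roots
# move by roughly the same order of magnitude.
r_pert = quadratic_roots(1, -3 + 1e-8, 2)
shift = max(abs(x - y) for x, y in zip(r, r_pert))
print(r, shift)
```

For multiple or clustered roots this proportionality breaks down, which is exactly the "numerically unstable" case described above.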

Unfortunately, this concept may not be correct for algebraic computations. Let us consider a factorization:

We cannot factor this polynomial merely by increasing the precision of the computation, because algebraic properties are generally not continuous. Forward and backward error analyses may not be the solution either, though they can be of supplemental use. The previous polynomial, for example, can be either reducible or irreducible, and we may not be able to tell which property is correct.

We note that there are two extreme approaches to operating with empirical polynomials: using inexact approximations or exact approximations. For example, let us consider factoring bivariate polynomials. The following numerical polynomial is reducible if we rationalize its coefficients up to machine precision:

(1)
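The rationalization step mentioned above can be sketched with Python's `fractions` module (an illustration only; the package works on Mathematica expressions): a floating-point coefficient is replaced by the simplest nearby rational.

```python
from fractions import Fraction

# A machine-precision coefficient that "should" be 1/3.
c = 0.3333333333333333
# Rationalize up to (roughly) machine precision: keep the simplest
# fraction whose denominator stays below a bound tied to the precision.
rat = Fraction(c).limit_denominator(10**12)
print(rat)  # 1/3
```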

This factorization is fine. The problem is operating with the following polynomial (we added small perturbation terms to the previous polynomial):

(2)

In general, we may not know whether these perturbation terms are numerical errors or actually exist in the polynomial. The inexact approximation approach tries to factor numerically, regardless of significant digits or magnitude of errors. Although we can check its backward error after calculations, we cannot know if the backward error is globally minimized or if the number of factors is globally maximized. Moreover, there is a possibility that an algorithm cannot find appropriate factors even if there are approximate factors.

Therefore, to do algebraic operations, we have to guarantee the properties for all possible polynomials that are sufficiently close to the given polynomial. For example, assuming the precision is 16, the set of polynomials corresponding to polynomial (2) includes the following:

(3)

(4)

Using exact approximations tries to guarantee such properties. However, this approach is more difficult than the first, because we have to guarantee an algebraic property for an infinite number of polynomials that are sufficiently close to the given polynomial.

### Tolerance Mechanisms and Implementations

First, we introduce a framework for our package, which includes SNAP structures and their implementations.

**Table 1.** Basic functions.

#### Tolerance Mechanisms

We introduce the following data structures for empirical polynomials. They are very simple, but there has been no system in which we can use such structures automatically, in a way similar to *Mathematica*'s floating-point numbers or Kako and Sasaki's effective floating-point numbers [9]. These structures are different from those used in the previous version of this package [10, 11]. The latest version can operate on polynomials that have more than one variable (bivariate or multivariate polynomials).

**Definition 1 (SNAP Structures).** We define the following SNAP structures (sets of neighboring approximate polynomials) for the given polynomial and tolerance:

(5)

(6)

In the rest of this article, an unsubscripted structure stands for any of the three variants (2-norm, 1-norm, or ∞-norm).
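As a rough model of these structures (a hypothetical Python sketch with coefficient lists in place of Mathematica polynomials): a SNAP structure pairs a representative coefficient vector with a tolerance, and a polynomial belongs to the structure when the chosen norm of the coefficient difference is at most the tolerance.

```python
from dataclasses import dataclass
from itertools import zip_longest

def norm(coeffs, p):
    """p-norm of a coefficient vector; p is 1, 2, or float('inf')."""
    if p == 1:
        return sum(abs(c) for c in coeffs)
    if p == 2:
        return sum(abs(c) ** 2 for c in coeffs) ** 0.5
    return max((abs(c) for c in coeffs), default=0.0)

@dataclass
class Snap:
    coeffs: list      # representative polynomial, constant term first
    tol: float        # tolerance (radius of the neighborhood)
    p: float = 2      # which coefficient norm defines the neighborhood

    def element(self, other):
        """Is `other` (a coefficient list) inside this neighborhood?"""
        diff = [a - b for a, b in zip_longest(self.coeffs, other, fillvalue=0.0)]
        return norm(diff, self.p) <= self.tol

s = Snap([2.0, -3.0, 1.0], tol=1e-3)     # x^2 - 3x + 2, radius 1e-3
print(s.element([2.0005, -3.0, 1.0]))    # True
print(s.element([2.1, -3.0, 1.0]))       # False
```

The package's `Element` command, demonstrated later, plays the role of the `element` method here.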

**Remark 1.** You might think that the following definition is better than the previous definition:

(7)

(8)

The coefficient vector norms for polynomials have properties similar to normal vector norms (see [12] for a discussion of these basic properties); hence, we have the following corollary.

**Corollary 1.** We have the following properties:

(9)

(10)

(11)

(12)

(13)

(14)
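The precise statements are given in the equations above; the standard relations they build on, such as ∥v∥∞ ≤ ∥v∥2 ≤ ∥v∥1 and the triangle inequality, can be spot-checked numerically (a sketch under the assumption that the corollary concerns these basic coefficient-norm properties):

```python
import random

def norms(v):
    """Return the 1-, 2-, and inf-norms of a coefficient vector."""
    n1 = sum(abs(c) for c in v)
    n2 = sum(c * c for c in v) ** 0.5
    ninf = max(abs(c) for c in v)
    return n1, n2, ninf

random.seed(0)
ok = True
for _ in range(1000):
    v = [random.uniform(-1, 1) for _ in range(6)]
    w = [random.uniform(-1, 1) for _ in range(6)]
    n1, n2, ninf = norms(v)
    # Norm equivalence: inf-norm <= 2-norm <= 1-norm.
    ok &= ninf <= n2 + 1e-12 <= n1 + 1e-12
    # Triangle inequality for the 2-norm.
    s = [a + b for a, b in zip(v, w)]
    ok &= norms(s)[1] <= norms(v)[1] + norms(w)[1] + 1e-12
print(ok)
```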

**Lemma 1.** We have the following properties for polynomials and :

(15)

(16)

(17)

**Proof of Lemma 1.** The lemma is proved by the following properties of vector and matrix norms [12], since any multiplication of polynomials can be done as a matrix multiplication:

(18)

(19)

(20)

(21)

(22)
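The proof works through matrix norms; the submultiplicative behavior it captures can be checked directly on coefficient convolutions, for example ∥fg∥₁ ≤ ∥f∥₁∥g∥₁ (a sketch, not the package's code):

```python
import random

def mul(f, g):
    """Coefficient convolution: the product of two polynomials."""
    h = [0.0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return h

def norm1(v):
    return sum(abs(c) for c in v)

random.seed(1)
ok = True
for _ in range(500):
    f = [random.uniform(-2, 2) for _ in range(5)]
    g = [random.uniform(-2, 2) for _ in range(4)]
    # ||f*g||_1 <= ||f||_1 * ||g||_1, by the triangle inequality
    # applied to each entry of the convolution.
    ok &= norm1(mul(f, g)) <= norm1(f) * norm1(g) + 1e-9
print(ok)
```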

**Proof of Lemma 2.** The lemma follows directly from the triangle inequality for norms.

The condition of Lemma 2 is necessary for ensuring equality of degrees. Without the condition, unusual results such as those in Remark 1 may occur. However, we sometimes have to carry out computations like the following and determine the resulting set. This is especially important for divisions (quotient and remainder) and reductions (for non-univariate polynomials):

(23)

We introduce a simple rule to handle this. If the norm of the leading coefficients of the representative polynomial (with respect to the term order) is not larger than the tolerance, then we rewrite the SNAP structure as follows:

(24)

where denotes (all the leading monomials of , with respect to ).
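This rewriting rule can be modeled as follows (a hypothetical univariate Python sketch with coefficient lists, constant term first; the package's rule also adjusts the structure's monomial set): when the leading coefficient is within the tolerance of zero, the neighborhood contains polynomials of lower degree, so the leading term is dropped.

```python
def trim_leading(coeffs, tol):
    """Drop leading coefficients that the tolerance makes
    indistinguishable from zero; the neighborhood then contains
    genuinely lower-degree polynomials."""
    c = list(coeffs)
    while len(c) > 1 and abs(c[-1]) <= tol:
        c.pop()  # the leading term is within the tolerance of zero
    return c

# The x^2-term is 1e-12 with tolerance 1e-10: rewrite as degree 1.
print(trim_leading([2.0, -3.0, 1e-12], 1e-10))  # [2.0, -3.0]
```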

#### Tolerance Implementations

According to the previous definitions, we have implemented the structures, basic operations (addition and multiplication) on the structures, and functions that transform one structure into another.

We define the following expression to represent the three SNAP structures, where `scheme` can be `AbsolutePolynomial2Norm`, `AbsolutePolynomial1Norm`, or `AbsolutePolynomialiNorm`, respectively. Other schemes will be implemented in a future release.

SNAP[f,,{{u_{1},e_{1}},…,{u_{r},e_{r}}},scheme]

Using this structure, we have implemented basic functions provided by the mechanisms and now show some of their expressions. We note that the package also provides functions that automatically transform the given polynomial to the previous representation.

One aim of the package is providing an environment in which we use SNAP structures transparently, like *Mathematica*‘s floating-point numbers; hence, we have implemented the following `Format` code.

In this version, we suppose that the four basic operations between *Mathematica*'s bigfloat numbers and its interval arithmetic guarantee the precision of their results, since we basically use *Mathematica*'s own arithmetic for estimations. All the implementations depend on *Mathematica*'s accuracy and precision tracking system, though in our implementation we round up all the error parts in case this assumption is incorrect.

#### Tolerance Examples

Next we show some examples of the SNAP tolerance mechanism. This loads the package.

This gives a SNAP structure. The output looks like a normal expression.

This gives the of the previous SNAP structure.

The error bound of this SNAP structure, which is a set of polynomials, is given by `Tolerance`.

Each calculation enlarges its error bound.

By using `Element`, we can check whether a polynomial is included in the given SNAP structure or not. The error bound of the following SNAP structure is about ; hence, we have these results.

`Normal` gives corresponding representative polynomials in the normal form.

We can expand the expression of the representative polynomial of a SNAP structure.

Any result of a substitution for a SNAP structure also has a guaranteed accuracy.

We show the situation discussed in the last paragraph of the Tolerance Mechanisms subsection.

Because this rewriting can be important for users, this package generates the displayed warning.

### Other Basic Operations

We implemented other basic operations: a polynomial division (quotient and remainder), reductions (for non-univariate polynomials), and root-finding with error considerations.

**Table 2.** Other basic functions.

#### Polynomial Division (Univariate Case)

Let and be the following polynomials of degrees and , , with tolerances and , respectively:

(25)

Dividing by is defined as follows, with polynomials and :

(26)

For SNAP structures, we have to check that the polynomials and for the given and are properly related to and :

(27)
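The exact division underlying this check, f = qg + r with deg r < deg g, can be sketched as follows (a Python illustration with coefficient lists, constant term first; not the package's implementation):

```python
def polydiv(f, g):
    """Quotient and remainder of f by g (coefficient lists,
    constant term first), by the schoolbook division algorithm."""
    f = list(f)
    q = [0.0] * max(len(f) - len(g) + 1, 1)
    lead = g[-1]
    for k in range(len(f) - len(g), -1, -1):
        q[k] = f[k + len(g) - 1] / lead   # match the current leading term
        for j, b in enumerate(g):
            f[k + j] -= q[k] * b          # subtract q_k * x^k * g
    return q, f[: len(g) - 1]

# (x^2 - 3x + 2) / (x - 1) = (x - 2) with remainder 0.
q, r = polydiv([2.0, -3.0, 1.0], [-1.0, 1.0])
print(q, r)  # [-2.0, 1.0] [0.0]
```

For SNAP structures, the corollaries below additionally bound how the tolerances of f and g propagate into q and r.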

We have the following corollaries [11]. Note the discussion at the end of the Tolerance Mechanisms subsection.

In these corollaries, is the matrix:

(28)

**Corollary 2.** If the following expression holds,

(29)

(30)

**Corollary 3.** If the following expression holds,

(31)

(32)

**Corollary 4.** If the following expression holds,

(33)

(34)

#### Polynomial Reduction (More than One Variable Case)

Since the SNAP structure is extended to multivariate polynomials in this version, we introduce polynomial reductions. Because a reduction can be done by two multiplications and one subtraction, we simply reduce polynomials by the ordinary algorithms, using the SNAP structures introduced in the previous section. Note that we assume the given polynomial basis is a Gröbner basis for all possible combinations of polynomials in the basis, and that the result, including its tolerance, depends on the specified term order.

#### Root Finding with Error Considerations

There are many root-finding methods; basically, *Mathematica* uses the Jenkins-Traub method. Since roots found by numerical methods may not be the exact ones, we have to account for numerical errors after the calculations. For example, Smith [13] studied error bounds for numerical roots.

However, polynomials with errors in their coefficients extend the problem. Terui and Sasaki [14] studied an extended version of Smith's work, by which we can bound the errors in numerical roots of polynomials that have errors in their coefficients, including polynomials represented as SNAP structures.

(35)
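A simpler a-posteriori bound of the same flavor (a sketch; not the bound of Smith or Terui and Sasaki used by the package) follows from |p(z)| = |lc| · ∏ᵢ |z − wᵢ|: the smallest factor is at most the geometric mean, so some exact root w satisfies |z − w| ≤ (|p(z)|/|lc|)^(1/n).

```python
def eval_poly(coeffs, z):
    """Horner evaluation; coeffs constant term first."""
    v = 0.0
    for c in reversed(coeffs):
        v = v * z + c
    return v

def root_distance_bound(coeffs, z):
    """Some exact root w satisfies |z - w| <= (|p(z)|/|lc|)**(1/n),
    since |p(z)| = |lc| * prod |z - w_i| and the smallest factor
    is at most the geometric mean of the factors."""
    n = len(coeffs) - 1
    return (abs(eval_poly(coeffs, z)) / abs(coeffs[-1])) ** (1.0 / n)

# x^2 - 3x + 2 near the root 1: the bound covers the true distance.
z = 1.001
print(root_distance_bound([2.0, -3.0, 1.0], z))
```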

#### Partial Differentiation with Error Considerations

Computing the partial derivatives of the given polynomial is also important. We have the following trivial lemma to calculate the derivatives of SNAP structures.

(36)
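Since the statement is given in the equation above, here is only a plausible model (a hedged Python sketch, not the package's lemma): differentiating term by term turns each error coefficient eᵢ into i·eᵢ with i ≤ n, so the tolerance grows by at most a factor of the degree n in any of the three norms.

```python
def derivative(coeffs):
    """d/dx of a polynomial given as a coefficient list, constant
    term first: coefficient c_i of x^i becomes i*c_i of x^(i-1)."""
    return [i * c for i, c in enumerate(coeffs)][1:] or [0.0]

def derivative_tol(tol, degree):
    """Crude tolerance bound for the derivative: each error
    coefficient e_i becomes i*e_i with i <= degree, so the error
    norm grows by at most a factor of `degree`."""
    return degree * tol

f = [2.0, -3.0, 1.0]            # x^2 - 3x + 2
print(derivative(f))            # [-3.0, 2.0]  (2x - 3)
print(derivative_tol(1e-6, 2))  # 2e-06
```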

#### Examples

Here we show some examples of basic operations of the package using the previous polynomial.

The following examples give all the roots of the given representative polynomial, with or without consideration of the error bound. We recommend comparing the following two results of `NSolve` and ``System`NSolve``. `NSolve` with a SNAP structure considers all the possible polynomials within the structure; hence, the tolerances of its roots are larger than those from `NSolve` without a SNAP structure.

This gives a result with error considerations according to Lemma 3.

This gives a result without error considerations.

To show division examples, define the following two polynomials.

This gives the quotient and remainder of `g2` by `h` with error considerations. Note that though `h` is not in a SNAP structure, it is automatically transformed into one; the SNAP functions are overloaded because the built-in functions cannot handle arguments that include a SNAP structure.

These commands are the same for univariate polynomials.

In the next example, `PolynomialRemainder` gives a pseudo-zero number, since `g` divides `g2`. However, these polynomials have error bounds, and we cannot argue that the remainder is exactly zero; hence, the following warning messages are generated.

SNAP functions return normal *Mathematica* numbers if the resulting degrees are not larger than zero.

Each operation or calculation enlarges error bounds; hence, tolerances of the roots also get larger.

The tolerance correction mechanism used in `NSolve` with SNAP structures assumes that the given representative polynomial does not have multiple roots. Therefore, the following warning message is generated if the given polynomial has multiple (or close) roots, and no tolerance correction is applied. This means that the output of `Tolerance` below is not reliable.

Partial derivatives can also be computed on SNAP structures.

The tolerance correcting method used in SNAP computes differences of nearly equal numbers, so it requires a certain precision. In this case, the working precision is not enough; hence, we increase it.

### Symbolic-Numeric Algorithm Implementation

Using the basic features, we have started to modify and implement known symbolic-numeric algorithms. In the current implementation, only one algorithm for each computation is used. Other algorithms will be implemented in the near future.

**Table 3.** Symbolic-numeric functions.

#### Approximate GCD and Divisors (Univariate Case)

Approximate GCDs have been studied since the early period of symbolic-numeric computation. The problem treated here is very simple: for the given polynomials and tolerance, find a polynomial of maximal degree that satisfies

(37)

where denotes 2, 1, or ∞. The polynomial is called an -GCD of the given polynomials with the given tolerance. Currently, we have implemented the algorithm by Pan [2], for the 2-norm case only.

We note that there are also other approximate GCDs that have slightly different definitions and approximate GCDs of multivariate polynomials. Those approximate GCDs will be implemented in a future release.

Considering approximate GCDs, the following concept of an -divisor is useful. For the given polynomial and the tolerance , we call a polynomial an -divisor of if satisfies

(38)

where denotes 2, 1, or ∞. Moreover, in this package, we call the corresponding quotient an -quotient. These concepts are used in Pan's algorithm, and currently we have implemented only the 2-norm case.
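A direct approximate-divisibility check of this shape can be sketched from the definition (a Python illustration with the 2-norm; not Pan's algorithm): divide the representative polynomial by the candidate divisor and test whether the remainder's norm is within the tolerance.

```python
def polydiv(f, g):
    """Quotient and remainder of f by g (coefficient lists,
    constant term first)."""
    f = list(f)
    q = [0.0] * max(len(f) - len(g) + 1, 1)
    for k in range(len(f) - len(g), -1, -1):
        q[k] = f[k + len(g) - 1] / g[-1]
        for j, b in enumerate(g):
            f[k + j] -= q[k] * b
    return q, f[: len(g) - 1]

def is_approximate_divisor(f, d, tol):
    """d approximately divides f when ||f - d*quot||_2 = ||rem||_2 <= tol."""
    _, rem = polydiv(f, d)
    return sum(c * c for c in rem) ** 0.5 <= tol

# x - 1.0000001 nearly divides x^2 - 3x + 2; x - 1.5 does not.
print(is_approximate_divisor([2.0, -3.0, 1.0], [-1.0000001, 1.0], 1e-5))  # True
print(is_approximate_divisor([2.0, -3.0, 1.0], [-1.5, 1.0], 1e-5))        # False
```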

Note that, theoretically, -GCD, -divisor, and -quotient are exact polynomials, since only the given polynomials have perturbation parts, and -GCD, -divisor, and -quotient are treated as exact polynomials in those computations. However, due to numerical errors, in this package, -GCD, -divisor, and -quotient are treated as SNAP structures.

We also provide a coprimeness check function that uses the well-known fact that if the Sylvester matrix of the given two polynomials has full rank, then they are coprime [15].
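This test can be sketched as follows (stdlib Python; a simplified illustration of the idea, not the package's implementation): build the Sylvester matrix of the two polynomials and check numerical full rank by Gaussian elimination with a pivot threshold.

```python
def sylvester(f, g):
    """Sylvester matrix of f (degree m) and g (degree n); coefficient
    lists given highest degree first; the matrix is (m+n) x (m+n)."""
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    rows = []
    for i in range(n):  # n shifted copies of f
        rows.append([0.0] * i + list(f) + [0.0] * (size - m - 1 - i))
    for i in range(m):  # m shifted copies of g
        rows.append([0.0] * i + list(g) + [0.0] * (size - n - 1 - i))
    return rows

def numerical_rank(a, tol=1e-10):
    """Rank via Gaussian elimination with partial pivoting; pivots
    below `tol` count as zero."""
    a = [row[:] for row in a]
    rank = 0
    for col in range(len(a[0])):
        piv = max(range(rank, len(a)), key=lambda r: abs(a[r][col]), default=None)
        if piv is None or abs(a[piv][col]) <= tol:
            continue
        a[rank], a[piv] = a[piv], a[rank]
        for r in range(rank + 1, len(a)):
            factor = a[r][col] / a[rank][col]
            a[r] = [x - factor * y for x, y in zip(a[r], a[rank])]
        rank += 1
    return rank

f = [1.0, -3.0, 2.0]   # x^2 - 3x + 2 = (x-1)(x-2)
g = [1.0, -5.0, 6.0]   # x^2 - 5x + 6 = (x-2)(x-3): common root 2
s = sylvester(f, g)
print(numerical_rank(s) == len(s))  # False: rank-deficient, so not coprime
```

Note that with empirical coefficients the meaningful question is whether the matrix is full-rank for every polynomial in the tolerance neighborhood, which is what `CoprimeQ` addresses.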

#### Nearest Singular Polynomial (Univariate Case)

The nearest singular polynomial [16, 17, 18] of is the nearest polynomial that has a double root, minimizes , and has the same degree as . A similar problem that finds the nearest polynomial with constrained roots has been studied in [19, 20].

In this package, finding the nearest singular polynomial can be written as follows.

For the given polynomial and tolerance , find a polynomial satisfying

(39)

where denotes 1, 2, or ∞; if the output is `False`, the nearest singular polynomial does not exist in the given SNAP structure. The current version of the package solves this problem using the known algorithm [18], so the command can solve the problem only for the 2-norm case. The constrained-roots version of the problem will be solvable in a future release.

Note that the current implementation outputs a normal polynomial (not in a SNAP structure), and the tolerance given to the nearest-singular-polynomial command does not have the same meaning as in the other commands of the *SNAP* package. For more information, see [18].

#### Irreducibility Testing for Bivariate Polynomials

Conventional factorization algorithms may always output "absolutely irreducible" for numerical or empirical polynomials, since the given polynomial may have error parts in its coefficients even if the original polynomial is reducible. Moreover, if a numerical factorization algorithm, for example [7], outputs "no nontrivial factors found," that does not mean "absolutely irreducible," since those algorithms basically find factors only when the given polynomial is close enough to a reducible polynomial. Hence, the "irreducibility testing" problem is still important for numerical or empirical polynomials [21, 22, 23, 24, 25].

In this package, the problem becomes:

The algorithm implemented in this package (Nagasaka [23]) is an improved version of the algorithm of Kaltofen and May [22] for bivariate polynomials. Note that the current implementation is based on the algorithm for the 2-norm case; hence, there is room for improvement for the other norms. A version for more than two variables will be implemented in a future release; the main obstacle is efficiency, and further study is needed.

The previously mentioned methods [22, 23] use the coefficients of the given polynomial directly, so we can adapt them to *Mathematica*'s coefficient-wise accuracy concept. This is better than the original methods, because treating tolerances as polynomial norms tends to overestimate. The current implementation can do this for polynomials not in SNAP structures.

#### Numerical Factorization of Multivariate Polynomials

For the same reason as in the irreducibility test, we have to use completely different factorization algorithms for numerical or empirical polynomials. In this package, we have implemented Sasaki's algorithm [7] with a degree bound studied by Bostan et al. [26]. Currently, the command is not stable for nonmonic polynomials, since the approximate GCD algorithms for multivariate polynomials needed to factor nonmonic polynomials are not yet implemented.

Note that the given polynomial may have approximate factors (so-called numerical or pseudo-factors) even if the algorithm outputs "absolutely irreducible" or "no factors." Therefore, you are encouraged to use the preceding irreducibility testing when you do not get approximate factors.

#### Examples

Here we show some examples of SNAP operations using this previously defined polynomial.

Here we introduce another polynomial. Though it is not in a SNAP structure, treating it as such is acceptable.

The built-in function outputs that these polynomials are coprime.

The *SNAP* package can compute the -GCD as follows, where . Note that the current implementation of `PolynomialGCD` for SNAP structures is still experimental and that the definition of the greatest common divisor of SNAP structures may change in the future.

We can check its approximate divisibility.

`ApproximateQuotient` minimizes so `aqofgh` is not a constant in this case.

Without any tolerance, the worst tolerance between the given polynomials is used.

This also checks whether they are coprime or not within their tolerance.

This gives the nearest singular polynomial to `g`, so the output polynomial `nsg` has a double root. Note that the current implementation of `NearestSingularPolynomial` is still experimental and that the definition of the nearest singular polynomial of a SNAP structure may change in the future.

To show an example of absolute irreducibility testing, we define the following bivariate polynomial.

This tests its absolute irreducibility within the given tolerance .

If the given polynomial has a SNAP structure, its tolerance is used, so the following evaluations give the same result.

We can also compute a separation bound of `f`. In this case, all the polynomials of are absolutely irreducible.

A warning message is generated if the package routines encounter a machine precision number. We recommend not using machine precision numbers.

This gives an example using the algorithm adapted for *Mathematica*‘s coefficient-wise accuracy concept. Hence, changing all the coefficients within their accuracy does not change its absolute irreducibility.

For numerical polynomials, the built-in function returns the input unchanged.

With the *SNAP* package, for example, we can factor it numerically with a backward error bound.

If we use a SNAP structure, its tolerance is automatically used as a backward error bound.

### Conclusion

The *SNAP* package is useful for almost all users who have to work with polynomials with errors in their coefficients. Users may think that *Mathematica* has its own accuracy and precision system, and therefore another structure like those in SNAP is unnecessary. That may become true in the future; for now, however, most of the latest algorithms for numerical or empirical polynomials cannot operate with coefficient-wise accuracy and precision. Using only significant digits, as *Mathematica* does, cannot answer algebraic problems, though it can guarantee significant digits of coefficients generated by polynomial arithmetic. Moreover, most of the algorithms in symbolic-numeric computation have to use matrix computations and are not compatible with coefficient-wise concepts, since they usually use matrix norms. For some algorithms (absolute irreducibility testing, for example) we can combine them with *Mathematica*'s coefficient-wise error scheme, and we plan to incorporate this in a future release.

Moreover, we are considering whether computing Canonical Comprehensive Gröbner Bases (CCGB) should be integrated into the *SNAP* package, since we have implemented CCGB in *Mathematica*, and some kind of CCGB is the only way to treat numerical errors exactly: the errors can be represented as parameters in the coefficients. However, this method is so time-consuming that this release does not include CCGB routines. We note that *Mathematica* can compute Gröbner bases numerically, but we do not think the results are mathematically guaranteed. Nevertheless, *Mathematica*'s built-in computation of numerical Gröbner bases is more advanced than finding pseudo-solutions numerically, which is very difficult, and there are few known academic results.

### Acknowledgment

This research was partially supported by Grants-in-Aid for Scientific Research from the Japanese Ministry of Education, Culture, Sports, Science and Technology (#16700016). We also wish to thank the anonymous referees for their useful suggestions.

### References

[1] V. Y. Pan, “Approximate Polynomial GCDs, Padé Approximation, Polynomial Zeros, and Bipartite Graphs,” in Proceedings of the Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, San Francisco, CA, New York: ACM Press, and Philadelphia: SIAM Publications, 1998 pp. 68-77.

[2] V. Y. Pan, “Computation of Approximate Polynomial GCDs and an Extension,” Information and Computation, 167(2), 2001 pp. 71-85.

[3] B. Beckermann and G. Labahn, “When Are Two Numerical Polynomials Relatively Prime?” Journal of Symbolic Computation, 26(6), 1998 pp. 677-689.

[4] I. Z. Emiris, A. Galligo, and H. Lombardi, “Certified Approximate Univariate GCDs,” Journal of Pure and Applied Algebra (Special Issue on Algorithms in Algebra), 117 & 118, 1997 pp. 229-251.

[5] R. M. Corless, M. W. Giesbrecht, M. van Hoeij, I. S. Kotsireas, and S. M. Watt, “Towards Factoring Bivariate Approximate Polynomials,” in Proceedings of the 2001 International Symposium on Symbolic and Algebraic Computation (ISSAC 2001), London, Ontario, Canada, New York: ACM Press, 2001 pp. 85-92.

[6] Z. Mou-tan and R. Unbehauen, “Approximate Factorization of Multivariate Polynomials,” Signal Processing, 14, 1988 pp. 141-152.

[7] T. Sasaki, “Approximate Multivariate Polynomial Factorization Based on Zero-Sum Relations,” in Proceedings of the 2001 International Symposium on Symbolic and Algebraic Computation (ISSAC 2001), London, Ontario, Canada, New York: ACM Press, 2001 pp. 284-291.

[8] Y. Huang, W. Wu, H. J. Stetter, and L. Zhi, “Pseudofactors of Multivariate Polynomials,” in Proceedings of the 2000 International Symposium on Symbolic and Algebraic Computation (ISSAC 2000), St. Andrews, UK, New York: ACM Press, 2000 pp. 161-168.

[9] F. Kako and T. Sasaki, “Proposal of ‘Effective Floating-Point Number’ for Approximate Algebraic Computation,” ACM SIGSAM Bulletin, 31(3), 1997 p. 31.

[10] K. Nagasaka, “SNAP Package,” (Talk in Japanese), JSSAC 2004, 2004.

[11] K. Nagasaka, “SNAP Package for Mathematica and Its Applications,” in The Ninth Asian Technology Conference in Mathematics (ATCM 2004), Singapore, 2004.

[12] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., Johns Hopkins Studies in Mathematical Sciences, Baltimore: The Johns Hopkins University Press, 1996.

[13] B. T. Smith, “Error Bounds for Zeros of a Polynomial Based upon Gerschgorin’s Theorems,” Journal of the ACM (JACM), 17(4), 1970 pp. 661-674.

[14] A. Terui and T. Sasaki, “‘Approximate Zero-Points’ of Real Univariate Polynomial with Large Error Terms,” Journal (Information Processing Society of Japan), 41(4), 2000 pp. 974-989.

[15] Z. Zeng, “The Approximate GCD of Inexact Polynomials. Part I: A Univariate Algorithm,” Preprint, 2004.

[16] N. Karmarkar and Y. N. Lakshman, “Approximate Polynomial Greatest Common Divisors and Nearest Singular Polynomials,” in Proceedings of the 1996 International Symposium on Symbolic and Algebraic Computation (ISSAC 1996), Zurich, Switzerland, New York: ACM Press, 1996 pp. 35-39.

[17] L. Zhi and W. Wu, “Nearest Singular Polynomials,” Journal of Symbolic Computation, 26(6), 1998 pp. 667-675.

[18] L. Zhi, W. Wu, M.-T. Noda, and H. Kai, “Hybrid Method for Computing the Nearest Singular Polynomials,” MM Research Preprints, 20, 2001 pp. 229-239.

[19] M. A. Hitz and E. Kaltofen, “Efficient Algorithms for Computing the Nearest Polynomial with Constrained Roots,” in Proceedings of the 1998 International Symposium on Symbolic and Algebraic Computation (ISSAC 1998), Rostock, Germany, New York: ACM Press, 1998 pp. 236-243.

[20] M. A. Hitz, E. Kaltofen, and Y. N. Lakshman, “Efficient Algorithms for Computing the Nearest Polynomial with a Real Root and Related Problems,” in Proceedings of the 1999 International Symposium on Symbolic and Algebraic Computation (ISSAC 1999), Vancouver, B.C., Canada, New York: ACM Press, 1999 pp. 205-212.

[21] E. Kaltofen, “Effective Noether Irreducibility Forms and Applications,” Journal of Computer and System Sciences, 50(2), 1995 pp. 274-295.

[22] E. Kaltofen and J. May, “On Approximate Irreducibility of Polynomials in Several Variables,” in Proceedings of the 2003 International Symposium on Symbolic and Algebraic Computation (ISSAC 2003), Philadelphia, PA, New York: ACM Press, 2003 pp. 161-168.

[23] K. Nagasaka, “Towards More Accurate Separation Bounds of Empirical Polynomials,” ACM SIGSAM Bulletin (Formally Reviewed Articles), 38(4), 2004 pp. 119-129.

[24] K. Nagasaka, “Neighborhood Irreducibility Testing of Multivariate Polynomials,” in Proceedings of the Sixth International Workshop on Computer Algebra in Scientific Computing (CASC 2003), Passau, Germany, New York: ACM Press, 2003 pp. 283-292.

[25] K. Nagasaka, “Towards Certified Irreducibility Testing of Bivariate Approximate Polynomials,” in Proceedings of the 2002 International Symposium on Symbolic and Algebraic Computation (ISSAC 2002), Lille, France, New York: ACM Press, 2002 pp. 192-199.

[26] A. Bostan, G. Lecerf, B. Salvy, É. Schost, and B. Wiebelt, “Complexity Issues in Bivariate Polynomial Factorization,” in Proceedings of the 2004 International Symposium on Symbolic and Algebraic Computation (ISSAC 2004), Santander, Spain, New York: ACM Press, 2004 pp. 42-49.

[27] R. M. Corless, P. M. Gianni, B. M. Trager, and S. M. Watt, “The Singular Value Decomposition for Polynomial Systems,” in Proceedings of the 1995 International Symposium on Symbolic and Algebraic Computation (ISSAC 1995), Montreal, Canada, New York: ACM Press, 1995 pp. 195-207.

[28] S. Gao and V. M. Rodrigues, “Irreducibility of Polynomials Modulo via Newton Polytopes,” Journal of Number Theory, 101, 2003 pp. 32-47.

[29] W. M. Ruppert, “Reducibility of Polynomials Modulo ,” Journal of Number Theory, 77, 1999 pp. 62-70.

[30] H. J. Stetter, Numerical Polynomial Algebra, Philadelphia: SIAM, 2004.

[31] K. Nagasaka, “Towards More Accurate Separation Bounds of Empirical Polynomials II,” in Proceedings of the Eighth International Workshop on Computer Algebra in Scientific Computing (CASC 2005), Kalamata, Greece, Lecture Notes in Computer Science, 3718, New York: Springer-Verlag, 2005 pp. 318-329.

[32] K. Nagasaka, “Using Coefficient-Wise Tolerance in Symbolic-Numeric Algorithms for Polynomials,” Sushikisyori, 12(3), 2006 pp. 21-30.

K. Nagasaka, “Symbolic-Numeric Algebra for Polynomials,” The Mathematica Journal, 2012. dx.doi.org/10.3888/tmj.10.3-10.

### About the Author

Kosaku Nagasaka is an assistant professor at Kobe University in Japan. In the summer of 1999, Nagasaka participated in the Wolfram Research student internship program. Since 2001, he has been one of the directors of the Japanese *Mathematica* User Group. His main research topic is symbolic numeric algorithms for polynomials.

**Kosaku Nagasaka**

*Division of Mathematics and Informatics
Department of Science of Human Environment
Faculty of Human Development
Kobe University
Japan
*

*nagasaka@main.h.kobe-u.ac.jp*

*wwwmain.h.kobe-u.ac.jp/~nagasaka/research/snap/index.phtml.en*