
CAGD 2002 notes: Parametric Curve Interrogations



Differential Geometry

In standard differential geometry it is always assumed that a curve is parametrised with respect to arc length. In CAGD it is necessary to face the reality that arc length parametrisations are hard to achieve and expensive to implement, so that we cannot assume that the parameter is arc length.

A useful mental image is to interpret t as time, and think of a point moving along the curve.

What is the derivative of a curve with respect to t ?

The fundamental definition of a derivative is as the limit of the ratio of differences
dP/dt = lim (as dt tends to zero) of ( P(t+dt) - P(t) ) / dt

Now P(t+dt) - P(t) is just a chord. As dt shrinks, P(t+dt) moves along the curve towards P(t), and in the limit the direction of dP/dt becomes that of the tangent. From this definition we see that dP/dt is a displacement divided by a scalar, and therefore a vector.

If t is time, then dP(t)/dt is just the velocity vector, whereas in classical differential geometry the tangent is a unit vector.

Because of this complication CAGD people have invented the distinction between parametric continuity and geometric continuity. If two pieces of curve have the same value of dP/dt where they meet, the junction is said to be C1; if they merely have the same direction and sense of tangent vector, but different magnitudes |dP/dt|, the junction is said to be G1.

A G1 junction can always be turned into a C1 one by just rescaling the parameter on one side of it. The actual shape of a curve does not depend on the rate at which it is traversed.
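For instance, suppose a hypothetical pair of pieces P and Q, each parametrised over [0..1], meet with Q'(0) = k P'(1) for some k > 0. Reparametrising the second piece as R(u) = Q(u/k) gives R'(0) = Q'(0)/k = P'(1), so the junction becomes C1, and no point of the curve has moved.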

What about second derivatives ?

If the first derivative is velocity, the second derivative is acceleration.

This is closely linked to the curvature of a curve. In fact, the component of acceleration perpendicular to the curve is proportional (at a constant speed) to the curvature. The component of acceleration along the curve is not linked to the curvature at all.
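In symbols, writing T for the unit tangent, N for the unit normal, v = |dP/dt| for the speed and k for the curvature, the standard decomposition of the acceleration is

d2P/dt2 = (dv/dt) T + k v^2 N

so at constant speed the whole of the acceleration is the normal component k v^2, proportional to the curvature.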

We draw a similar distinction between C2 and G2 continuity. In G2 continuity the actual curvature of the curve is the same on both sides of a junction, while in C2 the second derivative vector is the same. A good intuitive mental image of a G2 junction is a car driving round a constant radius track, where the driver suddenly puts his foot down on the accelerator. The acceleration vector changes, but the curvature of the path does not.


Basic Interrogations (the API)

In software design it is considered good practice to restrict dependence on the detail of the representations of objects to a small set of kernel methods related to each object.

In the case of parametric curves we can indeed do this, limiting detailed knowledge of the basis functions and the coefficients to just a few routines:
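As an illustration, a minimal kernel might expose just the parameter range, the point, and its derivatives; the names below are hypothetical, not those of any particular system:

class Curve:
    """A sketch of a minimal kernel interface on which the
    interrogations below can be built, keeping all knowledge of
    the basis functions and coefficients inside the implementation."""

    def domain(self):
        """Return the parameter range as a pair (t_min, t_max)."""
        raise NotImplementedError

    def point(self, t):
        """Return the point P(t) as a tuple (x, y, z)."""
        raise NotImplementedError

    def derivative(self, t, order=1):
        """Return the derivative vector dP/dt (order 1),
        d2P/dt2 (order 2), ... at parameter t."""
        raise NotImplementedError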


Graphics

Drawing a Curve

There are two stages in this process. One is to calculate a sequence of points along the curve, which will then be joined by straight line segments by the graphics device. The other is to apply an appropriate transformation so that the required view of the curve is seen.

Because of the coordinate system invariance of the curves that we actually use, we can carry out these two parts of the process in either order: either transform the representation to camera coordinates first and then render, or calculate points first and then map them to camera coordinates afterwards. For drawing curves the former is often more efficient, but for other interrogations the balance may favour the other approach.

Here we consider only the rendering process, of finding a sequence of points along the curve. There are three approaches:

We call the second method marching and the third refinement or homotopy.
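As a sketch of the first and simplest possibility, assuming it to be evaluation at equal parameter steps (the kernel interface is the hypothetical one above):

def polyline(curve, n):
    """Evaluate n+1 points at equal parameter steps; the graphics
    device then joins them with straight line segments. Marching
    and refinement replace the fixed step by an adaptive one."""
    t0, t1 = curve.domain()
    return [curve.point(t0 + (t1 - t0) * i / n) for i in range(n + 1)]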


Incidence Interrogations

We use two examples of interrogations, although there are many more. The ones we elaborate are curve-plane intersection and the closest approach of two curves.

Curve-Plane Intersection

If we use the implicit plane equation f(P) = 0, we can in principle substitute the parametric equation of the curve into it, giving an equation in terms of the scalar t for which we want to find the roots.
f( P( t ) ) = 0

Actually doing the algebraic substitution is generally very little help, as it violates the principle of separation of levels of functionality, and while very good root-finders are available for polynomials, there are rather few for spline functions.

It does help, however, to view the intersection process as a special-purpose root-finder, and steal ideas from the root-finders in the standard libraries.

In this context the two main ones are Newton iteration and bracketing.

Newton iteration is a classical process for improving an estimate of the position of a root. Suppose that we have an estimate t0 for the position of a root; writing the plane as f(P) = [P - Q].N, with Q a point on the plane and N its normal, we can evaluate both the value and the derivative of the function there.
P0 := P(t0)
f0 := [P0 - Q].N
P'0 := dP/dt (t0)
f '0 := P'0.N

From these we can estimate a better position for the root
t1 := t0 - f0 / f '0

This in turn can be used as a starting point for a further improvement, and the process repeated until the value of f is small enough.
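A minimal sketch of this loop, with the plane given by the point Q and normal N used above, and the hypothetical kernel interface from earlier:

def newton_plane(curve, Q, N, t0, tol=1e-12, max_steps=50):
    """Newton iteration for f(t) = [P(t) - Q].N = 0, starting
    from the estimate t0."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    t = t0
    for _ in range(max_steps):
        f = dot([p - q for p, q in zip(curve.point(t), Q)], N)
        if abs(f) < tol:
            return t
        df = dot(curve.derivative(t), N)
        if df == 0.0:
            return None        # tangent parallel to the plane: give up
        t = t - f / df
    return None                # no convergence: see the problems below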

This is a very fast process: under normal circumstances it converges quadratically, which means that the number of correct digits in t doubles at every step. If, as is reasonable, we start with two or three binary digits correct, we can reach full double precision accuracy in about five steps.

However, there are a few problems: finding a good initial estimate, the possibility of divergence, and the presence of multiple roots.

Current practice is pretty ad hoc. We can address the initial estimate question by taking a spatter of points along the line, but who can say how dense that spatter needs to be ? We can address divergence by trapping when the process gives an estimate outside [0..1], but what should we do then ? We can address multiple roots by iterating to a solution from each of the points of the spatter, but we then need to weed out duplicate roots which have resulted from converging to the same place from different starting points. If f' is small, they may still have different values of t when f says that they have both converged. CAD systems have ways of doing something sensible, but these are not appropriate to be taught in a maths course.

One technique which I have not seen in the CAGD literature, but which is used in standard root-finders, is deflation. If we are solving for the roots of a polynomial, we can ensure that any second root found will not be a duplicate by setting up a new function, consisting of the old one divided by the distance of t from the known root. In the case of a polynomial, we can modify the representation to give a new polynomial explicitly of lower degree. In our more general context we cannot do that, but we can do the division numerically every time f and f' are evaluated. This would not work well in the presence of multiple roots, but those can be tested for by looking at the derivatives at any root actually found.
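In code, the numerical division is just a wrapper round the evaluation of f; a sketch:

def deflated(f, roots, eps=1e-30):
    """Wrap a scalar function f so that every known root is divided
    out; a further search on the wrapped function cannot converge
    to a root already found (simple roots only)."""
    def g(t):
        v = f(t)
        for r in roots:
            d = t - r
            v /= d if abs(d) > eps else eps   # guard the division
        return v
    return g

A Newton solver also needs the derivative of the wrapped function, which follows from the quotient rule (g'/g = f'/f - sum of 1/(t - r)); with the bracketing method below the wrapped value alone suffices.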

Bracketing, also called Regula Falsi or Iterated Inverse Linear Interpolation, is a more robust process, though it is not usually as fast. In this process we scan along the curve to find a change of sign of f, and as soon as we have two values of t for which the signs of f differ, we set up the interval between them as [tl..tu].

Now find the value tn at which the chord from ( tl , fl ) to ( tu , fu ) cuts f=0. This must lie within our interval, because the values of f at the ends have different signs.

Evaluate fn at tn. This may be zero, in which case the root is found; otherwise replace by tn whichever of tl, tu has the same sign of f as fn. The width of the interval gets strictly smaller at each iteration, and when it is narrow enough the iteration can stop anyway.
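A sketch of this loop for a general scalar function f (for the plane case, f(t) = [P(t) - Q].N as before):

def regula_falsi(f, tl, tu, tol=1e-12, max_steps=100):
    """Iterated inverse linear interpolation on an interval
    [tl..tu] across which f changes sign."""
    fl, fu = f(tl), f(tu)
    assert fl * fu < 0.0, "the interval must bracket a sign change"
    tn = tl
    for _ in range(max_steps):
        tn = tl - fl * (tu - tl) / (fu - fl)   # chord cuts f = 0 here
        fn = f(tn)
        if abs(fn) < tol or (tu - tl) < tol:
            return tn
        if fn * fl > 0.0:
            tl, fl = tn, fn    # fn has the sign of fl: move the lower bound
        else:
            tu, fu = tn, fn    # fn has the sign of fu: move the upper bound
    return tn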

This process is only linearly convergent, adding a fixed number of correct digits to the precision of t at each stage. That number depends on the second derivative of the curve over the interval. (For almost straight curves it is obviously very fast.)

There are counter-examples where the speed of convergence is very low, but these can be countered by ad hoc variation of the algorithm, or by fitting something other than a straight line (a rational function, linear divided by quadratic, is suggested in the literature, chosen using the derivatives at the ends of the current interval).

Bracketing is much more robust than Newton, because it cannot diverge. It has a problem with double roots, because there is no sign-change to exploit there. The finding of many intersections, rather than just one, is also a somewhat ad hoc process.

The main limitation of bracketing, however, is that it does not generalise to multiple dimensions. Newton is therefore the principal paradigm in practical use.

Closest approach of two curves

Consider the case where we have two curves P(t) and Q(s), where s and t are independent parameters.

We want to find the values of s and t which minimize the value of |P(t) - Q(s)|.

It is easiest to deal with the square of this distance
w = [P(t) - Q(s)].[P(t) - Q(s)]

Focussing on the `usual' case, where this minimum occurs in the interior of both curves (not at an end point), and where there is only a single local minimum, the problem becomes the finding of points at which
f(s,t) = dw/ds = -2dQ/ds.[P(t) - Q(s)] = 0
g(s,t) = dw/dt = 2dP/dt.[P(t) - Q(s)] = 0

If we have an estimate of the pair ( s0 , t0 ) (This is the same thing as a pair of estimates, but I want to think of the pair of parameters as a single object) we can evaluate f, g, df/ds, df/dt, dg/ds, dg/dt there.

Setting up a bivariate Taylor series about ( s0 , t0 ) and truncating after the linear terms gives
f( s0 + ds , t0 + dt ) ~ f + (df/ds) ds + (df/dt) dt = 0
g( s0 + ds , t0 + dt ) ~ g + (dg/ds) ds + (dg/dt) dt = 0
where f, g and their partial derivatives are all evaluated at ( s0 , t0 ).

This pair of linear equations can be solved for ds and dt, which are added to the original estimate to give a new one, and the process iterated. This is essentially Newton's method in a multivariate setting. It works just as well / just as badly as in the univariate case.
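A sketch of one step of this process, using the hypothetical kernel interface above; the partial derivatives of f and g expand by the product rule and involve the second derivatives of the curves, and the common factors of 2 cancel and are dropped:

def newton_step(P, Q, s, t):
    """One bivariate Newton step for f = dw/ds = 0, g = dw/dt = 0,
    where P is a function of t and Q a function of s."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    D = [p - q for p, q in zip(P.point(t), Q.point(s))]   # P(t) - Q(s)
    dP, ddP = P.derivative(t), P.derivative(t, 2)
    dQ, ddQ = Q.derivative(s), Q.derivative(s, 2)
    f = -dot(dQ, D)                     # dw/ds, factor 2 dropped
    g = dot(dP, D)                      # dw/dt, factor 2 dropped
    fs = -dot(ddQ, D) + dot(dQ, dQ)     # df/ds
    ft = -dot(dQ, dP)                   # df/dt
    gs = -dot(dP, dQ)                   # dg/ds
    gt = dot(ddP, D) + dot(dP, dP)      # dg/dt
    det = fs * gt - ft * gs             # solve the 2x2 system by Cramer
    ds = (-f * gt + g * ft) / det
    dt = (-g * fs + f * gs) / det
    return s + ds, t + dt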



This page was last updated on October 17th 2002.
It is maintained by Malcolm Sabin <malcolm@geometry.demon.co.uk>.