Assignment 6 Lecture Notes

The following will act as lecture notes to help you review the material from lecture for the assignment. You can click on the links below to get directed to the appropriate section or subsection.


Section 1: Variational Time Integrators

One common application of computer graphics animation is the simulation of physical systems such as the swinging of a simple pendulum or the flow of air over an airfoil. In these simulations, we want to graphically model the behavior of natural phenomena as they evolve dynamically in time. In order to model the behaviors correctly, we not only need to focus on the graphics that we display but also on the numerical computations that determine the state of a physical system at a particular point in time.

A state of a physical system refers to the conditions that define the system at a particular point in time. For instance, consider a ball bouncing vertically in a frictionless environment. At any given moment in time, we can specify the state of the system with the height of the ball relative to the ground and the vertical velocity of the ball. This essentially gives us a snapshot of the configuration of the system at some time t.

The evolution of a system from the state at time t to the state at time t + Δt is often governed by a system of differential equations that is generally hard or impossible to solve analytically. In many cases, we must resort to using numerical methods to approximate the changes in conditions between states, starting with the initial conditions for the initial state of the system. We refer to these numerical methods specifically as time integration techniques or time integrators.

As an example of a time integrator, recall Euler’s method from introductory calculus: given a position q(t) and velocity v(t) at time t and an acceleration function a(t), we compute the conditions for time t + Δt using the equations:

q(t + Δt) = q(t) + (Δt)v(t)

v(t + Δt) = v(t) + (Δt)a(t)

In addition to Euler’s method, there are many other time integrators out there, though not all of them are equally “reliable” for graphical simulations. In the following subsections, we discuss one family of integrators particularly appropriate for computer graphics, known as variational time integrators, which are derived from Lagrangian mechanics.

Motivating Example: Simple Pendulum

We start off by first examining a simple example of a physical system to demonstrate the different behaviors of different time integrators. Consider a simple pendulum of mass m and length l, swinging in a frictionless medium under the influence of the gravitational acceleration g. The pendulum only has one degree of freedom in its angle, which we will denote q(t). Figure 1 shows a visual of the system:

Figure 1: A simple pendulum with mass m and length l moving in a frictionless environment under the influence of gravity. We denote the angle at a given time t as q(t).

We know from simple physics that the equation of motion for this system is given by:

q̈ = d²q/dt² = -(g/l) sin q

where we use the dot notation to denote derivatives with respect to time.

From here, we could theoretically solve for an analytical solution to the system if we were to make a small angle approximation. However, for the sake of the example, let us assume that an analytical solution is not possible (as is the case for many other systems). We would then need to approximate the time evolution of the system using a time integrator.

Let us first try to model the system using Euler’s method as taught in introductory calculus. We will refer to this method by its more technical name: the explicit Euler method. It is also often called the forward Euler method.

Explicit (Forward) Euler

The explicit Euler method is often taught in introductory calculus classes as simply “Euler’s method.” For our convenience, we will reiterate the method here: given a position q(t) and velocity v(t) at time t and an acceleration function a(t), we compute the conditions for time t + Δt using the update rules:

q(t + Δt) = q(t) + (Δt)v(t)

v(t + Δt) = v(t) + (Δt)a(t)

To apply the explicit Euler method to the simple pendulum system, we first need to discretize the problem and introduce some notation. We break up the time between our initial time t0 and our final time tf of simulation into N equal time steps of length Δt. We denote the time at the k-th time step as tk = t0 + kΔt for k = 0, 1, ..., N. The angle of the pendulum at time tk is denoted as qk. Let v = q̇ = dq/dt so that vk is the angular velocity of the pendulum at time tk.

Using the above notation, we write the explicit Euler update rules as so:

qk+1 = qk + (Δt)vk

vk+1 = vk - (Δt)(g/l) sin qk

These equations are referred to as explicit equations. Given the values qk and vk at time step k, we can use the above update rules to explicitly compute the values qk+1 and vk+1 at the next time step k + 1. Let us now examine the performance of this integrator. Consider Figure 2:

Figure 2: A visual of the behavior of the explicit Euler integrator. On the left, we see that the amplitude of the pendulum increases as the pendulum continues to swing. On the right is a phase portrait of the system with the angle q on the x-axis and the angular velocity v on the y-axis. The dot marks the initial point corresponding to the initial values of q and v. We see from the phase portrait that the values of q and v continuously increase and “spiral outwards” from the initial point. This diagram is taken from [1].

Figure 2 visualizes the behavior of the explicit Euler integrator. On the left is a display of how the amplitude of the pendulum changes as time goes on. We see that the explicit Euler integrator actually causes the amplitude to increase with passing time. The phase portrait on the right conveys this information in a different manner. Here, we see that the angle q, represented by the x-axis, and the velocity v, represented by the y-axis, both continuously increase from their initial values, which we mark with the dot. The outward “spiral” shown in the phase portrait tells us that the pendulum will move further and faster as time goes on, suggesting a numerical instability where the energy of the system “blows up” and goes towards infinity.

This “blowing up” behavior modeled by the explicit Euler integrator is obviously incorrect. In reality, the energy of the simple pendulum system is conserved, and the amplitude of the pendulum should remain constant. The explicit Euler integrator therefore is not a reliable integrator for simulating physical systems.

What causes this “blowing up” behavior? To answer this question, we can examine what exactly goes on in the update rules of the explicit Euler integrator. Both of the update rules are actually first order Taylor approximations. Given qk and vk, we compute qk+1 and vk+1 using the tangent slopes of q and v respectively at time step k. Figure 3 demonstrates this with a visual:

Figure 3: The explicit Euler integrator makes first order Taylor approximations. Given qk and vk, we use the slopes of the tangents at time step k to compute qk+1 and vk+1. The nature of this approximation causes the integrator to accumulate error at each time step, leading to numerical instability.

Let the curve in Figure 3 represent the path of motion of the system in phase space. We then see that using a first order Taylor approximation for qk+1 and vk+1 leads us to a point that is off the actual path of the system. Because this point is off, the subsequent approximation for qk+2 and vk+2, which depends on the tangent at this “off” point, would then lead to another point that is even further off from the actual path of the system. As we continue making these approximations, the error between the approximated values and the true values will accumulate, eventually causing the approximated values to approach infinity. This results in the “blow up” behavior we see in Figure 2.
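To make these update rules concrete, below is a minimal Python sketch of the explicit Euler integrator applied to the simple pendulum. The function and variable names are our own illustration and are not part of any assignment code.

import math

def explicit_euler_pendulum(q0, v0, dt, num_steps, g=9.8, l=1.0):
    """Simulate the simple pendulum with the explicit (forward) Euler method.

    q0, v0    -- initial angle and angular velocity
    dt        -- time step size (Delta t)
    num_steps -- number of time steps N
    Returns the lists of angles and angular velocities at every time step.
    """
    qs, vs = [q0], [v0]
    q, v = q0, v0
    for _ in range(num_steps):
        # Both updates use only the values at the current time step.
        q_next = q + dt * v
        v_next = v - dt * (g / l) * math.sin(q)
        q, v = q_next, v_next
        qs.append(q)
        vs.append(v)
    return qs, vs

# Plotting the returned angles reproduces the growing amplitude seen in Figure 2.
qs, vs = explicit_euler_pendulum(q0=1.0, v0=0.0, dt=0.05, num_steps=2000)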

Knowing that the explicit Euler method is unreliable, let us try out a different time integrator: the implicit Euler method. This method is also often called the backward Euler method.

Implicit (Backward) Euler

The implicit Euler method is another popular time integrator that is a counterpart to the explicit Euler method. The implicit Euler method takes the update rules of the explicit Euler method and evaluates their right hand sides at the next time step. This causes the update rules to take the form of a set of non-linear equations.

Using the same notation as before, we write the implicit Euler update rules for the simple pendulum system as so:

qk+1 = qk + (Δt)vk+1

vk+1 = vk - (Δt)(g/l) sin qk+1

These equations are referred to as implicit equations as opposed to explicit equations and require the use of a non-linear solver to compute qk+1 and vk+1 given qk and vk. We examine the behavior of implicit Euler in Figure 4:

Figure 4: A visual of the behavior of the implicit Euler integrator. On the left, we see that the amplitude of the pendulum decreases as the pendulum continues to swing. On the right is a phase portrait of the system with the angle q on the x-axis and the angular velocity v on the y-axis. The dot marks the initial point corresponding to the initial values of q and v. We see from the phase portrait that the values of q and v continuously decrease and “spiral inwards” from the initial point. This diagram is taken from [1].

Figure 4 shows the behavior of the implicit Euler integrator similar to how Figure 2 shows the behavior of the explicit Euler integrator. We see on the left that the implicit Euler integrator causes the amplitude of the pendulum to decrease with passing time. The phase portrait tells us that q and v both continuously decrease from their initial values, forming an inward “spiral” that suggests numerical dissipation. That is, the energy of the system is artificially dampened by the integrator and goes towards 0.

While the implicit Euler method is not numerically unstable like the explicit Euler method, it also does not correctly conserve the energy of the system. This is not too surprising, since the implicit Euler method is essentially a reverse explicit Euler (hence the name “backward Euler”). We can see this by rearranging the update rules for implicit Euler as so:

qk = qk+1 - (Δt)vk+1

vk = vk+1 + (Δt)(g/l) sin qk+1

The above equations show that the implicit Euler is just explicit Euler going backwards in time. Thus, implicit Euler also suffers from error due to first order approximations and is not a reliable integrator for simulating physical systems.
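To see what solving the implicit update actually involves, here is a sketch of one implicit Euler step for the pendulum in Python, using a simple fixed-point iteration as the non-linear solver (Newton’s method is another common choice; the names here are our own):

import math

def implicit_euler_step(q, v, dt, g=9.8, l=1.0, iters=50):
    """One implicit (backward) Euler step for the simple pendulum.

    Solves q_{k+1} = q_k + dt * v_{k+1}
           v_{k+1} = v_k - dt * (g/l) * sin(q_{k+1})
    by fixed-point iteration, starting from an explicit Euler guess.
    """
    q_next = q + dt * v  # initial guess for q_{k+1}
    v_next = v
    for _ in range(iters):
        v_next = v - dt * (g / l) * math.sin(q_next)
        q_next = q + dt * v_next
    return q_next, v_next

For small enough time steps the iteration converges; stepping this function repeatedly reproduces the decaying amplitude seen in Figure 4.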

What integrator should we use then if both explicit and implicit integrators are unsatisfactory?

Symplectic (Semi-Implicit) Euler

There is an integrator known as the symplectic Euler method or semi-implicit Euler method, which involves a “strange mix” of the explicit and implicit update rules. The update rules as applied to the simple pendulum system are:

vk+1 = vk - (Δt)(g/l) sin qk

qk+1 = qk + (Δt)vk+1

The idea here is to first explicitly compute vk+1 and then implicitly use vk+1 to compute qk+1. The results of this integrator are shown in Figure 5:

Figure 5: A visual of the behavior of the symplectic Euler integrator. On the left, we see that the amplitude of the pendulum remains constant as the pendulum continues to swing. On the right is a phase portrait of the system with the angle q on the x-axis and the angular velocity v on the y-axis. The dot marks the initial point corresponding to the initial values of q and v. We see from the phase portrait that the values of q and v are conserved. This diagram is taken from [1].

Similar to Figures 2 and 4, Figure 5 shows the behavior of the symplectic Euler integrator. We see from both the pendulum diagram and the phase portrait that the symplectic Euler method does correctly conserve the energy of the system.

The question now is why. Why does the symplectic Euler method produce reliable results? Also, how would we have come up with this “strange mix” of an integrator?

We will find in the following sections that the reason the symplectic Euler method correctly conserves the energy of a system is that it is derived from a principle from Lagrangian mechanics that takes into account the entire path (i.e. all the states) that the system takes to go from its initial state to its final state. In comparison, neither the explicit nor the implicit Euler method considers the entire path of the system; instead, they both only consider the time evolution of the system one state at a time.
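For comparison with the two sketches above, here is the symplectic Euler step for the pendulum in Python; note that the position update uses the newly computed velocity (again, the names are our own):

import math

def symplectic_euler_step(q, v, dt, g=9.8, l=1.0):
    """One symplectic (semi-implicit) Euler step for the simple pendulum."""
    # First compute the new velocity explicitly...
    v_next = v - dt * (g / l) * math.sin(q)
    # ...then use the *new* velocity to update the position.
    q_next = q + dt * v_next
    return q_next, v_next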

Lagrangian Mechanics

We present, in this section, a brief overview of Lagrangian mechanics as necessary for the understanding of creating reliable time integrators. Do note though that, because this class is not a physics class, our overview is not very rigorous. For those students who are interested in learning Lagrangian mechanics in depth, we recommend taking an advanced classical mechanics course. Additionally, [2] also provides a rigorous, in-depth analysis of what we cover in the following sections and more (although the work does contain a few typos).

The Lagrangian and the Euler-Lagrange Equations

At the root of Lagrangian mechanics lies the Lagrangian, which is defined to be the difference between the kinetic and potential energies of a physical system:

L(q, q̇) = T(q̇) - U(q)

where L denotes the Lagrangian, T denotes kinetic energy, and U denotes potential energy. q and q̇ denote our position and velocity state vectors respectively.

We define what is called the action, denoted by S(q), as:

S(q) = ∫_{t0}^{tf} L(q(t), q̇(t)) dt

To get an intuitive understanding of what the action is, let us examine for a moment the simple physical system involving a ball thrown from position q0 to position qf. Physics causes the ball to take a certain trajectory or path {q(t), q̇(t)} as it moves from q0 to qf. We can compute the Lagrangian at each point along this path, and summing (more precisely, integrating) all the computed Lagrangian values over time gives us the action.

The reason why we introduce this concept of “action” is because of a key principle in physics known as Hamilton’s least action principle, which states that the action of a physical system is always at an extremum (e.g. a minimum). Now, before we discuss what it means for the action to be at an extremum, let us step back and consider what it means for a simple function f(x) to be extremized at a point xt. For f(xt) to be an extremum, we know that f′(xt) = 0. From this, we might be tempted to say that for the action, S(q), to be at an extremum, the derivative of the action with respect to q must be 0. The issue with this statement is that, unlike x, q does not just represent a single value. Rather, because the action is defined to be an integral over the path from q(t0) to q(tf), q is actually a path q(t) between the initial and final states. This makes examining the extremum of the action a bit more complicated than what we are used to.

The key to understanding the impact of the least action principle lies in what we call variational calculus. Of course, a full overview of variational calculus is beyond the scope of this course. For our purposes, we simply need to know that the analog of the derivative that we need for examining the extremum of the action is known as the variation, which we denote with δ. Conceptually, a variation of a path q, denoted as δq, is an infinitesimal perturbation (i.e. deviation) to the path at each point except at the endpoints, where the perturbations are null. Similar to how f′(x) = 0 when f(x) is at an extremum, δS(q) = 0 when S(q) is at an extremum.

We compute the variation of the action as so:

δS(q) = δ ∫_{t0}^{tf} L(q, q̇) dt = ∫_{t0}^{tf} [ (∂L/∂q) δq + (∂L/∂q̇) δq̇ ] dt

Using integration by parts yields:

δS(q) = ∫_{t0}^{tf} [ ∂L/∂q - d/dt (∂L/∂q̇) ] δq dt + [ (∂L/∂q̇) δq ]_{t0}^{tf}

Now, because the perturbations at the endpoints of q have to be null, we have that δq(t0) = δq(tf) = 0. This causes the rightmost term of the right-hand side to vanish. Following the least action principle, we then set δS(q) = 0 and obtain:

∫_{t0}^{tf} [ ∂L/∂q - d/dt (∂L/∂q̇) ] δq dt = 0

From here, we invoke what is called the fundamental lemma of calculus of variations, which tells us that, because the above integral equals 0 for all δq, the part of the integrand in the square brackets also equals 0. Applying this lemma gives us the following important formula:

∂L/∂q - d/dt (∂L/∂q̇) = 0

The above formula is known as the Euler-Lagrange equation and is essentially the core of Lagrangian mechanics. For a given Lagrangian of a physical system, we can use the Euler-Lagrange equations to derive the equations of motion of the system.

A Simple Application

As a demonstration of the power of the Euler-Lagrange equations, consider a physical system involving a particle of mass m moving under the influence of a potential function U(q), where we use q(t) to denote the position of the particle. The Lagrangian of the system is simply:

L(q, q̇) = (1/2) m q̇² - U(q)

Plugging L into the Euler-Lagrange equations yields:

-∂U/∂q - m q̈ = 0

which we can express as:

-∇U(q) = F = m q̈

And we see that we obtain Newton’s law of F = ma, which is what we would expect as the equation of motion for this system.

The Discrete Lagrangian

So what about Lagrangian mechanics allows us to create reliable integrators? The answer to this question lies in the fact that the action is a function of the entire path that the given system takes from its initial state to its final state. This fact causes the action to encode the global behavior (i.e. behavior throughout all of time) of the system. And because the Euler-Lagrange equations are derived directly from the action, they inherently also encode the global behavior of the system. As a result, any equations of motion derived from the Euler-Lagrange equations would also inherently take into account the global behavior.

Now, let us say that we manage to construct a complete, discrete equivalent of the action that – like its continuous counterpart – encodes the global behavior of the given system. The update rules that we derive from this discrete action would then also inherently take into account the global behavior. The following subsection details how we construct this discrete action and subsequently derive from it the discrete Euler-Lagrange equations.

Deriving the Discrete Euler-Lagrange (DEL) Equations

Let us first consider how we compute integrals in the discrete setting in general. Given an integral of a function f(t) from t0 to tf, we can approximate the integral by summing up rectangle approximations of the area under the curve formed by f at discrete time steps of length Δt. Let N be the number of time steps. The area of the rectangle at time step k for k = 0, 1,...,N - 1 is given by (Δt)f(tk). The overall integral is then computed via:

∫_{t0}^{tf} f(t) dt ≈ Σ_{k=0}^{N-1} (Δt) f(tk)

We use the same thought process as above to construct a discrete equivalent of the action. We first define the discrete Lagrangian, denoted by Ld, as the area of a discrete rectangle given by:

Ld(qk, qk+1, Δt) = (Δt) L(qk, (qk+1 - qk)/Δt)

where L denotes the continuous Lagrangian of the system of interest. Note that we use above a discrete approximation of q̇, which, for a discrete time step, can be written as q̇ ≈ (qk+1 - qk)/Δt. Hence, the discrete Lagrangian ends up being a function of two consecutive states, qk and qk+1, and the size of the time step, Δt.

We then write the discrete action as:

Sd = Σ_{k=0}^{N-1} Ld(qk, qk+1, Δt)

Following the continuous derivation of the Euler-Lagrange equations, we express the variation of the discrete action as:

δSd = Σ_{k=0}^{N-1} [ D1Ld(qk, qk+1, Δt) δqk + D2Ld(qk, qk+1, Δt) δqk+1 ]

where we use D1Ld to denote the derivative of Ld with respect to the first argument, qk, and D2Ld to denote the derivative of Ld with respect to the second argument, qk+1.

We can rearrange the sum in δSd to obtain:

δSd = Σ_{k=1}^{N-1} [ D2Ld(qk-1, qk, Δt) + D1Ld(qk, qk+1, Δt) ] δqk

+ D1Ld(q0, q1, Δt) δq0 + D2Ld(qN-1, qN, Δt) δqN

Remember that we require the variations at the boundary points, q0 and qN, to be 0. This eliminates the two terms outside the summation. From here, we follow the continuous derivation of the Euler-Lagrange equations and set the variation of the discrete action to 0:

δSd = Σ_{k=1}^{N-1} [ D2Ld(qk-1, qk, Δt) + D1Ld(qk, qk+1, Δt) ] δqk = 0

Because the sum is 0 for all δqk, the expression within the sum must be 0. Hence, we obtain:

D1Ld(qk,qk+1, Δt) + D2Ld(qk-1,qk, Δt) = 0 (1)

We refer to the above as the discrete Euler-Lagrange (DEL) equations. Given two consecutive states (i.e. qk-1 and qk), we can use the discrete Euler-Lagrange equations to solve for the next state (i.e. qk+1).

We can, however, go further and derive a more useful form of the DEL equations. To do so, let us first consider the DEL equations for the simple Lagrangian:

L(q, q̇) = T(q̇) - U(q) = (1/2) m q̇² - U(q)

The discrete Lagrangian for this system is:

Ld(qk, qk+1, Δt) = Δt [ (1/2) m ((qk+1 - qk)/Δt)² - U(qk) ]

Observe what happens when we compute:

D2Ld(qk-1, qk, Δt) = m (qk - qk-1)/Δt

Note how the right-hand side of the equation is just the momentum at time step k, which we will denote pk. It turns out that, in general, D2Ld(qk-1,qk, Δt) is indeed just pk for any Lagrangian. The full proof of this is outside the scope of this class, but we can at least see from the above example how this can be true.

Now, from the above fact and the DEL equations, we have:

pk = D2Ld(qk-1,qk, Δt) = -D1Ld(qk,qk+1, Δt)

The above means that we can write the DEL equations in the following form:

pk = -D1Ld(qk,qk+1, Δt) (2)
pk+1 = D2Ld(qk,qk+1, Δt) (3)

Consider for a moment the power of the above two equations. Given the position and momentum of a physical system at some state k, we can solve the above two equations for the position and momentum of the next state k + 1. Unlike the previous form of the DEL equations, which required two consecutive states to compute the next state, this form only needs one state (i.e. the state at the current time step) to compute the next state.

Equations (2) and (3) are the main DEL equations and are what we use to create reliable time integrators. Because they were derived from the action, which takes into account the entire path of the given system, the DEL equations and any time integrators derived from them inherently account for the global behavior of the system. We refer to the time integrators that we derive from the DEL equations as variational time integrators.
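To sketch how equations (2) and (3) turn into a time-stepping loop, the Python snippet below implements a generic variational integrator: given qk and pk, it solves equation (2) for qk+1 with a few Newton iterations (using a finite-difference derivative) and then evaluates equation (3) for pk+1. The derivative functions D1Ld and D2Ld are supplied by the caller; all names are our own illustration, and the pendulum example at the end anticipates the derivation in the next subsection.

import math

def variational_step(q_k, p_k, D1Ld, D2Ld, dt, iters=20, eps=1e-8):
    """One step of a variational integrator built from the DEL equations.

    D1Ld(qk, qk1, dt), D2Ld(qk, qk1, dt) -- derivatives of the discrete
    Lagrangian with respect to its first and second arguments.
    Solves   p_k = -D1Ld(q_k, q_{k+1}, dt)   for q_{k+1}   (equation (2)),
    then     p_{k+1} = D2Ld(q_k, q_{k+1}, dt)              (equation (3)).
    """
    def residual(q_next):
        # Residual of equation (2); its root is q_{k+1}.
        return p_k + D1Ld(q_k, q_next, dt)

    q_next = q_k  # initial guess
    for _ in range(iters):
        r = residual(q_next)
        dr = (residual(q_next + eps) - r) / eps  # finite-difference slope
        q_next -= r / dr
    p_next = D2Ld(q_k, q_next, dt)  # equation (3)
    return q_next, p_next

# Example: discrete Lagrangian derivatives for the pendulum (m = l = 1, g = 9.8).
m, l, g = 1.0, 1.0, 9.8
D1 = lambda qk, qk1, dt: -m * l * l * (qk1 - qk) / dt - dt * m * g * l * math.sin(qk)
D2 = lambda qk, qk1, dt:  m * l * l * (qk1 - qk) / dt
q, p = 1.0, 0.0
q, p = variational_step(q, p, D1, D2, dt=0.05)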

Deriving Symplectic Euler for the Simple Pendulum

To demonstrate the power of the DEL equations, we will derive the symplectic Euler integrator for the simple pendulum system that we analyzed earlier.

We start by writing the continuous Lagrangian for the system:

L = T - U = (1/2) m (ẋ² + ẏ²) - (-mgy)

Using the relationships:

x = l sin θ

y = l cos θ

ẋ = l cos(θ) θ̇

ẏ = -l sin(θ) θ̇

we can rewrite the Lagrangian in terms of θ and θ̇ as:

L = (1/2) m l² θ̇² + mgl cos θ

The discrete Lagrangian is then:

Ld(θk, θk+1, Δt) = Δt [ (1/2) m l² ((θk+1 - θk)/Δt)² + mgl cos θk ]

Computing the respective derivatives gives us:

pk = -D1Ld = -Δt [ -ml² (θk+1 - θk)/(Δt)² - mgl sin θk ]

pk+1 = D2Ld = Δt [ ml² (θk+1 - θk)/(Δt)² ]

Now, recall from classical mechanics that the angular momentum of a simple pendulum is p = ml2ω, where ω is the angular velocity. Substituting this definition of p into the above equations and doing some algebra simplifies the above two equations into the following equations:

ωk = (θk+1 - θk)/Δt + (Δt)(g/l) sin θk

ωk+1 = (θk+1 - θk)/Δt

A little rearranging and substitution finally gives us:

ωk+1 = ωk - (Δt)(g/l) sin θk

θk+1 = θk + (Δt)ωk+1

which are exactly the equations that we introduced previously as the symplectic Euler integrator!


Section 2: Keyframe Interpolation

When we animate scenes in computer graphics, we often use a technique called keyframe interpolation to generate the animation. The idea of keyframe interpolation is best introduced with an example, so let us consider the following scenario.

We want to create a minute-long, traditional animation where each frame is hand-drawn. For the animation to appear smooth, we need to display around 30 frames per second. However, 30 frames per second for 60 seconds results in a grand total of 1800 frames to be drawn! And that is only for a minute-long animation. The amount of time and manpower it would take to generate all the frames necessary for an even longer animation would be enormous. However, with the advent of computer animation, we can now generate long animations by drawing only a small number of select frames called keyframes. We draw the keyframes, which capture the most important scenes in our animation, and generate the rest of the animation by interpolating or, in other words, using computer algorithms to “fill in the gaps” between the keyframes.

Keyframes

One particular way of specifying a keyframe is with the positions and orientations of all the points in the scene of interest. We specify the position of a point as a translation from an arbitrarily chosen origin and the orientation of a point as a rotation by an angle about some axis. In addition, we also find it convenient to specify the scaling of a point as another characterization of the point’s configuration in a scene. When we interpolate between keyframes, we actually interpolate the change in translation, rotation, and scaling of each point in the scene between the frames.

Splines

We most often interpolate across keyframes by piecing together individual polynomial curves to form an overall, piecewise polynomial function called a spline. The spline goes through and connects all corresponding points across all keyframes. To see how this process works, let us consider piecing together a spline based on linear interpolation between keyframes.

Let us say that we currently have two keyframes, K1 and K2, that we want to linearly interpolate between. In K1, we have a point that is positioned in the scene by the translation vector p0. In K2, that same point is now positioned in the scene by the translation vector p1. The scenes in both keyframes use the same coordinate system. To linearly interpolate the change in position of the point between K1 and K2, we need to linearly interpolate between the translation vectors p0 and p1.

Our linear interpolation scheme proceeds as follows. First, we should note that we need to interpolate the components of the translation vectors one at a time. However, for simplicity in the notation, we will still use the variables, p0 and p1, to denote the values that we are interpolating. Now, consider a line segment connecting p0 to p1. We parameterize this line segment over the unit domain with the following function:

f(u) = (1 - u)p0 + up1

where u ∈ [0, 1]. We can then use f(u) to compute the values between p0 and p1 to “fill in” the motion that our point takes as it moves between the two keyframes.

Now, for reasons that we will see later, we find it convenient to consider the following alternative formulation of f(u):

f(u) = a0 + ua1

for coefficients a0 and a1. We notice that the above formulation is the canonical representation of a first degree polynomial (i.e. a line). We can rewrite this formulation using vector notation. Let u = [1,u] and a = [a0,a1]. Then, we have:

f(u) = u a

What exactly is a though? Well, we know that f(0) = p0 and f(1) = p1. We also know that for p0, we have u = [1, 0]. And for p1, we have u = [1, 1]. With this, we can write the following system of equations:

[ p0 ]   [ 1  0 ] [ a0 ]
[ p1 ] = [ 1  1 ] [ a1 ]

Let us write the above system as:

p = Ca

where we call C the constraint matrix. Note how we can solve for a after finding the inverse of C. We denote the inverse of C with the variable B and call it the basis matrix. We can now evaluate the curve between p0 and p1 as:

f(u) = uBp

We repeat the above process for each pair of values pi and pi+1 in consecutive keyframes Ki and Ki+1. In the end, we obtain multiple line segments that connect sequentially to form a spline that goes through all points p0,p1,p2,...,pfinal. We can use the pieces of the spline to compute the components of all translation vectors needed between keyframes to generate the animation across all keyframes.
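As a small illustration of this process, here is a Python/NumPy sketch that builds the constraint and basis matrices for the linear case and evaluates f(u) = uBp for one segment (the names are our own):

import numpy as np

# Constraint matrix: each row is the vector u = [1, u] evaluated at a constraint
# (u = 0 gives p0, u = 1 gives p1).
C = np.array([[1.0, 0.0],
              [1.0, 1.0]])
B = np.linalg.inv(C)  # basis matrix

def linear_segment(p0, p1, u):
    """Evaluate f(u) = u B p for a single linear segment, with u in [0, 1]."""
    p = np.array([p0, p1])
    u_vec = np.array([1.0, u])
    return u_vec @ B @ p

# Example: halfway between p0 = 2 and p1 = 6 gives 4, as expected.
print(linear_segment(2.0, 6.0, 0.5))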

Now, at this point, one might ask, “why did we have to make this process so complicated?” After all, our original, parametric formulation would have been sufficient for the interpolation between two points. The answer is that while this process is unnecessary for simple linear interpolation, the whole concept of computing constraint and basis matrices becomes useful when we interpolate using more complicated curves.

Furthermore, we can imagine that simply using linear interpolation between keyframes would not produce smooth motion. Consider the shape of a spline formed from only line segments. Line segments do not transition smoothly into one another; rather, they form sharp angles at each junction. These sharp angles cause sudden changes in the motion of the point as we transition between keyframes. As a result, we do not obtain smooth motion from linear splines and must instead look towards more complicated curves for interpolation.

Cardinal Splines (Catmull-Rom Splines)

Before we continue, let us stop to consider the nature of a spline that would produce smooth interpolation across keyframes. We know that a spline consisting of line segments does not work because the segments form sharp angles at each junction. What is a condition that we can enforce to prevent our spline from having sharp angles?

The answer is to require the first derivatives of two connecting pieces of our spline to be equal at the junction (i.e. C1 continuity). This means that we must, at the very least, use quadratics for interpolation.

It turns out that cubics are the most commonly used polynomials for keyframe interpolation. In particular, we often use a specific class of cubics called cardinal cubic curves. We call the overall resulting curve a cardinal spline. Cardinal splines have the nice property of C1 continuity (i.e. their first derivatives are continuous everywhere), which is exactly the condition we asked for above.

We form a cardinal cubic curve in the following manner. First, let us denote the two keyframes that we are interpolating between as Ki and Ki+1. Let pi and pi+1 be the values (e.g. components of translation or scaling vectors) that we are interpolating between the two keyframes. To form a cardinal cubic curve, we need to also consider the keyframes, Ki-1 and Ki+2 and their corresponding values, pi-1 and pi+2. Thus, the cardinal cubic curve has four control points indexed from i - 1 to i + 2.

We let the derivative at the beginning of our curve segment be a scaling of the vector between the first and third control points. And we let the derivative at the end of our curve segment be a scaling of the vector between the second and fourth control points. Figure 1 below provides a visual of this.

Figure 1: A visual of the formation of a cardinal curve (in green) used to interpolate from pi to pi+1. Four control points are necessary. The derivative at the beginning of the curve is a scaling of the vector between the first and third control points. The derivative at the end of the curve is a scaling of the vector between the second and fourth control points.

Let us take a moment to convince ourselves that the derivatives of two consecutive cardinal curves match at the point at which the curves connect. Consider a cardinal curve c1 between points pi and pi+1 connecting to a cardinal curve c2 between points pi+1 and pi+2. The two curves connect at point pi+1. Now, the derivative of c1 at pi+1 is given (up to a scale factor) by pi+2 - pi, the difference between the second and fourth control points of c1. What is the derivative of c2 at pi+1? It would be the difference between the first and third control points of c2, scaled by the same factor. The first and third control points would be pi and pi+2 respectively; hence, the derivative of c2 at pi+1 would also be pi+2 - pi. Thus, the derivatives match at point pi+1. Similar logic can be used to confirm that the derivatives at other junctions along a cardinal spline also match.

To find a mathematical representation for the cardinal curve, we undergo a process similar to the one we took for our linear interpolation example above. We first represent the cardinal curve between pi and pi+1 with a function, f(u), that is parameterized over the unit domain. Our constraints for the curve then can be expressed as:

f(0) = pi

f(1) = pi+1

f′(0) = s(pi+1 - pi-1)

f′(1) = s(pi+2 - pi)

where s is the scaling factor for the derivative vectors. We formally define s to be:

s = (1/2)(1 - t)

where t ∈ [0, 1) is known as the tension parameter and is an arbitrarily set value that controls the bending of the curve. The closer we set t to 1, the flatter the curve becomes around the control points. We often, however, simply set t to 0. We call the set of splines for t = 0 Catmull-Rom splines.

Since cardinal curves are cubics, we will consider the canonical representation of a third degree polynomial:

f(u) = a0 + u a1 + u² a2 + u³ a3 = u a

for u = [1, u, u², u³] and a = [a0, a1, a2, a3]. We want to now write a system of equations in the form of p = Ca. To do so, we consider our constraints and solve for the control points:

pi-1 = f(1) - (1/s) f′(0)

pi = f(0)

pi+1 = f(1)

pi+2 = f(0) + (1/s) f′(1)

Substituting our canonical cubic polynomial into the above set of equations yields:

pi-1 = a0 + (1 - 1/s) a1 + a2 + a3

pi = a0

pi+1 = a0 + a1 + a2 + a3

pi+2 = a0 + (1/s) a1 + (2/s) a2 + (3/s) a3

which we can express in the form of p = Ca. Taking the inverse of the resulting constraint matrix yields the following basis matrix:

B = C⁻¹ =
[  0      1       0      0 ]
[ -s      0       s      0 ]
[ 2s    s - 3   3 - 2s   -s ]
[ -s    2 - s   s - 2     s ]

For the case where t = 0 (i.e. for Catmull-Rom splines), we have:

B = C⁻¹ = (1/2) ·
[  0    2    0    0 ]
[ -1    0    1    0 ]
[  2   -5    4   -1 ]
[ -1    3   -3    1 ]

Hence, our cardinal curve function can be expressed as:

f(u) = uBp

Given four control points expressed as p = [pi-1,pi,pi+1,pi+2] and an appropriate t parameter, we can use the above to compute the interpolated values between pi and pi+1. This interpolation method smoothly interpolates the components of both translation and scaling vectors.
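The following Python/NumPy sketch evaluates one Catmull-Rom segment (t = 0, so s = 1/2) using the basis matrix above; the function and variable names are our own:

import numpy as np

# Catmull-Rom basis matrix (cardinal spline with tension t = 0).
B_CATMULL_ROM = 0.5 * np.array([[ 0.0,  2.0,  0.0,  0.0],
                                [-1.0,  0.0,  1.0,  0.0],
                                [ 2.0, -5.0,  4.0, -1.0],
                                [-1.0,  3.0, -3.0,  1.0]])

def catmull_rom_segment(p_prev, p_i, p_next, p_next2, u):
    """Evaluate f(u) = u B p between p_i and p_next, with u in [0, 1]."""
    p = np.array([p_prev, p_i, p_next, p_next2])
    u_vec = np.array([1.0, u, u * u, u ** 3])
    return u_vec @ B_CATMULL_ROM @ p

# Sanity check: the curve passes through p_i at u = 0 and p_next at u = 1.
assert np.isclose(catmull_rom_segment(0.0, 1.0, 2.0, 3.0, 0.0), 1.0)
assert np.isclose(catmull_rom_segment(0.0, 1.0, 2.0, 3.0, 1.0), 2.0)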

To interpolate the rotation quaternions, simply use this interpolation method on each component of the keyframe rotation quaternions, then normalize the result. Refer back to our Assignment 3 lecture notes for details about quaternions.
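A sketch of that per-component interpolation followed by normalization might look like the following, reusing the catmull_rom_segment function from the sketch above and representing each quaternion as a NumPy array of four components (this helper is our own illustration, not a prescribed interface):

import numpy as np

def interpolate_quaternion(q_prev, q_i, q_next, q_next2, u):
    """Interpolate the four keyframe quaternions component-wise, then normalize."""
    q = np.array([catmull_rom_segment(q_prev[c], q_i[c], q_next[c], q_next2[c], u)
                  for c in range(4)])
    return q / np.linalg.norm(q)  # project the result back onto the unit sphere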

There are other methods for rotation interpolation, including Spherical Linear Interpolation (SLERP) [3] [4], which interpolates quaternions along the great arc of a sphere. However, SLERP does not have C1 or C2 continuity because it only uses the nearest two control points for interpolation. Thus, it can produce awkward motion in comparison to more sophisticated methods [5].

B-splines and NURBS

Another major class of splines is the B-spline, which is a type of non-interpolating spline. This means that a B-spline is not required to pass through its control points. An extension of B-splines called NURBS (nonuniform rational B-splines) allows the control points to be non-uniformly spaced apart. This makes it possible to add control points to a NURBS curve without affecting the shape of the curve, which is extremely useful in modeling for allowing more local control of the curve. NURBS can also be used to define smooth surfaces in 3D using a technique called lofting, which combines two NURBS curves into a single surface. If you are interested in reading about the details behind NURBS and B-splines in general, feel free to check out these notes on NURBS and NURBS surfaces from previous years of CS171. For an interactive website which allows you to see how B-splines and NURBS splines change when you move the control points, click here and here.


References

[1] http://www.geometry.caltech.edu/pubs/SD06.pdf

[2] http://thesis.library.caltech.edu/2492/1/west_thesis.pdf

[3] http://dl.acm.org/citation.cfm?id=325242

[4] https://en.wikipedia.org/wiki/Slerp

[5] http://www.geometry.caltech.edu/pubs/HTZBD10.pdf


Written by Kevin (Kevli) Li (Class of 2016).