
Lecture 9, Review 1


Go over the Web pages for Lectures 1 through 8.
Work Homework 1 and Homework 2.
Read the pages in the textbook that are on the syllabus.

Open book, open notes exam.
Multiple choice and short answer.

Example questions and answers will be presented in class.
Some things you should know for the exam:

IEEE float has 6 or 7 decimal digits of precision

IEEE double has 15 or 16 decimal digits of precision

Adding or subtracting numbers of widely different magnitudes
causes a loss of precision.
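
A short Python sketch of both facts (Python's float is an IEEE double;
the values are only illustrative):

    import struct

    # Python's float is an IEEE double: about 15-16 significant digits.
    # Adding numbers of very different magnitude loses the smaller one:
    print((1.0e16 + 1.0) - 1.0e16)   # 0.0, not 1.0: the 1.0 was lost
    print((1.0e10 + 1.0) - 1.0e10)   # 1.0: both magnitudes still fit

    # Round-trip through IEEE single precision to see the 6-7 digit limit:
    x = 123456789.0
    single = struct.unpack('f', struct.pack('f', x))[0]
    print(single)                    # 123456792.0: only about 7 digits survive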

RMS error is root mean square error, a reasonable intuitive measure

Maximum error may be the most important measure for some applications

Average error is not very useful
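
A Python sketch computing all three measures for lists of approximate and
exact values (the function name is illustrative, not from the course):

    import math

    def error_measures(approx, exact):
        # signed errors between corresponding approximate and exact values
        errs = [a - e for a, e in zip(approx, exact)]
        rms  = math.sqrt(sum(e * e for e in errs) / len(errs))
        mx   = max(abs(e) for e in errs)
        avg  = sum(errs) / len(errs)   # signed errors cancel: not very useful
        return rms, mx, avg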

Most languages include the elementary functions sin and cos, yet many
do not include a complete set of reciprocal functions such as cosecant,
or inverse and hyperbolic functions such as the inverse hyperbolic cotangent.

If given only a natural logarithm function, log2(x) can be computed
as  log(x)/log(2.0) and log10(x) as log(x)/log(10.0)
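
A Python sketch building such missing functions from math.sin and math.log
(the identities are standard; the function names are just for illustration):

    import math

    def csc(x):   return 1.0 / math.sin(x)                       # cosecant
    def acoth(x): return 0.5 * math.log((x + 1.0) / (x - 1.0))   # |x| > 1
    def log2(x):  return math.log(x) / math.log(2.0)
    def log10(x): return math.log(x) / math.log(10.0)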

Homework 1 used a simple approximation for integration:
position s(t) = integral from time=0 to time=t of velocity(t) dt,
approximated by the recurrence  s_i = s_(i-1) + v_(i-1)*dt
for i = 1 to n, where n*dt = t and v_i = velocity(i*dt).
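
A minimal Python sketch of this summation, assuming the velocity is given
as a Python function (names are illustrative):

    def position(velocity, t, n):
        # s_i = s_(i-1) + velocity(t_(i-1)) * dt, a simple rectangle-rule sum
        dt = t / n
        s = 0.0
        for i in range(n):
            s += velocity(i * dt) * dt
        return s

    # example: v(t) = 9.8*t gives s(t) = 4.9*t^2
    print(position(lambda t: 9.8 * t, 2.0, 100000))   # about 19.6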

In order to guarantee a mathematically unique solution to a set of
linear simultaneous equations, two requirements must be met:
There must be the same number of equations as unknown variables and
the equations must be linearly independent.

For two equations to be linearly independent, there must be no
constants p and q such that p * equation1 + q = equation2

The numerical solution of linear simultaneous equations can fail even
though the mathematical conditions for a unique solution are satisfied.

The Gauss Jordan method of solving simultaneous equations A x = y produces
the solution x by performing row operations on both A and y, reducing A
to the identity matrix so that y is replaced by x at the end of the
computation.

Improved numerical accuracy is obtained in the solution of linear
systems of equations by interchanging rows so that the largest
magnitude element in the pivot column is moved to the diagonal
and used as the pivot element.

The same method solves the system of linear equations when
the equations have complex values.
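
A Python teaching sketch of Gauss Jordan with the row interchanges
described above, assuming a square nonsingular system; since Python
arithmetic handles complex numbers, the same code works unchanged for
complex equations:

    def gauss_jordan(A, y):
        # Solve A x = y by reducing A to the identity matrix.
        n = len(A)
        A = [row[:] for row in A]   # work on copies
        x = y[:]
        for k in range(n):
            # partial pivoting: largest magnitude entry in column k
            p = max(range(k, n), key=lambda i: abs(A[i][k]))
            if abs(A[p][k]) == 0.0:
                raise ValueError("matrix is singular")
            A[k], A[p] = A[p], A[k]
            x[k], x[p] = x[p], x[k]
            # scale the pivot row so the diagonal becomes 1
            piv = A[k][k]
            A[k] = [a / piv for a in A[k]]
            x[k] = x[k] / piv
            # eliminate column k from every other row
            for i in range(n):
                if i != k:
                    f = A[i][k]
                    A[i] = [a - f * b for a, b in zip(A[i], A[k])]
                    x[i] = x[i] - f * x[k]
        return x   # y has been replaced by the solution x

    print(gauss_jordan([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8, 1.4]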

Given data, a least square fit of the data minimizes the RMS error
for a given degree polynomial at the data points. Between the
data points there may be large variations.

When trying to fit high degree polynomials, numerical errors
may make the approximation worse.

Mathematically, n data points should be fit exactly by a least
square fit with an n-1 degree polynomial. The numerical computation
may not give this result.

A least square fit may use powers, sine, cosine or any other
smooth function of the data. The basic requirement is that
all functions must be linearly independent of each other.

A least square fit requires solving a set of simultaneous equations,
as sketched below.
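
A Python sketch of a polynomial least square fit by forming and solving
the normal equations B^T B c = B^T y (a standard formulation, assumed
here; it reuses the gauss_jordan sketch above):

    def least_squares_poly(xs, ys, degree):
        # B[i][j] = basis_j(x_i) = x_i**j
        m = degree + 1
        B = [[x ** j for j in range(m)] for x in xs]
        BtB = [[sum(B[i][j] * B[i][k] for i in range(len(xs)))
                for k in range(m)] for j in range(m)]
        Bty = [sum(B[i][j] * ys[i] for i in range(len(xs))) for j in range(m)]
        return gauss_jordan(BtB, Bty)   # solver sketched above

    # four points on y = x^2 are fit exactly by a degree-2 polynomial
    print(least_squares_poly([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0], 2))
    # approximately [0.0, 0.0, 1.0]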

A polynomial of degree n has n+1 coefficients. Thus n+1 data
points can be exactly fit by an n degree polynomial.

A polynomial of degree n has exactly n roots (counting multiplicity;
some roots may be complex)

Given roots r1, r2, ... rn a polynomial with these roots is created by
(x-r1)*(x-r2)* ... *(x-rn)

Horner's method of evaluating polynomials provides accuracy and
efficiency by never directly computing high powers of the variable.
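
A Python sketch of both ideas: building the coefficients from the roots,
then evaluating with Horner's method (function names are illustrative):

    def poly_from_roots(roots):
        c = [1.0]                       # the constant polynomial 1
        for r in roots:
            shifted = [0.0] + c         # x times the current polynomial
            for i in range(len(c)):
                shifted[i] -= r * c[i]  # minus r times it
            c = shifted
        return c                        # c[i] is the coefficient of x^i

    def horner(c, x):
        # evaluate c[0] + c[1]*x + ... + c[n]*x^n without computing powers
        result = 0.0
        for coef in reversed(c):
            result = result * x + coef
        return result

    c = poly_from_roots([1.0, 2.0, 3.0])
    print(horner(c, 4.0))   # (4-1)*(4-2)*(4-3) = 6.0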

Given the numerical coefficients of a polynomial, the numerical
coefficients of the derivative and the integral are easily computed.
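
A Python sketch of both operations on a coefficient list c, where c[i]
is the coefficient of x^i:

    def poly_derivative(c):
        # d/dx of c[0] + c[1]*x + ... is c[1] + 2*c[2]*x + ...
        return [i * c[i] for i in range(1, len(c))]

    def poly_integral(c, constant=0.0):
        # the integral has coefficients c[i]/(i+1), plus a constant term
        return [constant] + [c[i] / (i + 1) for i in range(len(c))]

    print(poly_derivative([2.0, -3.0, 1.0]))  # [-3.0, 2.0]  (x^2-3x+2 -> 2x-3)
    print(poly_integral([0.0, 2.0]))          # [0.0, 0.0, 1.0]  (2x -> x^2)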

Given the numerical coefficients of two polynomials, the
sum, difference, product and ratio are easily computed.
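
A Python sketch of the sum and product (the product's coefficients are a
convolution; the ratio, polynomial long division, is omitted here):

    def poly_add(a, b):
        # pad the shorter coefficient list with zeros and add termwise
        n = max(len(a), len(b))
        a = a + [0.0] * (n - len(a))
        b = b + [0.0] * (n - len(b))
        return [x + y for x, y in zip(a, b)]

    def poly_mul(a, b):
        # convolve the coefficient lists
        c = [0.0] * (len(a) + len(b) - 1)
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                c[i + j] += x * y
        return c

    print(poly_mul([-1.0, 1.0], [-2.0, 1.0]))   # [2.0, -3.0, 1.0]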

Any function that can be continuously differentiated can be
approximated by a Taylor series expansion.
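
A Python sketch using the Taylor series of e^x as an example:

    import math

    def exp_taylor(x, terms):
        # e^x = sum over k of x^k / k!, accumulated term by term
        total, term = 0.0, 1.0
        for k in range(terms):
            total += term
            term *= x / (k + 1)
        return total

    print(exp_taylor(1.0, 15), math.exp(1.0))   # both about 2.718281828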

Orthogonal polynomials are used to fit data and perform numerical
integration. Examples of orthogonal polynomials include
Legendre, Chebyshev, and Laguerre. (Lagrange polynomials, described
below, form an interpolating basis rather than an orthogonal family.)

Chebyshev polynomials are used to approximate smooth functions
while minimizing the maximum error.

Legendre polynomials are used to approximate smooth functions
while minimizing RMS error.

Lagrange polynomials are used to approximate smooth functions
while exactly fitting a specific set of points.
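
A Python sketch of Lagrange interpolation evaluated at a point
(the function name is illustrative):

    def lagrange_eval(xs, ys, x):
        # evaluate the interpolating polynomial that passes exactly
        # through the points (xs[i], ys[i])
        total = 0.0
        n = len(xs)
        for i in range(n):
            li = 1.0
            for j in range(n):
                if j != i:
                    li *= (x - xs[j]) / (xs[i] - xs[j])
            total += ys[i] * li
        return total

    # three points on y = x^2 are reproduced exactly
    print(lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))   # 2.25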

For non smooth functions, including square waves and pulses,
a Fourier series approximation may be used.

The Fejer approximation can be used to eliminate many oscillations
in the Fourier approximation at the expense of a less accurate fit.
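
A Python sketch for a unit square wave, an assumed example: the plain
Fourier partial sum, and the same sum with Fejer weights 1 - k/(n+1),
which come from averaging the Fourier partial sums:

    import math

    def square_wave_fourier(x, n, fejer=False):
        # partial Fourier series of a unit square wave:
        # (4/pi) * sum over odd k of sin(k x)/k
        total = 0.0
        for k in range(1, n + 1, 2):
            w = (1.0 - k / (n + 1.0)) if fejer else 1.0
            total += w * 4.0 / (math.pi * k) * math.sin(k * x)
        return total

    x = 0.2   # just past the jump at 0, where the square wave equals 1
    print(square_wave_fourier(x, 19))              # about 1.12: Gibbs overshoot
    print(square_wave_fourier(x, 19, fejer=True))  # about 0.85: no overshoot,
                                                   # but a less accurate fit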

Numerical integration is typically called numerical quadrature.

The Trapezoidal integration method requires a small step size
and many function evaluations to get accuracy.
(The step size is uniform and the method is easy to code)
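
A Python sketch of the trapezoidal rule with a uniform step size:

    import math

    def trapezoid(f, a, b, n):
        # endpoints get half weight; interior points get full weight
        h = (b - a) / n
        total = 0.5 * (f(a) + f(b))
        for i in range(1, n):
            total += f(a + i * h)
        return total * h

    print(trapezoid(math.sin, 0.0, math.pi, 1000))   # about 2.0,
                                                     # using 1001 evaluations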

The Gauss Legendre integration of smooth functions provides
good accuracy with a minimum number of function evaluations.
(The weights and ordinates are needed for the summation)
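
A Python sketch using the tabulated 3 point Gauss Legendre weights and
ordinates (exact for polynomials through degree 5):

    import math

    def gauss_legendre_3(f, a, b):
        # nodes and weights are tabulated for the interval [-1, 1]
        nodes   = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
        weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]
        # change of variable from [-1, 1] to [a, b]
        mid, half = 0.5 * (a + b), 0.5 * (b - a)
        return half * sum(w * f(mid + half * t)
                          for w, t in zip(weights, nodes))

    print(gauss_legendre_3(math.sin, 0.0, math.pi))  # about 2.001,
                                                     # with only 3 evaluations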

An adaptive quadrature integration method is needed to get
reasonable accuracy for functions with large derivatives.

Adaptive quadrature uses a variable step size and at least
two methods of integration to determine when the desired
accuracy is achieved. This method, as with all integration
methods, can fail to give accurate results.
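
A Python sketch of one common scheme, adaptive Simpson, which compares a
whole-interval estimate against two half-interval estimates and recurses
where they disagree (an assumed implementation, not necessarily the one
from lecture):

    import math

    def simpson(f, a, b):
        m = 0.5 * (a + b)
        return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

    def adaptive(f, a, b, tol):
        # halve the step wherever the two estimates disagree
        m = 0.5 * (a + b)
        whole = simpson(f, a, b)
        halves = simpson(f, a, m) + simpson(f, m, b)
        if abs(whole - halves) < tol:
            return halves
        return adaptive(f, a, m, tol / 2.0) + adaptive(f, m, b, tol / 2.0)

    # 1/sqrt(x) has a large derivative near 0; the integral over
    # [0.0001, 1] is 2*(1 - 0.01) = 1.98
    print(adaptive(lambda x: 1.0 / math.sqrt(x), 0.0001, 1.0, 1.0e-8))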

Two dimensional and higher dimensional integrals can be computed by
simple extensions of one dimensional numerical quadrature methods.
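
A Python sketch extending the trapezoid function sketched above to two
dimensions by nesting it:

    def trapezoid2d(f, ax, bx, ay, by, n):
        # apply the 1D trapezoid rule in y for each x, then in x
        def inner(x):
            return trapezoid(lambda y: f(x, y), ay, by, n)
        return trapezoid(inner, ax, bx, n)

    # the integral of x*y over the unit square is 0.25
    print(trapezoid2d(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0, 100))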

