MATLAB®
The Language of Technical Computing

MATLAB Mathematics
Version 7

[email protected]    Technical support
[email protected]    Product enhancement suggestions
[email protected]    Bug reports
[email protected]    Documentation error reports
[email protected]    Order status, license renewals, passcodes
[email protected]    Sales, pricing, and general information

508-647-7000    Phone
508-647-7001    Fax

The MathWorks, Inc.    Mail
3 Apple Hill Drive
Natick, MA 01760-2098

For contact information about worldwide offices, see the MathWorks Web site.
MATLAB Mathematics
© COPYRIGHT 1984-2005 by The MathWorks, Inc.
The software described in this document is furnished under a license agreement. The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without prior written consent from The MathWorks, Inc.
FEDERAL ACQUISITION: This provision applies to all acquisitions of the Program and Documentation by, for, or through the federal government of the United States. By accepting delivery of the Program or Documentation, the government hereby agrees that this software or documentation qualifies as commercial computer software or commercial computer software documentation as such terms are used or defined in FAR 12.212, DFARS Part 227.72, and DFARS 252.227-7014. Accordingly, the terms and conditions of this Agreement and only those rights specified in this Agreement, shall pertain to and govern the use, modification, reproduction, release, performance, display, and disclosure of the Program and Documentation by the federal government (or other entity acquiring for or through the federal government) and shall supersede any conflicting contractual terms or conditions. If this License fails to meet the government's needs or is inconsistent in any respect with federal procurement law, the government agrees to return the Program and Documentation, unused, to The MathWorks, Inc.
Trademarks
MATLAB, Simulink, Stateflow, Handle Graphics, Real-Time Workshop, and xPC TargetBox are registered trademarks of The MathWorks, Inc. Other product or brand names are trademarks or registered trademarks of their respective holders.
Patents
The MathWorks products are protected by one or more U.S. patents. Please see
Revision History
June 2004         First printing     New for MATLAB 7.0 (Release 14). Formerly part of Using MATLAB
October 2004      Online only        Revised for MATLAB 7.0.1 (Release 14SP1)
March 2005        Online only        Revised for MATLAB 7.0.4 (Release 14SP2)
June 2005         Second printing    Minor revision for MATLAB 7.0.4
September 2005 Online only Revised for MATLAB 7.1 (Release 14SP3)
Contents

1
Matrices and Linear Algebra
Function Summary . . . 1-2
Matrices in MATLAB . . . 1-4
Creating Matrices . . . 1-4
Adding and Subtracting Matrices . . . 1-6
Vector Products and Transpose . . . 1-7
Multiplying Matrices . . . 1-8
The Identity Matrix . . . 1-10
The Kronecker Tensor Product . . . 1-11
Vector and Matrix Norms . . . 1-12
Solving Linear Systems of Equations . . . 1-13
Computational Considerations . . . 1-13
General Solution . . . 1-15
Square Systems . . . 1-15
Overdetermined Systems . . . 1-18
Underdetermined Systems . . . 1-20
Inverses and Determinants . . . 1-22
Overview . . . 1-22
Pseudoinverses . . . 1-23
Cholesky, LU, and QR Factorizations . . . 1-27
Cholesky Factorization . . . 1-27
LU Factorization . . . 1-29
QR Factorization . . . 1-30
Matrix Powers and Exponentials . . . 1-34
Eigenvalues . . . 1-38
Singular Value Decomposition . . . 1-42
2
Polynomials and Interpolation

Polynomial Function Summary . . . 2-2
Representing Polynomials . . . 2-3
Polynomial Roots . . . 2-3
Characteristic Polynomials . . . 2-4
Polynomial Evaluation . . . 2-4
Convolution and Deconvolution . . . 2-5
Polynomial Derivatives . . . 2-5
Polynomial Curve Fitting . . . 2-6
Partial Fraction Expansion . . . 2-7
Interpolation . . . 2-9
Interpolation Function Summary . . . 2-9
One-Dimensional Interpolation . . . 2-10
Two-Dimensional Interpolation . . . 2-12
Comparing Interpolation Methods . . . 2-13
Interpolation and Multidimensional Arrays . . . 2-15
Triangulation and Interpolation of Scattered Data . . . 2-18
Tessellation and Interpolation of Scattered Data in Higher Dimensions . . . 2-26
Selected Bibliography . . . 2-37
3
Fast Fourier Transform (FFT)
Introduction . . . 3-2
Finding an FFT . . . 3-2
Example: Using FFT to Calculate Sunspot Periodicity . . . 3-3
Magnitude and Phase of Transformed Data . . . 3-7
FFT Length Versus Speed . . . 3-9
Function Summary . . . 3-10
4
Function Functions
Function Summary . . . 4-2
Representing Functions in MATLAB . . . 4-3
Plotting Mathematical Functions . . . 4-5
Minimizing Functions and Finding Zeros . . . 4-8
Minimizing Functions of One Variable . . . 4-8
Minimizing Functions of Several Variables . . . 4-9
Fitting a Curve to Data . . . 4-10
Setting Minimization Options . . . 4-13
Output Functions . . . 4-14
Finding Zeros of Functions . . . 4-21
Tips . . . 4-25
Troubleshooting . . . 4-26
Numerical Integration (Quadrature) . . . 4-27
Example: Computing the Length of a Curve . . . 4-27
Example: Double Integration . . . 4-28
Parameterizing Functions Called by Function Functions . . . 4-30
Providing Parameter Values Using Nested Functions . . . 4-30
Providing Parameter Values to Anonymous Functions . . . 4-31
5
Differential Equations
Initial Value Problems for ODEs and DAEs . . . 5-2
ODE Function Summary . . . 5-2
Changing ODE Integration Properties . . . 5-17
Examples: Applying the ODE Initial Value Problem Solvers . . . 5-18
Questions and Answers, and Troubleshooting . . . 5-43
Initial Value Problems for DDEs . . . 5-49
DDE Function Summary . . . 5-49
Introduction to Initial Value DDE Problems . . . 5-50
DDE Solver . . . 5-51
Solving DDE Problems . . . 5-53
Discontinuities . . . 5-57
Changing DDE Integration Properties . . . 5-60
Boundary Value Problems for ODEs . . . 5-61
BVP Function Summary . . . 5-62
Introduction to Boundary Value ODE Problems . . . 5-63
Boundary Value Problem Solver . . . 5-64
Changing BVP Integration Properties . . . 5-67
Solving BVP Problems . . . 5-68
Using Continuation to Make a Good Initial Guess . . . 5-72
Solving Singular BVPs . . . 5-80
Solving Multipoint BVPs . . . 5-84
Partial Differential Equations . . . 5-89
PDE Function Summary . . . 5-89
Introduction to PDE Problems . . . 5-90
MATLAB Partial Differential Equation Solver . . . 5-91
Solving PDE Problems . . . 5-94
Evaluating the Solution at Specific Points . . . 5-99
Changing PDE Integration Properties . . . 5-100
Example: Electrodynamics Problem . . . 5-100
Selected Bibliography . . . 5-106
6
Sparse Matrices
Function Summary . . . 6-2
Introduction . . . 6-5
Sparse Matrix Storage . . . 6-5
General Storage Information . . . 6-6
Creating Sparse Matrices . . . 6-7
Importing Sparse Matrices from Outside MATLAB . . . 6-12
Viewing Sparse Matrices . . . 6-13
Information About Nonzero Elements . . . 6-13
Viewing Sparse Matrices Graphically . . . 6-15
The find Function and Sparse Matrices . . . 6-16
Adjacency Matrices and Graphs . . . 6-17
Introduction to Adjacency Matrices . . . 6-17
Graphing Using Adjacency Matrices . . . 6-18
The Bucky Ball . . . 6-18
An Airflow Model . . . 6-23
Sparse Matrix Operations . . . 6-25
Computational Considerations . . . 6-25
Standard Mathematical Operations . . . 6-25
Permutation and Reordering . . . 6-26
Factorization . . . 6-30
Simultaneous Linear Equations . . . 6-36
Eigenvalues and Singular Values . . . 6-39
Performance Limitations . . . 6-41
Selected Bibliography . . . 6-44
Index
1
Matrices and Linear Algebra
Function Summary (p. 1-2)    Summarizes the MATLAB® linear algebra functions

Matrices in MATLAB (p. 1-4)    Explains the use of matrices and basic matrix operations in MATLAB

Solving Linear Systems of Equations (p. 1-13)    Discusses the solution of simultaneous linear equations in MATLAB, including square systems, overdetermined systems, and underdetermined systems
Inverses and Determinants (p. 1-22) Explains the use in MATLAB of inverses, determinants, and pseudoinverses in the solution of systems of linear equations
Cholesky, LU, and QR Factorizations (p. 1-27)
Discusses the solution in MATLAB of systems of linear equations that involve triangular matrices, using Cholesky factorization, Gaussian elimination, and orthogonalization
Matrix Powers and Exponentials (p. 1-34)
Explains the use of MATLAB notation to obtain various matrix powers and exponentials
Eigenvalues (p. 1-38) Explains eigenvalues and describes eigenvalue decomposition in MATLAB
Singular Value Decomposition (p. 1-42) Describes singular value decomposition of a rectangular matrix in MATLAB
Function Summary
The linear algebra functions are located in the MATLAB matfun directory.
Function Summary
Category Function Description
Matrix analysis norm Matrix or vector norm.
normest Estimate the matrix 2-norm.
rank Matrix rank.
det Determinant.
trace Sum of diagonal elements.
null Null space.
orth Orthogonalization.
rref Reduced row echelon form.
subspace Angle between two subspaces.
Linear equations \ and / Linear equation solution.
inv Matrix inverse.
cond Condition number for inversion.
condest 1-norm condition number estimate.
chol Cholesky factorization.
cholinc Incomplete Cholesky factorization.
linsolve Solve a system of linear equations.
lu LU factorization.
luinc Incomplete LU factorization.
qr Orthogonal-triangular decomposition.
lsqnonneg Nonnegative least-squares.
pinv Pseudoinverse.
lscov Least squares with known covariance.
Eigenvalues and singular values
eig Eigenvalues and eigenvectors.
svd Singular value decomposition.
eigs A few eigenvalues.
svds A few singular values.
poly Characteristic polynomial.
polyeig Polynomial eigenvalue problem.
condeig Condition number for eigenvalues.
hess Hessenberg form.
qz QZ factorization.
schur Schur decomposition.
Matrix functions expm Matrix exponential.
logm Matrix logarithm.
sqrtm Matrix square root.
funm Evaluate general matrix function.
Matrices in MATLAB
A matrix is a two-dimensional array of real or complex numbers. Linear algebra defines many matrix operations that are directly supported by MATLAB. Linear algebra includes matrix arithmetic, linear equations, eigenvalues, singular values, and matrix factorizations.
For more information about creating and working with matrices, see Data Structures in the MATLAB Programming documentation.
This section describes the following topics:
•“Creating Matrices” on page 1-4
•“Adding and Subtracting Matrices” on page 1-6
•“Vector Products and Transpose” on page 1-7
•“Multiplying Matrices” on page 1-8
•“The Identity Matrix” on page 1-10
•“The Kronecker Tensor Product” on page 1-11
•“Vector and Matrix Norms” on page 1-12
Creating Matrices
Informally, the terms matrix and array are often used interchangeably. More precisely, a matrix is a two-dimensional rectangular array of real or complex numbers that represents a linear transformation. The linear algebraic operations defined on matrices have found applications in a wide variety of technical fields. (The optional Symbolic Math Toolbox extends the capabilities of MATLAB to operations on various types of nonnumeric matrices.)
MATLAB has dozens of functions that create different kinds of matrices. Two of them can be used to create a pair of 3-by-3 example matrices for use throughout this chapter. The first example is symmetric:
A = pascal(3)

A =
     1     1     1
     1     2     3
     1     3     6

The second example is not symmetric:

B = magic(3)

B =
     8     1     6
     3     5     7
     4     9     2

Another example is a 3-by-2 rectangular matrix of random integers:

C = fix(10*rand(3,2))

C =
     9     4
     2     8
     6     7
A column vector is an m-by-1 matrix, a row vector is a 1-by-n matrix and a scalar is a 1-by-1 matrix. The statements
u = [3; 1; 4]
v = [2 0 -1]
s = 7
produce a column vector, a row vector, and a scalar:
u =
     3
     1
     4

v =
     2     0    -1

s =
     7
Adding and Subtracting Matrices
Addition and subtraction of matrices is defined just as it is for arrays, element-by-element. Adding A to B and then subtracting A from the result recovers B:
A = pascal(3);
B = magic(3);
X = A + B

X =
     9     2     7
     4     7    10
     5    12     8

Y = X - A

Y =
     8     1     6
     3     5     7
     4     9     2

Addition and subtraction require both matrices to have the same dimension, or one of them be a scalar. If the dimensions are incompatible, an error results:

C = fix(10*rand(3,2))
X = A + C

Error using ==> +
Matrix dimensions must agree.

w = v + s

w =
     9     7     6
Vector Products and Transpose

A row vector and a column vector of the same length can be multiplied in either order. The result is either a scalar, the inner product, or a matrix, the outer product:

u = [3; 1; 4];
v = [2 0 -1];
x = v*u

x =
     2

X = u*v

X =
     6     0    -3
     2     0    -1
     8     0    -4

For real matrices, the transpose operation interchanges a(i,j) and a(j,i). MATLAB uses the apostrophe (or single quote) to denote transpose. The example matrix A is symmetric, so A' is equal to A. But B is not symmetric:

B = magic(3);
X = B'

X =
     8     3     4
     1     5     9
     6     7     2

Transposition turns a row vector into a column vector:

x = v'

x =
     2
     0
    -1
If x and y are both real column vectors, the product x*y is not defined, but the two products
x'*y and
y'*x
are the same scalar. This quantity is used so frequently, it has three different names: inner product, scalar product, or dot product.
For a complex vector or matrix, z, the quantity z' denotes the complex conjugate transpose, where the sign of the complex part of each element is reversed. The unconjugated complex transpose, where the complex part of each element retains its sign, is denoted by z.'. So if
z = [1+2i 3+4i]
then z' is 1-2i 3-4i while z.' is
1+2i 3+4i
For complex vectors, the two scalar products x'*y and y'*x are complex conjugates of each other and the scalar product x'*x of a complex vector with itself is real.
Multiplying Matrices
Multiplication of matrices is defined in a way that reflects composition of the underlying linear transformations and allows compact representation of systems of simultaneous linear equations. The matrix product C = AB is defined when the column dimension of A is equal to the row dimension of B, or when one of them is a scalar. If A is m-by-p and B is p-by-n, their product C is m-by-n. The product can actually be defined using MATLAB for loops, colon notation, and vector dot products:
A = pascal(3);
B = magic(3);
m = 3; n = 3;
for i = 1:m
   for j = 1:n
      C(i,j) = A(i,:)*B(:,j);
   end
end
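For readers working outside MATLAB, the same triple loop can be sketched in plain Python. This is an illustration only; the function name matmul and the list-of-lists representation are assumptions, not part of MATLAB:

```python
def matmul(A, B):
    """Multiply m-by-p A with p-by-n B using explicit loops and dot products."""
    m, p, n = len(A), len(B), len(B[0])
    C = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            # C(i,j) is the dot product of row i of A with column j of B
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(p))
    return C

A = [[1, 1, 1], [1, 2, 3], [1, 3, 6]]   # pascal(3)
B = [[8, 1, 6], [3, 5, 7], [4, 9, 2]]   # magic(3)
print(matmul(A, B))   # [[15, 15, 15], [26, 38, 26], [41, 70, 39]]
```

The result matches the X = A*B example below.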
MATLAB uses a single asterisk to denote matrix multiplication. The next two examples illustrate the fact that matrix multiplication is not commutative; AB is usually not equal to BA:
X = A*B

X =
    15    15    15
    26    38    26
    41    70    39

Y = B*A

Y =
    15    28    47
    15    34    60
    15    28    43
A matrix can be multiplied on the right by a column vector and on the left by a row vector:
u = [3; 1; 4];
x = A*u

x =
     8
    17
    30

v = [2 0 -1];
y = v*B

y =
    12    -7    10

Rectangular matrix multiplications must satisfy the dimension compatibility conditions:

C = fix(10*rand(3,2));
X = A*C

X =
    17    19
    31    41
    51    70

Y = C*A

Error using ==> *
Inner matrix dimensions must agree.

Anything can be multiplied by a scalar:

s = 7;
w = s*v

w =
    14     0    -7
The Identity Matrix
Generally accepted mathematical notation uses the capital letter I to denote identity matrices, matrices of various sizes with ones on the main diagonal and zeros elsewhere. These matrices have the property that AI = A and IA = A whenever the dimensions are compatible. The original version of MATLAB could not use I for this purpose because it did not distinguish between uppercase and lowercase letters and i already served double duty as a subscript and as the complex unit. So an English language pun was introduced. The function

eye(m,n)

returns an m-by-n rectangular identity matrix and eye(n) returns an n-by-n square identity matrix.
The Kronecker Tensor Product
The Kronecker product, kron(X,Y), of two matrices is the larger matrix formed from all possible products of the elements of X with those of Y. If X is m-by-n and Y is p-by-q, then kron(X,Y) is mp-by-nq. The elements are arranged in the following order:
[X(1,1)*Y X(1,2)*Y . . . X(1,n)*Y . . .
X(m,1)*Y X(m,2)*Y . . . X(m,n)*Y]
The Kronecker product is often used with matrices of zeros and ones to build up repeated copies of small matrices. For example, if X is the 2-by-2 matrix

X =
     1     2
     3     4

and I = eye(2,2) is the 2-by-2 identity matrix, then the two matrices

kron(X,I)

and

kron(I,X)

are

     1     0     2     0
     0     1     0     2
     3     0     4     0
     0     3     0     4

and

     1     2     0     0
     3     4     0     0
     0     0     1     2
     0     0     3     4
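The block arrangement described above can also be sketched outside MATLAB. Here is a minimal plain-Python version of the Kronecker product; the helper name kron mirrors the MATLAB function, but the implementation is an illustrative assumption:

```python
def kron(X, Y):
    """Kronecker product: block (i,j) of the result is X[i][j] * Y."""
    m, n = len(X), len(X[0])
    p, q = len(Y), len(Y[0])
    K = [[0] * (n * q) for _ in range(m * p)]
    for i in range(m):
        for j in range(n):
            for r in range(p):
                for s in range(q):
                    # element (r,s) of block (i,j)
                    K[i * p + r][j * q + s] = X[i][j] * Y[r][s]
    return K

X = [[1, 2], [3, 4]]
I = [[1, 0], [0, 1]]   # eye(2,2)
print(kron(X, I))
# [[1, 0, 2, 0], [0, 1, 0, 2], [3, 0, 4, 0], [0, 3, 0, 4]]
```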
Vector and Matrix Norms
The p-norm of a vector x
is computed by norm(x,p). This is defined for any value of p ≥ 1, but the most common values of p are 1, 2, and ∞. The default value is p = 2, which corresponds to Euclidean length:
v = [2 0 -1];
[norm(v,1) norm(v) norm(v,inf)]
ans =
    3.0000    2.2361    2.0000

The p-norm of a matrix A can be computed for p = 1, 2, and ∞ by norm(A,p). Again, the default value is p = 2.
C = fix(10*rand(3,2));
[norm(C,1) norm(C) norm(C,inf)]
ans =
   19.0000   14.8015   13.0000

(The norms used above are defined by

   ||x||_p = ( Σ |x_i|^p )^(1/p)

for vectors, and

   ||A||_p = max over x of ||Ax||_p / ||x||_p

for matrices.)
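For comparison outside MATLAB, the vector p-norms above can be sketched in a few lines of plain Python (the helper name vecnorm is an assumption here, not MATLAB code):

```python
import math

def vecnorm(x, p):
    """p-norm of a vector, with p = math.inf for the max norm."""
    if p == math.inf:
        return max(abs(xi) for xi in x)
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

v = [2, 0, -1]
print([vecnorm(v, 1), vecnorm(v, 2), vecnorm(v, math.inf)])
# [3.0, 2.23606797749979, 2]
```

The three values match the norm(v,1), norm(v), and norm(v,inf) results shown above.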
Solving Linear Systems of Equations
This section describes
•Computational considerations
•The general solution to a system

It also discusses particular solutions to
•Square systems
•Overdetermined systems
•Underdetermined systems
Computational Considerations
One of the most important problems in technical computing is the solution of simultaneous linear equations. In matrix notation, this problem can be stated as follows.
Given two matrices A and B, does there exist a unique matrix X so that AX = B or XA = B?
It is instructive to consider a 1-by-1 example. Does the equation

7x = 21

have a unique solution x?

The answer, of course, is yes. The equation has the unique solution x = 3. The solution is easily obtained by division:

x = 21/7 = 3

The solution is not ordinarily obtained by computing the inverse of 7, that is, 7^(-1) = 0.142857…, and then multiplying 7^(-1) by 21. This would be more work and, if 7^(-1) is represented to a finite number of digits, less accurate. Similar
considerations apply to sets of linear equations with more than one unknown;
MATLAB solves such equations without computing the inverse of the matrix.
Although it is not standard mathematical notation, MATLAB uses the division terminology familiar in the scalar case to describe the solution of a general system of simultaneous equations. The two division symbols, slash, /, and backslash, \, are used for the two situations where the unknown matrix appears on the left or right of the coefficient matrix:

X = A\B    Denotes the solution to the matrix equation AX = B.
X = B/A    Denotes the solution to the matrix equation XA = B.

You can think of “dividing” both sides of the equation AX = B or XA = B by A. The coefficient matrix A is always in the “denominator.”

The dimension compatibility conditions for X = A\B require the two matrices A and B to have the same number of rows. The solution X then has the same number of columns as B and its row dimension is equal to the column dimension of A. For X = B/A, the roles of rows and columns are interchanged.

In practice, linear equations of the form AX = B occur more frequently than those of the form XA = B. Consequently, backslash is used far more frequently than slash. The remainder of this section concentrates on the backslash operator; the corresponding properties of the slash operator can be inferred from the identity

(B/A)' = (A'\B')

The coefficient matrix A need not be square. If A is m-by-n, there are three cases:

m = n    Square system. Seek an exact solution.
m > n    Overdetermined system. Find a least squares solution.
m < n    Underdetermined system. Find a basic solution with at most m nonzero components.

The backslash operator employs different algorithms to handle different kinds of coefficient matrices. The various cases, which are diagnosed automatically by examining the coefficient matrix, include

•Permutations of triangular matrices
•Symmetric, positive definite matrices
•Square, nonsingular matrices
•Rectangular, overdetermined systems
General Solution
The general solution to a system of linear equations AX = b describes all possible solutions. You can find the general solution by
1 Solving the corresponding homogeneous system AX = 0. Do this using the null command, by typing null(A). This returns a basis for the solution space to AX = 0. Any solution is a linear combination of basis vectors.
2 Finding a particular solution to the non-homogeneous system AX = b.
You can then write any solution to AX = b as the sum of the particular solution to AX = b, from step 2, plus a linear combination of the basis vectors from step 1.
The rest of this section describes how to use MATLAB to find a particular solution to AX = b, as in step 2.
Square Systems
The most common situation involves a square coefficient matrix A and a single right-hand side column vector b.
Nonsingular Coefficient Matrix
If the matrix A is nonsingular, the solution, x = A\b, is then the same size as b. For example,
A = pascal(3);
u = [3; 1; 4];
x = A\u

x =
    10
   -12
     5
It can be confirmed that A*x is exactly equal to u.
If A and B are square and the same size, then X = A\B is also that size:
B = magic(3);
X = A\B

X =
    19    -3    -1
   -17     4    13
     6     0    -6
It can be confirmed that A*X is exactly equal to B.
Both of these examples have exact, integer solutions. This is because the coefficient matrix was chosen to be pascal(3), which has a determinant equal to one. A later section considers the effects of roundoff error inherent in more realistic computations.
Singular Coefficient Matrix
A square matrix A is singular if it does not have linearly independent columns.
If A is singular, the solution to AX = B either does not exist, or is not unique.
The backslash operator, A\B, issues a warning if A is nearly singular and raises an error condition if it detects exact singularity.
If A is singular and AX = b has a solution, you can find a particular solution that is not unique, by typing
P = pinv(A)*b
P is a pseudoinverse of A. If AX = b does not have an exact solution, pinv(A) returns a least-squares solution.
For example,
A = [  1   3   7
      -1   4   4
       1  10  18 ]
is singular, as you can verify by typing

det(A)

ans =
     0

Note   For information about using pinv to solve systems with rectangular
coefficient matrices, see “Pseudoinverses” on page 1-23.
Exact Solutions. For b =[5;2;12], the equation AX = b has an exact solution, given by
pinv(A)*b

ans =
    0.3850
   -0.1103
    0.7066
You can verify that pinv(A)*b is an exact solution by typing

A*pinv(A)*b

ans =
    5.0000
    2.0000
   12.0000

Least Squares Solutions.  On the other hand, if b = [3;6;0], then AX = b does not have an exact solution. In this case, pinv(A)*b returns a least squares solution. If you type

A*pinv(A)*b

ans =
   -1.0000
    4.0000
    2.0000
you do not get back the original vector b.
You can determine whether AX = b has an exact solution by finding the row reduced echelon form of the augmented matrix [A b]. To do so for this example, enter
rref([A b])

ans =
    1.0000         0    2.2857         0
         0    1.0000    1.5714         0
         0         0         0    1.0000
Since the bottom row contains all zeros except for the last entry, the equation does not have a solution. In this case, pinv(A) returns a least-squares solution.
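MATLAB computes the reduced row echelon form with the built-in rref function. Purely as an illustration of what that computation does, here is a minimal Gauss-Jordan sketch in plain Python; the function name and tolerance are assumptions, not MATLAB code:

```python
def rref(M, tol=1e-10):
    """Reduced row echelon form via Gauss-Jordan elimination with partial pivoting."""
    A = [row[:] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        if r >= rows:
            break
        # choose the largest pivot in column c for numerical stability
        pivot = max(range(r, rows), key=lambda i: abs(A[i][c]))
        if abs(A[pivot][c]) < tol:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        A[r] = [v / A[r][c] for v in A[r]]
        for i in range(rows):
            if i != r:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        r += 1
    return A

A = [[1, 3, 7], [-1, 4, 4], [1, 10, 18]]
b = [3, 6, 0]
aug = [row + [bi] for row, bi in zip(A, b)]
R = rref(aug)
# bottom row of R is [0, 0, 0, 1]: the system A*x = b is inconsistent
```

Running this on the augmented matrix [A b] reproduces the rows shown above, including the telltale bottom row.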
Overdetermined Systems
Overdetermined systems of simultaneous linear equations are often
encountered in various kinds of curve fitting to experimental data. Here is a hypothetical example. A quantity y is measured at several different values of time, t, to produce the following observations:

    t      y
   0.0   0.82
   0.3   0.72
   0.8   0.63
   1.1   0.60
   1.6   0.55
   2.3   0.50

Enter the data into MATLAB with the statements

t = [0 .3 .8 1.1 1.6 2.3]';
y = [.82 .72 .63 .60 .55 .50]';

Try modeling the data with a decaying exponential function:

y(t) ≈ c1 + c2*e^(-t)
The preceding equation says that the vector y should be approximated by a linear combination of two other vectors, one the constant vector containing all ones and the other the vector with components e^(-t). The unknown coefficients, c1 and c2, can be computed by doing a least squares fit, which minimizes the sum of the squares of the deviations of the data from the model. There are six equations in two unknowns, represented by the 6-by-2 matrix:
E = [ones(size(t)) exp(-t)]
E =
    1.0000    1.0000
    1.0000    0.7408
    1.0000    0.4493
    1.0000    0.3329
    1.0000    0.2019
    1.0000    0.1003
Use the backslash operator to get the least squares solution:
c = E\y

c =
    0.4760
    0.3413

In other words, the least squares fit to the data is

y(t) ≈ 0.4760 + 0.3413*e^(-t)
The following statements evaluate the model at regularly spaced increments in t, and then plot the result, together with the original data:
T = (0:0.1:2.5)';
Y = [ones(size(T)) exp(-T)]*c;
plot(T,Y,'-',t,y,'o')
You can see that E*c is not exactly equal to y, but that the difference might well be less than measurement errors in the original data.
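The backslash solution above uses orthogonalization. As a cross-check only, the same coefficients can be recovered outside MATLAB from the 2-by-2 normal equations (E'*E)c = E'*y. This plain-Python sketch is an illustration, not the method backslash actually uses, and normal equations are generally less numerically robust:

```python
import math

t = [0, .3, .8, 1.1, 1.6, 2.3]
y = [.82, .72, .63, .60, .55, .50]

# Columns of E: all ones, and exp(-t). Form the 2-by-2 normal equations
# (E'*E) c = E'*y and solve them with Cramer's rule.
e = [math.exp(-ti) for ti in t]
s11 = len(t)                              # ones' * ones
s12 = sum(e)                              # ones' * exp(-t)
s22 = sum(ei * ei for ei in e)            # exp(-t)' * exp(-t)
r1 = sum(y)
r2 = sum(yi * ei for yi, ei in zip(y, e))
det = s11 * s22 - s12 * s12
c1 = (r1 * s22 - s12 * r2) / det          # c1 ≈ 0.4760
c2 = (s11 * r2 - s12 * r1) / det          # c2 ≈ 0.3413
```

The coefficients agree with the c = E\y result above to display precision.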
A rectangular matrix A is rank deficient if it does not have linearly independent columns. If A is rank deficient, the least squares solution to AX = B is not unique. The backslash operator, A\B, issues a warning if A is rank deficient and produces a least squares solution that has at most rank(A) nonzeros.
Underdetermined Systems
Underdetermined linear systems involve more unknowns than equations. The solution to such underdetermined systems is not unique. The matrix left division operation in MATLAB finds a basic solution, which has at most m nonzero components.
Here is a small, random example:
R = [6 8 7 3; 3 5 4 1]
R =
     6     8     7     3
     3     5     4     1
b = fix(10*rand(2,1))

b =
     9
     2
The linear system Rx = b involves two equations in four unknowns. Since the coefficient matrix contains small integers, it is appropriate to use the format command to display the solution in rational format. The particular solution is obtained with
format rat
p = R\b

p =
       0
    -3/7
       0
    29/7
One of the nonzero components is p(2) because R(:,2) is the column of R with largest norm. The other nonzero component is p(4) because R(:,4) dominates after R(:,2) is eliminated.
The complete solution to the underdetermined system can be characterized by adding an arbitrary vector from the null space, which can be found using the null function with an option requesting a “rational” basis:
Z = null(R,'r')

Z =
   -1/2   -7/6
   -1/2    1/2
      1      0
      0      1
It can be confirmed that R*Z is zero and that any vector x where x = p + Z*q
for an arbitrary vector q satisfies R*x = b.
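This structure can be checked exactly. A plain-Python sketch using the standard fractions module (an illustration only, mirroring the format rat display) confirms that R*p recovers b and that adding any combination of null-space basis vectors leaves the residual unchanged:

```python
from fractions import Fraction as F

R = [[6, 8, 7, 3], [3, 5, 4, 1]]
b = [9, 2]
p = [F(0), F(-3, 7), F(0), F(29, 7)]     # basic solution from R\b
Z = [[F(-1, 2), F(-7, 6)],               # null-space basis from null(R,'r')
     [F(-1, 2), F(1, 2)],
     [F(1), F(0)],
     [F(0), F(1)]]

def matvec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

# R*p recovers b exactly, and R*z = 0 for each basis vector z,
# so x = p + Z*q solves R*x = b for any q.
print(matvec(R, p))                      # [Fraction(9, 1), Fraction(2, 1)]
q = [F(5), F(-2)]                        # an arbitrary q
x = [pi + Z[i][0] * q[0] + Z[i][1] * q[1] for i, pi in enumerate(p)]
print(matvec(R, x))                      # [Fraction(9, 1), Fraction(2, 1)]
```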
Inverses and Determinants
This section provides
•An overview of the use of inverses and determinants for solving square nonsingular systems of linear equations
•A discussion of the Moore-Penrose pseudoinverse for solving rectangular systems of linear equations
Overview
If A is square and nonsingular, the equations AX = I and XA = I have the same solution, X. This solution is called the inverse of A, is denoted by A-1, and is computed by the function inv. The determinant of a matrix is useful in theoretical considerations and some types of symbolic computation, but its scaling and roundoff error properties make it far less satisfactory for numeric computation. Nevertheless, the function det computes the determinant of a square matrix:
A = pascal(3)

A =
     1     1     1
     1     2     3
     1     3     6

d = det(A)

d =
     1

X = inv(A)

X =
     3    -3     1
    -3     5    -2
     1    -2     1
Again, because A is symmetric, has integer elements, and has determinant equal to one, so does its inverse. On the other hand,

B = magic(3)
B =
     8     1     6
     3     5     7
     4     9     2

d = det(B)

d =
  -360

X = inv(B)

X =
    0.1472   -0.1444    0.0639
   -0.0611    0.0222    0.1056
   -0.0194    0.1889   -0.1028
Closer examination of the elements of X, or use of format rat, would reveal that they are integers divided by 360.
If A is square and nonsingular, then without roundoff error, X = inv(A)*B would theoretically be the same as X = A\B and Y = B*inv(A) would theoretically be the same as Y = B/A. But the computations involving the backslash and slash operators are preferable because they require less computer time, less memory, and have better error detection properties.
Pseudoinverses
Rectangular matrices do not have inverses or determinants. At least one of the equations AX = I and XA = I does not have a solution. A partial replacement for the inverse is provided by the Moore-Penrose pseudoinverse, which is computed by the pinv function:
format short
rand('state', 0);
C = fix(10*rand(3,2));
X = pinv(C)

X =
    0.1159   -0.0729    0.0171
   -0.0534    0.1152    0.0418

The matrix

Q = X*C

Q =
    1.0000    0.0000
    0.0000    1.0000

is the 2-by-2 identity, but the matrix

P = C*X

P =
    0.8293   -0.1958    0.3213
   -0.1958    0.7754    0.3685
    0.3213    0.3685    0.3952
is not the 3-by-3 identity. However, P acts like an identity on a portion of the space in the sense that P is symmetric, P*C is equal to C and X*P is equal to X.
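For a matrix with full column rank, the Moore-Penrose pseudoinverse coincides with inv(C'*C)*C'. As an outside-MATLAB illustration (pure Python, with hypothetical variable names), this sketch reproduces the first row of X above from that formula:

```python
# The 3-by-2 matrix C used above: fix(10*rand(3,2)) with rand('state',0)
C = [[9, 4], [2, 8], [6, 7]]

# G = C'*C is 2-by-2; invert it directly, then form X = inv(G)*C'
g11 = sum(c[0] * c[0] for c in C)
g12 = sum(c[0] * c[1] for c in C)
g22 = sum(c[1] * c[1] for c in C)
det = g11 * g22 - g12 * g12
Ginv = [[g22 / det, -g12 / det], [-g12 / det, g11 / det]]
Ct = [[C[i][j] for i in range(3)] for j in range(2)]
X = [[sum(Ginv[i][k] * Ct[k][j] for k in range(2)) for j in range(3)]
     for i in range(2)]
print([round(v, 4) for v in X[0]])   # [0.1159, -0.0729, 0.0171]
```

The values agree with the pinv(C) display above.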
Solving a Rank-Deficient System
If A is m-by-n with m > n and full rank n, then each of the three statements x = A\b
x = pinv(A)*b
x = inv(A'*A)*A'*b
theoretically computes the same least squares solution x, although the backslash operator does it faster.
However, if A does not have full rank, the solution to the least squares problem is not unique. There are many vectors x that minimize
norm(A*x - b)

The solution computed by x = A\b is a basic solution; it has at most r nonzero components, where r is the rank of A. The solution computed by x = pinv(A)*b is the minimal norm solution because it minimizes norm(x). An attempt to compute a solution with x = inv(A'*A)*A'*b fails because A'*A is singular.
Here is an example that illustrates the various solutions:
A = [  1   2   3
       4   5   6
       7   8   9
      10  11  12 ]
does not have full rank. Its second column is the average of the first and third columns. If
b = A(:,2)
is the second column, then an obvious solution to A*x = b is x = [0 1 0]'. But none of the approaches computes that x. The backslash operator gives
x = A\b

Warning: Rank deficient, rank = 2.

x =
    0.5000
         0
    0.5000

This solution has two nonzero components. The pseudoinverse approach gives

y = pinv(A)*b

y =
    0.3333
    0.3333
    0.3333
There is no warning about rank deficiency. But norm(y) = 0.5774 is less than norm(x) = 0.7071. Finally
z = inv(A'*A)*A'*b

fails completely:

Warning: Matrix is singular to working precision.
z =
   Inf
   Inf
   Inf
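Both successful candidates can be checked by hand. As an outside-MATLAB illustration, this plain-Python sketch verifies that the basic solution from A\b and the minimal norm solution from pinv(A)*b both satisfy the equations exactly, while their norms differ as described (the helper name resid is hypothetical):

```python
import math

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
b = [row[1] for row in A]              # b = A(:,2), the second column

def resid(v):
    """Residual A*v - b, one entry per equation."""
    return [sum(aij * vj for aij, vj in zip(row, v)) - bi
            for row, bi in zip(A, b)]

x = [0.5, 0, 0.5]                      # basic solution from A\b
y = [1/3, 1/3, 1/3]                    # minimal norm solution from pinv(A)*b
nx = math.sqrt(sum(v * v for v in x))  # ≈ 0.7071
ny = math.sqrt(sum(v * v for v in y))  # ≈ 0.5774
# Both residuals vanish (to roundoff), but norm(y) < norm(x).
```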
Cholesky, LU, and QR Factorizations
The MATLAB linear equation capabilities are based on three basic matrix factorizations:
•Cholesky factorization for symmetric, positive definite matrices
•LU factorization (Gaussian elimination) for general square matrices
•QR factorization (orthogonal) for rectangular matrices
These three factorizations are available through the chol, lu, and qr functions.
All three of these factorizations make use of triangular matrices where all the elements either above or below the diagonal are zero. Systems of linear equations involving triangular matrices are easily and quickly solved using either forward or back substitution.
Cholesky Factorization
The Cholesky factorization expresses a symmetric matrix as the product of a triangular matrix and its transpose

A = R'R

where R is an upper triangular matrix.
Not all symmetric matrices can be factored in this way; the matrices that have such a factorization are said to be positive definite. This implies that all the diagonal elements of A are positive and that the offdiagonal elements are “not too big.” The Pascal matrices provide an interesting example. Throughout this chapter, the example matrix A has been the 3-by-3 Pascal matrix. Temporarily switch to the 6-by-6:
A = pascal(6)

A =
     1     1     1     1     1     1
     1     2     3     4     5     6
     1     3     6    10    15    21
     1     4    10    20    35    56
     1     5    15    35    70   126
     1     6    21    56   126   252
The elements of A are binomial coefficients. Each element is the sum of its north and west neighbors. The Cholesky factorization is
R = chol(A)

R =
     1     1     1     1     1     1
     0     1     2     3     4     5
     0     0     1     3     6    10
     0     0     0     1     4    10
     0     0     0     0     1     5
     0     0     0     0     0     1
The elements are again binomial coefficients. The fact that R'*R is equal to A demonstrates an identity involving sums of products of binomial coefficients.
Note The Cholesky factorization also applies to complex matrices. Any complex matrix which has a Cholesky factorization satisfies A' = A and is said to be Hermitian positive definite.
The Cholesky factorization allows the linear system

Ax = b

to be replaced by

R'Rx = b

Because the backslash operator recognizes triangular systems, this can be solved in MATLAB quickly with

x = R\(R'\b)

If A is n-by-n, the computational complexity of chol(A) is O(n^3), but the complexity of the subsequent backslash solutions is only O(n^2).
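The same two-step triangular solve can be sketched outside MATLAB. Here is a rough NumPy/SciPy equivalent of x = R\(R'\b), using the 3-by-3 Pascal matrix from this chapter; the right-hand side b is an arbitrary value chosen only for illustration:

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

A = np.array([[1., 1., 1.], [1., 2., 3.], [1., 3., 6.]])  # pascal(3)
b = np.array([1., 2., 3.])                                 # illustrative b

R = cholesky(A)                            # upper triangular, A == R.T @ R
y = solve_triangular(R.T, b, lower=True)   # forward substitution: R'y = b
x = solve_triangular(R, y, lower=False)    # back substitution:    Rx = y
```

solve_triangular exploits the zero structure, so each solve costs O(n^2) once the O(n^3) factorization is done.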
LU Factorization
LU factorization, or Gaussian elimination, expresses any square matrix A as the product of a permutation of a lower triangular matrix and an upper triangular matrix

A = LU

where L is a permutation of a lower triangular matrix with ones on its diagonal and U is an upper triangular matrix.

The permutations are necessary for both theoretical and computational reasons. The matrix

[ 0  1
  1  0 ]

cannot be expressed as the product of triangular matrices without interchanging its two rows. Although the matrix

[ ε  1
  1  0 ]

can be expressed as the product of triangular matrices, when ε is small the elements in the factors are large and magnify errors, so even though the permutations are not strictly necessary, they are desirable. Partial pivoting ensures that the elements of L are bounded by one in magnitude and that the elements of U are not much larger than those of A.

For example,

[L,U] = lu(B)

L =
    1.0000         0         0
    0.3750    0.5441    1.0000
    0.5000    1.0000         0
U =
    8.0000    1.0000    6.0000
         0    8.5000   -1.0000
         0         0    5.2941
The LU factorization of A allows the linear system

A*x = b

to be solved quickly with

x = U\(L\b)

Determinants and inverses are computed from the LU factorization using

det(A) = det(L)*det(U)

and

inv(A) = inv(U)*inv(L)

You can also compute the determinant as det(A) = prod(diag(U)), though the sign of the result may be reversed, because this product ignores the sign of the row permutation.
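As a sketch of the same factor-once, solve-cheaply workflow in NumPy/SciPy (the coefficient matrix below is the Pascal example, chosen only for illustration), lu_factor performs the pivoted factorization and lu_solve reuses it:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[1., 1., 1.], [1., 2., 3.], [1., 3., 6.]])  # illustrative matrix
b = np.array([1., 2., 3.])

lu, piv = lu_factor(A)        # one O(n^3) pivoted factorization, PA = LU
x = lu_solve((lu, piv), b)    # two O(n^2) triangular solves

# prod(diag(U)) gives det(A) up to the sign of the row permutation
det_mag = abs(np.prod(np.diag(lu)))
```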
QR Factorization
An orthogonal matrix, or a matrix with orthonormal columns, is a real matrix whose columns all have unit length and are perpendicular to each other. If Q is orthogonal, then

Q'Q = I

The simplest orthogonal matrices are two-dimensional coordinate rotations:

[  cos(θ)   sin(θ)
  -sin(θ)   cos(θ) ]

For complex matrices, the corresponding term is unitary. Orthogonal and unitary matrices are desirable for numerical computation because they preserve length, preserve angles, and do not magnify errors.

The orthogonal, or QR, factorization expresses any rectangular matrix as the product of an orthogonal or unitary matrix and an upper triangular matrix. A column permutation may also be involved:

A = QR

or

AP = QR

where Q is orthogonal or unitary, R is upper triangular, and P is a permutation.
There are four variants of the QR factorization: full or economy size, and with or without column permutation.
Overdetermined linear systems involve a rectangular matrix with more rows than columns, that is m-by-n with m > n. The full size QR factorization produces a square, m-by-m orthogonal Q and a rectangular m-by-n upper triangular R:
[Q,R] = qr(C)

Q =
   -0.8182    0.3999   -0.4131
   -0.1818   -0.8616   -0.4739
   -0.5455   -0.3126    0.7777
R =
  -11.0000   -8.5455
         0   -7.4817
         0         0
In many cases, the last m - n columns of Q are not needed because they are multiplied by the zeros in the bottom portion of R. So the economy size QR factorization produces a rectangular, m-by-n Q with orthonormal columns and a square n-by-n upper triangular R. For the 3-by-2 example, this is not much of a saving, but for larger, highly rectangular matrices, the savings in both time and memory can be quite important:
[Q,R] = qr(C,0)

Q =
   -0.8182    0.3999
   -0.1818   -0.8616
   -0.5455   -0.3126
R =
  -11.0000   -8.5455
         0   -7.4817
In contrast to the LU factorization, the QR factorization does not require any pivoting or permutations. But an optional column permutation, triggered by the presence of a third output argument, is useful for detecting singularity or rank deficiency. At each step of the factorization, the column of the remaining unfactored matrix with largest norm is used as the basis for that step. This ensures that the diagonal elements of R occur in decreasing order and that any linear dependence among the columns is almost certainly revealed by examining these elements. For the small example given here, the second column of C has a larger norm than the first, so the two columns are exchanged:
[Q,R,P] = qr(C)

Q =
   -0.3522    0.8398   -0.4131
   -0.7044   -0.5285   -0.4739
   -0.6163    0.1241    0.7777
R =
  -11.3578   -8.2762
         0    7.2460
         0         0
P =
     0     1
     1     0
When the economy size and column permutations are combined, the third output argument is a permutation vector, rather than a permutation matrix:
[Q,R,p] = qr(C,0)

Q =
   -0.3522    0.8398
   -0.7044   -0.5285
   -0.6163    0.1241
R =
  -11.3578   -8.2762
         0    7.2460
p =
     2     1
The QR factorization transforms an overdetermined linear system into an equivalent triangular system. The expression

norm(A*x - b)

is equal to

norm(Q*R*x - b)

Multiplication by orthogonal matrices preserves the Euclidean norm, so this expression is also equal to

norm(R*x - y)

where y = Q'*b. Since the last m-n rows of R are zero, this expression breaks into two pieces:

norm(R(1:n,1:n)*x - y(1:n))

and

norm(y(n+1:m))
When A has full rank, it is possible to solve for x so that the first of these expressions is zero. Then the second expression gives the norm of the residual.
When A does not have full rank, the triangular structure of R makes it possible to find a basic solution to the least squares problem.
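The full-rank case can be sketched directly. In this hypothetical NumPy example (the 4-by-2 matrix and right-hand side are made up for illustration), the economy QR reduces the least squares problem to a single triangular solve:

```python
import numpy as np

A = np.array([[1., 1.], [1., 2.], [1., 3.], [1., 4.]])   # m > n, full rank
b = np.array([6., 5., 7., 10.])

Q, R = np.linalg.qr(A)        # economy size: Q is m-by-n, R is n-by-n
y = Q.T @ b
x = np.linalg.solve(R, y)     # solves R(1:n,1:n)*x = y(1:n)

residual = np.linalg.norm(A @ x - b)   # norm of the leftover piece
```

The computed x matches what a dedicated least squares solver returns, since both minimize the same residual norm.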
Matrix Powers and Exponentials
This section tells you how to obtain the following matrix powers and exponentials in MATLAB:
•Positive integer
•Inverse and fractional
•Element-by-element
•Exponentials
Positive Integer Powers
If A is a square matrix and p is a positive integer, then A^p effectively multiplies A by itself p-1 times. For example,
A = [1 1 1;1 2 3;1 3 6]

A =
     1     1     1
     1     2     3
     1     3     6

X = A^2

X =
     3     6    10
     6    14    25
    10    25    46
Inverse and Fractional Powers
If A is square and nonsingular, then A^(-p) effectively multiplies inv(A) by itself p-1 times:
Y = A^(-3)

Y =
  145.0000 -207.0000   81.0000
 -207.0000  298.0000 -117.0000
   81.0000 -117.0000   46.0000
Fractional powers, like A^(2/3), are also permitted; the results depend upon the distribution of the eigenvalues of the matrix.
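As a hedged illustration of how fractional powers behave, SciPy exposes a comparable routine; the matrix below is the Pascal example from this chapter:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

A = np.array([[1., 1., 1.], [1., 2., 3.], [1., 3., 6.]])

S = fractional_matrix_power(A, 0.5)        # matrix square root
F = fractional_matrix_power(A, 2.0 / 3.0)  # A^(2/3)
```

For this symmetric positive definite matrix the powers compose as expected: S squared recovers A, and F times A^(1/3) recovers A.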
Element-by-Element Powers
The .^ operator produces element-by-element powers. For example,

X = A.^2

X =
     1     1     1
     1     4     9
     1     9    36
Exponentials
The function sqrtm(A) computes A^(1/2) by a more accurate algorithm. The m in sqrtm distinguishes this function from sqrt(A), which, like A.^(1/2), does its job element-by-element.
A system of linear, constant coefficient, ordinary differential equations can be written

dx/dt = Ax

where x = x(t) is a vector of functions of t and A is a matrix independent of t. The solution can be expressed in terms of the matrix exponential

x(t) = e^(tA) x(0)

The function expm(A) computes the matrix exponential. An example is provided by the 3-by-3 coefficient matrix

A =
     0    -6    -1
     6     2   -16
    -5    20   -10

and the initial condition, x(0)

x0 =
     1
     1
     1
The matrix exponential is used to compute the solution, x(t), to the differential equation at 101 points on the interval 0 ≤ t ≤ 1 with
X = [];
for t = 0:.01:1
X = [X expm(t*A)*x0];
end
A three-dimensional phase plane plot obtained with

plot3(X(1,:),X(2,:),X(3,:),'-o')
shows the solution spiraling in towards the origin. This behavior is related to the eigenvalues of the coefficient matrix, which are discussed in the next section.
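A rough SciPy translation of the sampling loop above, with the same coefficient matrix and initial condition:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0., -6., -1.], [6., 2., -16.], [-5., 20., -10.]])
x0 = np.array([1., 1., 1.])

# x(t) = expm(t*A) x(0), sampled at 101 points on [0, 1]
ts = np.linspace(0.0, 1.0, 101)
X = np.column_stack([expm(t * A) @ x0 for t in ts])
```

Each column of X holds the solution at one time point; the first column is x(0) itself, since expm of the zero matrix is the identity.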
Eigenvalues
An eigenvalue and eigenvector of a square matrix A are a scalar λ and a nonzero vector v that satisfy

Av = λv
This section explains:
•Eigenvalue decomposition
•Problems associated with defective (not diagonalizable) matrices
•The use of Schur decomposition to avoid problems associated with eigenvalue decomposition
Eigenvalue Decomposition
With the eigenvalues on the diagonal of a diagonal matrix Λ and the corresponding eigenvectors forming the columns of a matrix V, you have

AV = VΛ

If V is nonsingular, this becomes the eigenvalue decomposition

A = VΛV^(-1)
A good example is provided by the coefficient matrix of the ordinary differential equation in the previous section:
A =
     0    -6    -1
     6     2   -16
    -5    20   -10

The statement
lambda = eig(A)
produces a column vector containing the eigenvalues. For this matrix, the eigenvalues are complex:
lambda =
  -3.0710
  -2.4645+17.6008i
  -2.4645-17.6008i
The real part of each of the eigenvalues is negative, so e^(λt) approaches zero as t increases. The nonzero imaginary part of two of the eigenvalues, ±ω, contributes the oscillatory component, sin(ωt), to the solution of the differential equation.
With two output arguments, eig computes the eigenvectors and stores the eigenvalues in a diagonal matrix:
[V,D] = eig(A)

V =
  -0.8326   0.2003 - 0.1394i   0.2003 + 0.1394i
  -0.3553  -0.2110 - 0.6447i  -0.2110 + 0.6447i
  -0.4248  -0.6930            -0.6930
D =
  -3.0710                  0                  0
        0   -2.4645+17.6008i                  0
        0                  0   -2.4645-17.6008i
The first eigenvector is real and the other two vectors are complex conjugates of each other. All three vectors are normalized to have Euclidean length, norm(v,2), equal to one.
The matrix V*D*inv(V), which can be written more succinctly as V*D/V, is within roundoff error of A. And, inv(V)*A*V, or V\A*V, is within roundoff error of D.
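The same round trip can be checked in NumPy; note that np.linalg.eig returns the eigenvalue vector first, not a diagonal matrix:

```python
import numpy as np

A = np.array([[0., -6., -1.], [6., 2., -16.], [-5., 20., -10.]])

lam, V = np.linalg.eig(A)      # eigenvalues, then eigenvector columns
D = np.diag(lam)               # diagonal matrix of eigenvalues
```

V D V^(-1) reproduces A to within roundoff, and each eigenvector column is normalized to unit Euclidean length.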
Defective Matrices
Some matrices do not have an eigenvector decomposition. These matrices are defective, or not diagonalizable. For example,
A = [ 6  12  19
     -9 -20 -33
      4   9  15 ]

For this matrix
[V,D] = eig(A)

produces

V =
   -0.4741   -0.4082   -0.4082
    0.8127    0.8165    0.8165
   -0.3386   -0.4082   -0.4082
D =
   -1.0000         0         0
         0    1.0000         0
         0         0    1.0000
There is a double eigenvalue at λ = 1. The second and third columns of V are the same. For this matrix, a full set of linearly independent eigenvectors does not exist.
The optional Symbolic Math Toolbox extends the capabilities of MATLAB by connecting to Maple, a powerful computer algebra system. One of the functions provided by the toolbox computes the Jordan Canonical Form. This is
appropriate for matrices like the example given here, which is 3-by-3 and has exactly known, integer elements:
[X,J] = jordan(A)

X =
   -1.7500    1.5000    2.7500
    3.0000   -3.0000   -3.0000
   -1.2500    1.5000    1.2500
J =
    -1     0     0
     0     1     1
     0     0     1
The Jordan Canonical Form is an important theoretical concept, but it is not a reliable computational tool for larger matrices, or for matrices whose elements are subject to roundoff errors and other uncertainties.
Schur Decomposition in MATLAB Matrix Computations
The MATLAB advanced matrix computations do not require eigenvalue decompositions. They are based, instead, on the Schur decomposition

A = U S U'

where U is an orthogonal matrix and S is a block upper triangular matrix with 1-by-1 and 2-by-2 blocks on the diagonal. The eigenvalues are revealed by the diagonal elements and blocks of S, while the columns of U provide a basis with much better numerical properties than a set of eigenvectors. The Schur decomposition of this defective example is
[U,S] = schur(A)

U =
   -0.4741    0.6648    0.5774
    0.8127    0.0782    0.5774
   -0.3386   -0.7430    0.5774
S =
   -1.0000   20.7846  -44.6948
         0    1.0000   -0.6096
         0         0    1.0000
The double eigenvalue is contained in the lower 2-by-2 block of S.
Note If A is complex, schur returns the complex Schur form, which is upper triangular with the eigenvalues of A on the diagonal.
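A comparable check is possible with SciPy's schur, which returns the quasi-triangular factor first, the opposite of MATLAB's [U,S] output order:

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[6., 12., 19.], [-9., -20., -33.], [4., 9., 15.]])

T, U = schur(A)     # real Schur form: T quasi-triangular, U orthogonal
```

Even for this defective matrix the decomposition is exact to roundoff, U is orthogonal, and the eigenvalues (-1 and the double 1) appear on the diagonal of T.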
Singular Value Decomposition
A singular value and corresponding singular vectors of a rectangular matrix A are a scalar σ and a pair of vectors u and v that satisfy

Av = σu
A'u = σv

With the singular values on the diagonal of a diagonal matrix Σ and the corresponding singular vectors forming the columns of two orthogonal matrices U and V, you have

AV = UΣ
A'U = VΣ

Since U and V are orthogonal, this becomes the singular value decomposition

A = UΣV'

The full singular value decomposition of an m-by-n matrix involves an m-by-m U, an m-by-n Σ, and an n-by-n V. In other words, U and V are both square and Σ is the same size as A. If A has many more rows than columns, the resulting U can be quite large, but most of its columns are multiplied by zeros in Σ. In this situation, the economy sized decomposition saves both time and storage by producing an m-by-n U, an n-by-n Σ, and the same V.
The eigenvalue decomposition is the appropriate tool for analyzing a matrix when it represents a mapping from a vector space into itself, as it does for an ordinary differential equation. On the other hand, the singular value
decomposition is the appropriate tool for analyzing a mapping from one vector space into another vector space, possibly with a different dimension. Most systems of simultaneous linear equations fall into this second category.
If A is square, symmetric, and positive definite, then its eigenvalue and singular value decompositions are the same. But, as A departs from symmetry and positive definiteness, the difference between the two decompositions increases. In particular, the singular value decomposition of a real matrix is always real, but the eigenvalue decomposition of a real, nonsymmetric matrix might be complex.
For the example matrix
A =
     9     4
     6     8
     2     7
the full singular value decomposition is

[U,S,V] = svd(A)

U =
   -0.6105    0.7174    0.3355
   -0.6646   -0.2336   -0.7098
   -0.4308   -0.6563    0.6194
S =
   14.9359         0
         0    5.1883
         0         0
V =
   -0.6925    0.7214
   -0.7214   -0.6925
You can verify that U*S*V' is equal to A to within roundoff error. For this small problem, the economy size decomposition is only slightly smaller:
[U,S,V] = svd(A,0)

U =
   -0.6105    0.7174
   -0.6646   -0.2336
   -0.4308   -0.6563
S =
   14.9359         0
         0    5.1883
V =
   -0.6925    0.7214
   -0.7214   -0.6925
Again, U*S*V' is equal to A to within roundoff error.
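An equivalent NumPy check for the same 3-by-2 matrix; np.linalg.svd returns the singular values as a vector and V already transposed:

```python
import numpy as np

A = np.array([[9., 4.], [6., 8.], [2., 7.]])

# full decomposition: U is 3-by-3, Vt is 2-by-2
U, s, Vt = np.linalg.svd(A)
# economy size: U is 3-by-2, the singular values are unchanged
Ue, se, Vte = np.linalg.svd(A, full_matrices=False)
```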
2
Polynomials and Interpolation
Polynomials (p. 2-2) Functions for standard polynomial operations. Additional topics include curve fitting and partial fraction expansion.
Interpolation (p. 2-9) Two- and multi-dimensional interpolation techniques, taking into account speed, memory, and smoothness considerations.
Selected Bibliography (p. 2-37) Published materials that support concepts implemented in "Polynomials and Interpolation."
Polynomials
This section provides
•A summary of the MATLAB polynomial functions
•Instructions for representing polynomials in MATLAB

It also describes the MATLAB polynomial functions that
•Calculate the roots of a polynomial
•Calculate the coefficients of the characteristic polynomial of a matrix
•Evaluate a polynomial at a specified value
•Convolve (multiply) and deconvolve (divide) polynomials
•Compute the derivative of a polynomial
•Fit a polynomial to a set of data
•Convert between partial fraction expansion and polynomial coefficients
Polynomial Function Summary
MATLAB provides functions for standard polynomial operations, such as polynomial roots, evaluation, and differentiation. In addition, there are functions for more advanced applications, such as curve fitting and partial fraction expansion.
The polynomial functions reside in the MATLAB polyfun directory.
Polynomial Function Summary

Function   Description
conv       Multiply polynomials.
deconv     Divide polynomials.
poly       Polynomial with specified roots.
polyder    Polynomial derivative.
polyfit    Polynomial curve fitting.
polyval    Polynomial evaluation.
polyvalm   Matrix polynomial evaluation.
residue    Partial-fraction expansion (residues).
roots      Find polynomial roots.
The Symbolic Math Toolbox contains additional specialized support for polynomial operations.
Representing Polynomials
MATLAB represents polynomials as row vectors containing coefficients ordered by descending powers. For example, consider the equation

p(x) = x^3 - 2x - 5

This is the celebrated example Wallis used when he first represented Newton's method to the French Academy. To enter this polynomial into MATLAB, use
p = [1 0 -2 -5];
Polynomial Roots
The roots function calculates the roots of a polynomial:
r = roots(p)

r =
   2.0946
  -1.0473 + 1.1359i
  -1.0473 - 1.1359i
By convention, MATLAB stores roots in column vectors. The function poly returns the polynomial coefficients:
p2 = poly(r)

p2 =
     1   8.8818e-16    -2    -5
poly and roots are inverse functions, up to ordering, scaling, and roundoff error.
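The same round trip can be sketched in NumPy with the analogous pair np.roots and np.poly:

```python
import numpy as np

p = np.array([1., 0., -2., -5.])   # x^3 - 2x - 5
r = np.roots(p)                    # one real root, one complex pair
p2 = np.poly(r)                    # back to coefficients, up to roundoff
```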
Characteristic Polynomials
The poly function also computes the coefficients of the characteristic polynomial of a matrix:
A = [1.2 3 -0.9; 5 1.75 6; 9 0 1];
poly(A)

ans =
    1.0000   -3.9500   -1.8500 -163.2750
The roots of this polynomial, computed with roots, are the characteristic roots, or eigenvalues, of the matrix A. (Use eig to compute the eigenvalues of a matrix directly.)
Polynomial Evaluation
The polyval function evaluates a polynomial at a specified value. To evaluate p at s = 5, use
polyval(p,5)

ans =
   110
It is also possible to evaluate a polynomial in a matrix sense. In this case p(s) = s^3 - 2s - 5 becomes p(X) = X^3 - 2X - 5I, where X is a square matrix and I is the identity matrix. For example, create a square matrix X and evaluate the polynomial p at X:
X = [2 4 5; -1 0 3; 7 1 5];
Y = polyvalm(p,X)

Y =
   377   179   439
   111    81   136
   490   253   639
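NumPy has no direct polyvalm, but the matrix-sense evaluation p(X) = X^3 - 2X - 5I can be written out explicitly; this sketch reproduces the result above:

```python
import numpy as np

X = np.array([[2., 4., 5.], [-1., 0., 3.], [7., 1., 5.]])

# evaluate p(X) = X^3 - 2X - 5I in the matrix sense
Y = np.linalg.matrix_power(X, 3) - 2 * X - 5 * np.eye(3)
```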
Convolution and Deconvolution
Polynomial multiplication and division correspond to the operations convolution and deconvolution. The functions conv and deconv implement these operations.
Consider the polynomials a(s) = s^2 + 2s + 3 and b(s) = 4s^2 + 5s + 6. To compute their product,
a = [1 2 3]; b = [4 5 6];
c = conv(a,b)

c =
     4    13    28    27    18
Use deconvolution to divide a(s) back out of the product:
[q,r] = deconv(c,a)

q =
     4     5     6
r =
     0     0     0     0     0
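The same product and division in NumPy, where convolution and polynomial division are likewise paired:

```python
import numpy as np

a = np.array([1., 2., 3.])
b = np.array([4., 5., 6.])

c = np.convolve(a, b)      # polynomial multiplication
q, r = np.polydiv(c, a)    # deconvolution: divide a back out
```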
Polynomial Derivatives
The polyder function computes the derivative of any polynomial. To obtain the derivative of the polynomial p = [1 0 -2 -5],
q = polyder(p)

q =
     3     0    -2
polyder also computes the derivative of the product or quotient of two polynomials. For example, create two polynomials a and b:
a = [1 3 5];
b = [2 4 6];
Calculate the derivative of the product a*b by calling polyder with a single output argument:
c = polyder(a,b)

c =
     8    30    56    38
Calculate the derivative of the quotient a/b by calling polyder with two output arguments:
[q,d] = polyder(a,b)

q =
    -2    -8    -2
d =
     4    16    40    48    36

q/d is the result of the operation.
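NumPy has no single call for these combined forms, but the same results follow from composing its polynomial primitives; the quotient case below applies the quotient rule (a'b - ab')/b^2 explicitly:

```python
import numpy as np

a = np.array([1., 3., 5.])
b = np.array([2., 4., 6.])

# derivative of the product a*b: multiply, then differentiate
c = np.polyder(np.polymul(a, b))

# derivative of the quotient a/b: numerator a'b - ab', denominator b^2
num = np.polysub(np.polymul(np.polyder(a), b),
                 np.polymul(a, np.polyder(b)))
den = np.polymul(b, b)
```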
Polynomial Curve Fitting
polyfit finds the coefficients of a polynomial that fits a set of data in a least-squares sense:
p = polyfit(x,y,n)
x and y are vectors containing the x and y data to be fitted, and n is the degree of the polynomial to return. For example, consider the x-y test data
x = [1 2 3 4 5]; y = [5.5 43.1 128 290.7 498.4];
A third degree polynomial that approximately fits the data is

p = polyfit(x,y,3)
p =
-0.1917 31.5821 -60.3262 35.3400
Compute the values of the polyfit estimate over a finer range, and plot the estimate over the real data values for comparison:
x2 = 1:.1:5;
y2 = polyval(p,x2);
plot(x,y,'o',x2,y2)
grid on
To use these functions in an application example, see “Data Fitting Using Linear Regression” in the MATLAB Data Analysis book.
Partial Fraction Expansion
residue finds the partial fraction expansion of the ratio of two polynomials.
This is particularly useful for applications that represent systems in transfer function form. For polynomials b and a, if there are no multiple roots,

b(s)/a(s) = r(1)/(s - p(1)) + r(2)/(s - p(2)) + ... + r(n)/(s - p(n)) + k(s)
where r is a column vector of residues, p is a column vector of pole locations, and k is a row vector of direct terms. Consider the transfer function

(-4s + 8)/(s^2 + 6s + 8)

b = [-4 8];
a = [1 6 8];
[r,p,k] = residue(b,a)

r =
   -12
     8
p =
    -4
    -2
k =
    []
Given three input arguments (r, p, and k), residue converts back to polynomial form:
[b2,a2] = residue(r,p,k)

b2 =
    -4     8
a2 =
     1     6     8
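SciPy provides an analogous pair of functions; the main difference from MATLAB is that the forward and inverse conversions are split into residue and invres rather than one three-argument call. A sketch for the same transfer function:

```python
import numpy as np
from scipy.signal import residue, invres

b = [-4., 8.]
a = [1., 6., 8.]

r, p, k = residue(b, a)      # partial fraction expansion
b2, a2 = invres(r, p, k)     # back to polynomial form
```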
Interpolation
Interpolation is a process for estimating values that lie between known data points. It has important applications in areas such as signal and image processing.
This section
•Provides a summary of the MATLAB interpolation functions
•Discusses one-dimensional interpolation
•Discusses two-dimensional interpolation
•Uses an example to compare nearest neighbor, bilinear, and bicubic interpolation methods
•Discusses interpolation of multidimensional data
•Discusses triangulation and interpolation of scattered data
Interpolation Function Summary
MATLAB provides a number of interpolation techniques that let you balance the smoothness of the data fit with speed of execution and memory usage.
The interpolation functions reside in the MATLAB polyfun directory.
Interpolation Function Summary

Function    Description
griddata    Data gridding and surface fitting.
griddata3   Data gridding and hypersurface fitting for three-dimensional data.
griddatan   Data gridding and hypersurface fitting (dimension >= 3).
interp1     One-dimensional interpolation (table lookup).
interp2     Two-dimensional interpolation (table lookup).
interp3     Three-dimensional interpolation (table lookup).
interpft    One-dimensional interpolation using FFT method.
interpn     N-dimensional interpolation (table lookup).
mkpp        Make a piecewise polynomial.
pchip       Piecewise Cubic Hermite Interpolating Polynomial (PCHIP).
ppval       Piecewise polynomial evaluation.
spline      Cubic spline data interpolation.
unmkpp      Piecewise polynomial details.
One-Dimensional Interpolation
There are two kinds of one-dimensional interpolation in MATLAB:
•Polynomial interpolation
•FFT-based interpolation
Polynomial Interpolation
The function interp1 performs one-dimensional interpolation, an important operation for data analysis and curve fitting. This function uses polynomial techniques, fitting the supplied data with polynomial functions between data points and evaluating the appropriate function at the desired interpolation points. Its most general form is
yi = interp1(x,y,xi,method)
y is a vector containing the values of a function, and x is a vector of the same length containing the points for which the values in y are given. xi is a vector containing the points at which to interpolate. method is an optional string specifying an interpolation method:
•Nearest neighbor interpolation (method = 'nearest'). This method sets the value of an interpolated point to the value of the nearest existing data point.
•Linear interpolation (method = 'linear'). This method fits a different linear function between each pair of existing data points, and returns the value of the relevant function at the points specified by xi. This is the default method.
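A minimal NumPy sketch of the 'linear' and 'nearest' methods (the sample data here are made up for illustration):

```python
import numpy as np

x = np.array([0., 1., 2., 3.])        # known sample points
y = np.array([0., 2., 1., 3.])        # function values at those points
xi = np.array([0.5, 1.5, 2.5])        # points at which to interpolate

# linear interpolation, as in interp1(x,y,xi,'linear')
yi = np.interp(xi, x, y)

# nearest neighbor: take the value at the closest sample point
idx = np.abs(xi[:, None] - x[None, :]).argmin(axis=1)
yn = y[idx]
```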