DoubleLeastSquares Class
Class DoubleLeastSquares computes the minimum-norm solution to a linear
system Ax = y.
Inheritance Hierarchy
System.Object → CenterSpace.NMath.Core.DoubleLeastSquares
Namespace: CenterSpace.NMath.Core
Assembly: NMath (in NMath.dll) Version: 7.4
Syntax

C#

    [SerializableAttribute]
    public class DoubleLeastSquares : ICloneable

VB

    <SerializableAttribute>
    Public Class DoubleLeastSquares
        Implements ICloneable

C++

    [SerializableAttribute]
    public ref class DoubleLeastSquares : ICloneable

F#

    [<SerializableAttribute>]
    type DoubleLeastSquares =
        class
            interface ICloneable
        end
The DoubleLeastSquares type exposes the following members.
Constructors

| Name | Description |
|---|---|
| DoubleLeastSquares(DoubleMatrix, DoubleVector) | Constructs a least squares solution for the given linear system Ax = y. |
| DoubleLeastSquares(DoubleMatrix, DoubleVector, Boolean) | Constructs a least squares solution for the given linear system Ax = y, optionally adding an intercept parameter to the model. |
| DoubleLeastSquares(DoubleMatrix, DoubleVector, Double) | Constructs a least squares solution for the given linear system Ax = y, using the specified tolerance to compute the effective rank. |
| DoubleLeastSquares(DoubleMatrix, DoubleVector, Boolean, Double) | Constructs a least squares solution for the given linear system Ax = y, optionally adding an intercept parameter and using the specified tolerance to compute the effective rank. |
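The following sketch shows one way to call each overload. It is illustrative only: the data values, and the DoubleMatrix/DoubleVector constructor forms used to build them, are assumptions rather than part of this page.

    using CenterSpace.NMath.Core;

    // Three observations of two independent variables (made-up values).
    var A = new DoubleMatrix(new double[,]
    {
        { 1.0, 2.0 },
        { 3.0, 4.0 },
        { 5.0, 7.0 },
    });
    var y = new DoubleVector(new double[] { 1.0, 2.0, 3.0 });

    // Minimum-norm least squares solution to Ax = y.
    var ls = new DoubleLeastSquares(A, y);

    // Optionally add an intercept parameter to the model.
    var ls2 = new DoubleLeastSquares(A, y, true);

    // Use a specific tolerance when computing the effective rank.
    var ls3 = new DoubleLeastSquares(A, y, 1e-10);

    // Intercept parameter and custom tolerance together.
    var ls4 = new DoubleLeastSquares(A, y, true, 1e-10);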
Properties

| Name | Description |
|---|---|
| Rank | Gets the effective rank of the matrix A. |
| Residuals | Gets the vector of residuals. If y is the right-hand side of the least squares equation Ax = y, and yhat = Ax where x is the computed least squares solution, then the residual vector r has components r[i] = y[i] - yhat[i]. |
| ResidualSumOfSquares | Gets the residual sum of squares, (y[0] - yhat[0])^2 + (y[1] - yhat[1])^2 + ... + (y[m-1] - yhat[m-1])^2, where yhat = Ax and x is the computed least squares solution. |
| Tolerance | Gets the tolerance used to compute the effective rank of the input matrix A. |
| X | Gets the least squares solution x for the least squares problem Ax = y. |
| Yhat | Gets the predicted value of y, computed as yHat = Ax, where x is the calculated solution to the least squares problem Ax = y. |
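A short, hedged sketch of reading the results back out, continuing from the `ls` object built in the constructor sketch above; the variables are declared with `var` because only the property names and meanings come from this page, not their exact types.

    // Continuing from the constructor sketch above.
    var x = ls.X;                          // least squares solution x
    var yHat = ls.Yhat;                    // predicted values, yHat = Ax
    var r = ls.Residuals;                  // residuals, r[i] = y[i] - yHat[i]
    var rss = ls.ResidualSumOfSquares;     // sum of squared residuals
    var rank = ls.Rank;                    // effective rank of A
    var tol = ls.Tolerance;                // tolerance used to compute the rank

    System.Console.WriteLine($"rank = {rank}, RSS = {rss}");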
Methods

| Name | Description |
|---|---|
| Clone | Creates a deep copy of this DoubleLeastSquares instance. |
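Since Clone is exposed through ICloneable, a cast back to the concrete type is typically needed; a minimal sketch, reusing the `ls` object from the earlier sketches.

    // Deep copy; cast back from ICloneable's object return type.
    var copy = (DoubleLeastSquares)ls.Clone();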
Remarks
In a least squares problem, we assume a linear model for a quantity y that depends on one or more independent variables a1, a2,...,an; that is, y = x0 + x1*a1 + ... + xn*an. x0 is called the intercept parameter.
The goal of a least squares problem is to solve for the best values of x0, x1,...,xn. Several observations of the independent variables ai are recorded, along with the corresponding values of the dependent variable y. If m observations are made, and for the ith observation the values of the independent variables are ai1, ai2,...,ain with corresponding dependent value yi, then we form the linear system Ax = y, where A = (aij) and y = (yi). The least squares solution is the value of x that minimizes ||Ax - y||.
Note that if the model contains a non-zero intercept parameter, then the first column of A is all ones. Class DoubleLeastSquares uses a complete orthogonal factorization of A to compute the solution. Matrix A is rectangular and may be rank deficient.
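To make the mapping from model to matrix concrete, here is a hedged end-to-end sketch that fits y = x0 + x1*a1 to four observations. The data and the DoubleMatrix/DoubleVector constructor forms are illustrative assumptions, as is the reading that passing true for the Boolean argument adds the intercept (i.e., the leading column of ones).

    using System;
    using CenterSpace.NMath.Core;

    // Four observations of one independent variable a1; the made-up data
    // satisfies y = 1 + 2*a1 exactly.
    var A = new DoubleMatrix(new double[,] { { 1.0 }, { 2.0 }, { 3.0 }, { 4.0 } });
    var y = new DoubleVector(new double[] { 3.0, 5.0, 7.0, 9.0 });

    // Request the intercept parameter; the class then solves a system whose
    // matrix has a first column of all ones.
    var ls = new DoubleLeastSquares(A, y, true);

    // ls.X should hold (x0, x1), approximately (1, 2) for this data, and the
    // residual sum of squares should be essentially zero.
    Console.WriteLine(ls.X);
    Console.WriteLine(ls.ResidualSumOfSquares);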
See Also