
FloatLeastSquares Class

Class FloatLeastSquares computes the minimum-norm least squares solution to a linear system Ax = y.
Inheritance Hierarchy
System.Object
  CenterSpace.NMath.Core.FloatLeastSquares

Namespace: CenterSpace.NMath.Core
Assembly: NMath (in NMath.dll) Version: 7.4
Syntax
[SerializableAttribute]
public class FloatLeastSquares : ICloneable

The FloatLeastSquares type exposes the following members.

Constructors
FloatLeastSquares(FloatMatrix, FloatVector)
  Constructs a least squares solution for the given linear system Ax = y.
FloatLeastSquares(FloatMatrix, FloatVector, Boolean)
  Constructs a least squares solution for the given linear system Ax = y, optionally adding an intercept parameter to the model.
FloatLeastSquares(FloatMatrix, FloatVector, Single)
  Constructs a least squares solution for the given linear system Ax = y, using the specified tolerance to compute the effective rank of A.
FloatLeastSquares(FloatMatrix, FloatVector, Boolean, Single)
  Constructs a least squares solution for the given linear system Ax = y, optionally adding an intercept parameter and using the specified tolerance to compute the effective rank of A.
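For example, here is a minimal sketch of constructing and querying a solver. The FloatMatrix and FloatVector constructors used here (from a two-dimensional float array and a float array, respectively) are assumed from elsewhere in the NMath library and are not documented on this page; the data values are made up.

using System;
using CenterSpace.NMath.Core;

class LeastSquaresExample
{
    static void Main()
    {
        // Three observations of two independent variables (an overdetermined system).
        // FloatMatrix/FloatVector array constructors assumed; see their class references.
        var A = new FloatMatrix( new float[,] { { 1.0f, 2.0f }, { 3.0f, 4.0f }, { 5.0f, 7.0f } } );
        var y = new FloatVector( new float[] { 6.0f, 5.0f, 4.0f } );

        // Minimum-norm least squares solution to Ax = y.
        var lsq = new FloatLeastSquares( A, y );

        Console.WriteLine( lsq.X );     // the solution vector x
        Console.WriteLine( lsq.Rank );  // effective rank of A
    }
}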
Properties
Rank
  Gets the effective rank of the matrix A.
Residuals
  Gets the vector of residuals. If y is the right-hand side of the least squares equation Ax = y, and yhat denotes the vector Ax where x is the computed least squares solution, then the vector of residuals r has ith component r[i] = y[i] - yhat[i].
ResidualSumOfSquares
  Gets the residual sum of squares, (y[0] - yhat[0])^2 + (y[1] - yhat[1])^2 + ... + (y[m-1] - yhat[m-1])^2, where yhat = Ax and x is the computed least squares solution.
Tolerance
  Gets the tolerance used to compute the effective rank of the input matrix A.
X
  Gets the least squares solution x to the least squares problem Ax = y.
Yhat
  Gets the predicted values of y, computed as yhat = Ax, where x is the computed solution to the least squares problem Ax = y.
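Continuing the sketch above, these properties can be read straight off the object; by the definitions given here, Residuals equals y - Yhat componentwise and ResidualSumOfSquares is the sum of the squared residuals. The FloatMatrix and FloatVector array constructors are again assumptions, as are the data values.

using System;
using CenterSpace.NMath.Core;

class ResidualExample
{
    static void Main()
    {
        // Same small overdetermined system as above (array constructors assumed).
        var A = new FloatMatrix( new float[,] { { 1.0f, 2.0f }, { 3.0f, 4.0f }, { 5.0f, 7.0f } } );
        var y = new FloatVector( new float[] { 6.0f, 5.0f, 4.0f } );
        var lsq = new FloatLeastSquares( A, y );

        FloatVector yhat = lsq.Yhat;            // predicted values, yhat = Ax
        FloatVector r = lsq.Residuals;          // r[i] = y[i] - yhat[i]
        var rss = lsq.ResidualSumOfSquares;     // sum of r[i]^2

        Console.WriteLine( yhat );
        Console.WriteLine( r );
        Console.WriteLine( rss );
    }
}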
Methods
Clone
  Creates a deep copy of this least squares solution.
Remarks
In a least squares problem, we assume a linear model for a quantity y that depends on one or more independent variables a1, a2,...,an; that is, y = x0 + x1*a1 + ... + xn*an. x0 is called the intercept parameter.
The goal of a least squares problem is to solve for the best values of x0, x1,...,xn. Several observations of the independent variables ai are recorded, along with the corresponding values of the dependent variable y. If m observations are performed, and for the ith observation we denote the values of the independent variables by ai1, ai2,...,ain and the corresponding value of the dependent variable by yi, then we form the linear system Ax = y, where A = (aij) and y = (yi). The least squares solution is the value of x that minimizes ||Ax - y||.
Note that if the model includes an intercept parameter, then the first column of A is all ones. Class FloatLeastSquares uses a complete orthogonal factorization of A to compute the solution. Matrix A is rectangular and may be rank deficient.
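As an illustration of the intercept option described above, here is a sketch of fitting the one-variable model y = x0 + x1*a1. It assumes the Boolean constructor argument means "add an intercept parameter" and, since the column of ones comes first in A, that the intercept x0 is the first element of X and the slope x1 the second; the array constructors and the data values are likewise assumptions.

using System;
using CenterSpace.NMath.Core;

class InterceptExample
{
    static void Main()
    {
        // Four observations of a single independent variable a1 (array constructors assumed).
        var A = new FloatMatrix( new float[,] { { 1.0f }, { 2.0f }, { 3.0f }, { 4.0f } } );
        var y = new FloatVector( new float[] { 3.1f, 4.9f, 7.2f, 8.8f } );

        // true adds the intercept parameter to the model, so A gains a leading column of ones.
        var lsq = new FloatLeastSquares( A, y, true );

        Console.WriteLine( "x0 (intercept) = " + lsq.X[0] );
        Console.WriteLine( "x1 (slope)     = " + lsq.X[1] );
    }
}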
See Also