<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>CenterSpace</title>
	<atom:link href="https://www.centerspace.net/feed" rel="self" type="application/rss+xml" />
	<link>https://www.centerspace.net/</link>
	<description>.NET numerical class libraries</description>
	<lastBuildDate>Tue, 07 Feb 2023 21:29:09 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.1.7</generator>
<site xmlns="com-wordpress:feed-additions:1">104092929</site>	<item>
		<title>Getting Started with NMath</title>
		<link>https://www.centerspace.net/getting-started-with-nmath</link>
					<comments>https://www.centerspace.net/getting-started-with-nmath#respond</comments>
		
		<dc:creator><![CDATA[Trevor Misfeldt]]></dc:creator>
		<pubDate>Sun, 10 Apr 2022 22:05:54 +0000</pubDate>
				<category><![CDATA[NMath]]></category>
		<guid isPermaLink="false">https://www.centerspace.net/?p=8235</guid>

					<description><![CDATA[<p>We are often asked how to get started with using NMath. This has gotten much simpler over the years. Here&#8217;s the quickest way to get going&#8230; Install Visual Studio Code. Create a folder. Run VS Code then open the folder with File&#124;Open Folder&#8230;. View&#124;Terminal to bring up a command-line. In the terminal window, type dotnet [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/getting-started-with-nmath">Getting Started with NMath</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>We are often asked how to get started with using NMath. This has gotten much simpler over the years. Here&#8217;s the quickest way to get going&#8230;</p>



<ol><li>Install <a href="https://code.visualstudio.com/">Visual Studio Code</a>. </li><li>Create a folder.</li><li>Run VS Code then open the folder with <em>File|Open Folder&#8230;</em>.</li><li><em>View|Terminal</em> to bring up a command-line.</li><li>In the terminal window, type <code>dotnet new console</code> to create a new project. You&#8217;ll see it creates a .csproj file and a Program.cs.</li><li><code>dotnet add package CenterSpace.NMath.Standard.Windows.X64</code> to add <em>NMath</em>.</li><li>Write some code in <em>Program.cs</em>.</li><li>Compile using <code>dotnet build</code>.</li><li>Run using <code>dotnet run</code>.</li></ol>
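<p>For step 7, a minimal <em>Program.cs</em> along these lines serves as a smoke test that the package is wired up correctly (a sketch only; it uses the <code>DoubleVector</code> class and <code>NMathFunctions</code> from the <code>CenterSpace.NMath.Core</code> namespace, and the vector values are arbitrary):</p>



<pre class="wp-block-code"><code>using System;
using CenterSpace.NMath.Core;

class Program
{
  static void Main()
  {
    // Build a small vector and compute a simple statistic on it.
    var v = new DoubleVector( 1.0, 2.0, 3.0, 4.0 );

    // Mean of 1, 2, 3, 4 is 2.5
    Console.WriteLine( NMathFunctions.Mean( v ) );
  }
}</code></pre>



<p>Then <code>dotnet build</code> followed by <code>dotnet run</code> should print the result.</p>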
<p>The post <a rel="nofollow" href="https://www.centerspace.net/getting-started-with-nmath">Getting Started with NMath</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/getting-started-with-nmath/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">8235</post-id>	</item>
		<item>
		<title>NMath and X86</title>
		<link>https://www.centerspace.net/nmath-and-x86</link>
					<comments>https://www.centerspace.net/nmath-and-x86#respond</comments>
		
		<dc:creator><![CDATA[Trevor Misfeldt]]></dc:creator>
		<pubDate>Sat, 19 Dec 2020 18:39:50 +0000</pubDate>
				<category><![CDATA[NMath]]></category>
		<guid isPermaLink="false">https://www.centerspace.net/?p=8149</guid>

					<description><![CDATA[<p>NMath customers overwhelmingly develop with 64-bit packages, so CenterSpace has decided to drop support for 32-bit operating systems with the release of NMath 7.2. However, we will continue to support x86 versions of NMath for the foreseeable future and their packages will continue to be available from NuGet. Release of NMath 7.2 NMath version [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/nmath-and-x86">NMath and X86</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p><strong>NMath </strong>customers overwhelmingly develop with 64-bit packages, so CenterSpace has decided to drop support for 32-bit operating systems with the release of <strong>NMath </strong>7.2.  However, we will continue to support x86 versions of <strong>NMath </strong>for the foreseeable future, and their packages will continue to be available from <a href="https://www.nuget.org/profiles/centerspace">NuGet</a>.</p>



<h2>Release of NMath 7.2</h2>



<p><strong>NMath </strong>version 7.2 has been released and includes the following packages:</p>



<ul><li>CenterSpace.NMath.Standard.Windows.X64</li><li>CenterSpace.NMath.Standard.Linux.X64</li><li>CenterSpace.NMath.Standard.OSX.X64</li><li>CenterSpace.NMath.Charting.Framework.Windows.X64</li></ul>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/nmath-and-x86">NMath and X86</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/nmath-and-x86/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">8149</post-id>	</item>
		<item>
		<title>Updated NMath API for LP and MIP related classes</title>
		<link>https://www.centerspace.net/updated-lp-and-mip-classes</link>
					<comments>https://www.centerspace.net/updated-lp-and-mip-classes#respond</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Wed, 11 Nov 2020 18:15:12 +0000</pubDate>
				<category><![CDATA[NMath]]></category>
		<category><![CDATA[Google OR-Tools]]></category>
		<category><![CDATA[Linear Programming]]></category>
		<category><![CDATA[LP Solver]]></category>
		<category><![CDATA[MIP Solver]]></category>
		<category><![CDATA[Mixed Integer Programming]]></category>
		<category><![CDATA[MS Solver Foundation]]></category>
		<guid isPermaLink="false">https://www.centerspace.net/?p=8126</guid>

					<description><![CDATA[<p>NMath is moving from Microsoft Solver Foundation to Google OR Tools. This change improves our LP and MIP Solver performance.</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/updated-lp-and-mip-classes">Updated NMath API for LP and MIP related classes</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>The Linear Programming (LP) and Mixed Integer Programming (MIP) classes in <strong>NMath </strong>are currently built upon the Microsoft Solver Foundation (MSF) library.  However, development and maintenance of that library stopped in January 2017 with the final release, version 3.1.0.  With the release of <strong>NMath 7.2</strong>, the LP and MIP solver classes will be built on the <a href="https://github.com/google/or-tools">Google OR-Tools</a> library (GORT).  With this change the API has been simplified, primarily by reducing the complexity of the algorithm parameterization.  Most importantly to many users, migrating to GORT will free <strong>NMath </strong>users from the MSF variable limits [1].  Finally, GORT is a modern .NET Standard library and therefore can be used with .NET Framework, .NET Core, and .NET 5 projects, whereas MS Solver Foundation is restricted to the .NET Framework.</p>



<figure class="wp-block-table"><table><tbody><tr><td><strong>MS Solver Foundation</strong></td><td><strong>Google OR-Tools</strong></td></tr><tr><td>Variable Limits</td><td>No Variable Limits</td></tr><tr><td>Requires .NET Framework</td><td>.NET Standard Library</td></tr><tr><td>Unsupported as of January 2017</td><td>Actively Supported</td></tr></tbody></table><figcaption>Key differences between MS Solver Foundation and Google OR-Tools</figcaption></figure>



<p>The following table lists the MS Solver Foundation classes deprecated beginning with the release of <strong>NMath </strong>7.2 on the left, and their Google OR-Tools replacements where needed.</p>



<div class="is-layout-flex wp-container-2 wp-block-columns">
<div class="is-layout-flow wp-block-column" style="flex-basis:100%">
<figure class="wp-block-table is-style-stripes"><table><tbody><tr><td>Deprecated</td><td>Replacement</td></tr><tr><td><code>PrimalSimplexSolver</code></td><td><code>PrimalSimplexSolverORTools</code></td></tr><tr><td><code>DualSimplexSolver</code></td><td><code>DualSimplexSolverORTools</code></td></tr><tr><td><code>SimplexSolverBase</code></td><td><code>SimpleSolverBaseORTools</code></td></tr><tr><td><code>SimplexSolverMixedIntParams</code></td><td><em><code>replacement not needed</code></em></td></tr><tr><td><code>SimplexSolverParams</code></td><td><em><code>replacement not needed</code></em></td></tr></tbody></table><figcaption>Deprecated LP and MIP classes in NMath 7.2</figcaption></figure>
</div>
</div>



<h2>API Changes</h2>



<p>The primary change between the deprecated MSF classes and the GORT classes is the reduced algorithm parameterization.  For example, in the toy MIP problem coded below, the only API changes are constructing the new GORT solver and dropping the parameter helper class.  Note that the entire problem setup and related classes are unchanged, making it a simple job to migrate to the new solver classes.</p>



<div class="is-layout-flex wp-container-4 wp-block-columns">
<div class="is-layout-flow wp-block-column">
<pre class="wp-block-code"><code>  // The problem setup is identical between the new and the deprecated API. 

  // minimize -3*x1 -2*x2 -x3
  var mip = new MixedIntegerLinearProgrammingProblem( new DoubleVector( -3.0, -2.0, -1.0 ) );

  // x1 + x2 + x3 &lt;= 7
  mip.AddUpperBoundConstraint( new DoubleVector( 1.0, 1.0, 1.0 ), 7.0 );

  // 4*x1 + 2*x2 +x3 = 12
  mip.AddEqualityConstraint( new DoubleVector( 4.0, 2.0, 1.0 ), 12.0 );

  // x1, x2 &gt;= 0
  mip.AddLowerBound( 0, 0.0 );
  mip.AddLowerBound( 1, 0.0 );

  // x3 is 0 or 1
  mip.AddBinaryConstraint( 2 );
 
  // Make a new Google OR-Tools solver and solve the MIP
  var solver = new PrimalSimplexSolverORTools();
  solver.Solve( mip, true ); // true -&gt; minimize

  // Solving the same MIP with the old, deprecated API required a parameter helper class
  var deprecatedSolver = new PrimalSimplexSolver();
  var solverParams = new PrimalSimplexSolverParams { Minimize = true };
  deprecatedSolver.Solve( mip, solverParams );</code></pre>



<p><strong>NMath </strong>7.2 is released on <a href="https://www.nuget.org/profiles/centerspace">NuGet</a> as usual.</p>
</div>
</div>



<p class="has-text-align-left"><sub>[1] MS Solver Foundation variable limits: NonzeroLimit = 100000, MipVariableLimit = 2000, MipRowLimit = 2000, MipNonzeroLimit = 10000, CspTermLimit = 25000, LP variable limit = 1000</sub></p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/updated-lp-and-mip-classes">Updated NMath API for LP and MIP related classes</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/updated-lp-and-mip-classes/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">8126</post-id>	</item>
		<item>
		<title>Chromatographic and Spectographic Data Analysis</title>
		<link>https://www.centerspace.net/chromatographic-and-spectographic-data-analysis</link>
					<comments>https://www.centerspace.net/chromatographic-and-spectographic-data-analysis#respond</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Wed, 24 Jun 2020 19:52:34 +0000</pubDate>
				<category><![CDATA[NMath]]></category>
		<category><![CDATA[chromatographic]]></category>
		<category><![CDATA[electrophretic]]></category>
		<category><![CDATA[mass spec]]></category>
		<category><![CDATA[peak finding]]></category>
		<category><![CDATA[peak modeling]]></category>
		<category><![CDATA[spectographic]]></category>
		<guid isPermaLink="false">https://www.centerspace.net/?p=7608</guid>

					<description><![CDATA[<p>Chromatographic and spectographic data analysis is a common application of the NMath class library and usually involves some or all of the following computing activities: Noise removal Baseline adjustment Peak finding Peak modeling Peak statistical analysis In this blog article we will discuss each of these activities and provide some NMath C# code on how [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/chromatographic-and-spectographic-data-analysis">Chromatographic and Spectographic Data Analysis</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Chromatographic and spectographic data analysis is a common application of the <strong>NMath</strong> class library and usually involves some or all of the following computing activities:</p>



<ul><li>Noise removal</li><li>Baseline adjustment</li><li>Peak finding</li><li>Peak modeling </li><li>Peak statistical analysis</li></ul>



<p>In this blog article we will discuss each of these activities and provide some NMath C# code showing how they may be accomplished.  This is a big subject, but the goal here is to get you started solving your spectographic data analysis problems, perhaps introduce you to a new technique, and finally to provide some helpful code snippets that can be expanded upon.</p>



<p>Throughout this article we will use the electrophoretic data set below in our code examples.  This data set contains four obvious peaks and one partially convolved peak, infilled with underlying white noise.</p>



<figure class="wp-block-image size-large"><img decoding="async" width="700" height="350" src="https://www.centerspace.net/wp-content/uploads/2020/06/Blog_RawData-1.gif" alt="" class="wp-image-7610"/><figcaption><em>Our example data set</em></figcaption></figure>



<h2>Noise Removal</h2>



<p>Chromatographic, spectographic, fMRI or EEG data, and many other types of time series are non-stationary.  This non-stationarity means that Fourier based filtering methods are ill suited to removing noise from these signals.  Fortunately we can effectively apply wavelet analysis, which does not depend on signal periodicity, to suppress the signal noise without altering the signal&#8217;s phase or magnitude.  Briefly, the discrete wavelet transform (DWT) can be used to recursively decompose the signal successively into <em>details </em>and <em>approximations </em>components.  From a filtering perspective the signal <em>details</em> contain the higher frequency parts and the <em>approximations</em> contain the lower frequency components.  As you&#8217;d expect the inverse DWT can elegantly reconstruct the original signal but, to meet our noise removal goals, the higher frequency noisy parts of the signal can be suppressed during the signal reconstruction and be effectively removed.  This technique is called <em>wavelet shrinkage</em> and is described in more detail in an earlier <a href="https://www.centerspace.net/wavelet-transforms">blog article</a> with references.</p>



<figure class="wp-block-image size-large"><img decoding="async" loading="lazy" width="700" height="351" src="https://www.centerspace.net/wp-content/uploads/2020/06/Blog_FilteredData-1.gif" alt="" class="wp-image-7613"/><figcaption>Signal noise removed using wavelet shrinkage.</figcaption></figure>



<p>These results can be refined, but even this starting point has successfully removed the noise without altering the position or general shape of the peaks. Choosing the right wavelet for wavelet shrinkage is done empirically with a representative data set at hand.</p>



<pre class="wp-block-code"><code>public DoubleVector SuppressNoise( DoubleVector DataSet  )  
{
  var wavelet = new DoubleWavelet( Wavelet.Wavelets.D4 );
  var dwt = new DoubleDWT( DataSet.ToArray(), wavelet );
  dwt.Decompose( 5 );
  double lambdaU = dwt.ComputeThreshold( FloatDWT.ThresholdMethod.Sure, 1 );

  dwt.ThresholdAllLevels( FloatDWT.ThresholdPolicy.Soft, new double&#91;] { lambdaU, lambdaU, lambdaU, lambdaU, lambdaU } );

  double&#91;] reconstructedData = dwt.Reconstruct();
  var filteredData= new DoubleVector( reconstructedData );
  return filteredData;
}</code></pre>



<p>With our example data set a Daubechies 4 wavelet worked well for noise removal.  Note that the same threshold was applied to all DWT decomposition levels; improved white noise suppression can be realized by adopting other thresholding strategies.</p>
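<p>For instance, the single shared threshold in the listing above could be replaced with a per-level threshold.  The sketch below is illustrative only; it assumes, as the call in the listing suggests, that the second argument of <code>ComputeThreshold</code> selects the decomposition level.</p>



<pre class="wp-block-code"><code>  // Sketch: compute a separate SURE threshold for each of the 5 decomposition
  // levels, rather than reusing the level-1 threshold everywhere.
  var lambdas = new double&#91;5];
  for ( int level = 1; level &lt;= 5; level++ )
  {
    lambdas&#91;level - 1] = dwt.ComputeThreshold( FloatDWT.ThresholdMethod.Sure, level );
  }
  dwt.ThresholdAllLevels( FloatDWT.ThresholdPolicy.Soft, lambdas );</code></pre>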



<h2>Baseline Adjustment</h2>



<p>Dozens of methods have been developed for modeling and removing a baseline from various types of spectral data.  The R package <a rel="noreferrer noopener" href="https://cran.r-project.org/web/packages/baseline/baseline.pdf" target="_blank"><code><em>baseline</em></code></a> has collected together a range of these techniques and can serve as a good starting point for exploration.  The techniques variously use regression, iterative erosion and dilation, spectral filtering, convex hulls, or partitioning, and create baseline models of lines, polynomials, or more complex curves that can then be subtracted from the raw data.  (Another R package, <a href="https://cran.r-project.org/web/packages/MALDIquant/MALDIquant.pdf">MALDIquant</a>, contains several more useful baseline removal techniques.)  Due to the wide variety of baseline removal techniques and the lack of standards across datasets, <strong>NMath </strong>does not natively offer any baseline removal algorithms.</p>



<h4>Example baseline modeling</h4>



<p>The C# example baseline modeling code below uses z-scores and iterative peak suppression to create a polynomial model of the baseline.  Peaks that extend beyond 1.5 z-scores are iteratively cut down by a quarter, and then a polynomial is fitted to this modified data set.  Once the baseline polynomial fits well and stops improving upon iterative suppression, the model is returned.</p>



<pre class="wp-block-code"><code>private PolynomialLeastSquares findBaseLine( DoubleVector x, DoubleVector y, int PolynomialDegree )
{
  var lsFit = new PolynomialLeastSquares( PolynomialDegree, x, y );
  var previousRSoS = 1.0;

  while ( lsFit.LeastSquaresSolution.ResidualSumOfSquares > 0.1 &amp;&amp; Math.Abs( previousRSoS - lsFit.LeastSquaresSolution.ResidualSumOfSquares ) > 0.00001 )
  {
    // Compute the Z-scores of the residuals and erode data beyond 1.5 standard deviations.
    var residues = lsFit.LeastSquaresSolution.Residuals;
    var Zscores = ( residues - NMathFunctions.Mean( residues ) ) / Math.Sqrt( NMathFunctions.Variance( residues ) );
    previousRSoS = lsFit.LeastSquaresSolution.ResidualSumOfSquares;

    y&#91;0] = Zscores&#91;0] > 1.5 ? 0 : y&#91;0];
    for ( int i = 1; i &lt; y.Length; i++ )
    {
      if ( Zscores&#91;i] > 1.5 )
      {
        y&#91;i] = y&#91;i-1] / 4.0;
      }
    }
    lsFit = new PolynomialLeastSquares( PolynomialDegree, x, y );
  }
  return lsFit;
}</code></pre>



<p>This algorithm has proven reliable for estimating both degree-1 and degree-2 polynomial baselines with electrophoretic data sets.  It is not designed to model the wandering baselines sometimes found in mass spec data.  The SNIP method [2] or Asymmetric Least Squares Smoothing [1] would be better suited for those data sets.</p>



<h2>Peak Finding</h2>



<p>Locating peaks in a data set usually involves, at some level, finding the zero crossings of the first derivative of the signal.  However, directly differentiating a signal amplifies noise, so more sophisticated indirect methods are usually employed.  Savitzky-Golay polynomials are commonly used to provide high quality smoothed derivatives of a noisy signal and are widely employed with chromatographic and other data sets (see this <a href="https://www.centerspace.net/savitzky-golay-smoothing">blog article</a> for more details).</p>



<div class="is-layout-flow wp-block-group"><div class="wp-block-group__inner-container">
<figure class="wp-block-image size-large is-resized"><img decoding="async" loading="lazy" src="https://www.centerspace.net/wp-content/uploads/2020/06/Blog_PeaksThresholded.gif" alt="" class="wp-image-7617" width="580" height="290"/><figcaption>Located peaks using Savitzky-Golay derivatives and thresholding</figcaption></figure>
</div></div>



<pre class="wp-block-code"><code>// Code snippet for locating peaks.
var sgFilter = new SavitzkyGolayFilter( 4, 4, 2 );
DoubleVector filteredData = sgFilter.Filter( DataSet );
var rbPeakFinder = new PeakFinderRuleBased( filteredData );
rbPeakFinder.AddRule( PeakFinderRuleBased.Rules.MinHeight, 0.005 );
List&lt;int> pkIndicies = rbPeakFinder.LocatePeakIndices();</code></pre>



<p>Without thresholding, many small noisy undulations are returned as peaks. Thresholding works well with this data set in separating the data peaks from the noise; however, peak modeling is sometimes necessary to separate data peaks from noise when both are present at similar scales.</p>



<h2>Peak Modeling and Statistics</h2>



<p>In addition to separating out false peaks, peaks are also modeled to compute various peak statistical measures such as FWHM, CV, area, or standard deviation.  The Gaussian is an excellent place to start for peak modeling, and for many applications this model is sufficient.  However, there are many other peak models, including the Lorentzian, Voigt, CSR [3], and variations on exponentially modified Gaussians (EMGs).  Many combinations, convolutions, and refinements of these models are gathered together and presented in a useful paper by <a rel="noreferrer noopener" href="https://www.centerspace.net/dimarco2001" target="_blank">Di Marco &amp; Bombi, 2001</a>.  Their paper focused on chromatographic peaks, but the models surveyed therein have wide application.</p>



<pre class="wp-block-code"><code>/// &lt;summary>
/// Gaussian Func&lt;> for trust region fitter.
/// p&#91;0] = mean, p&#91;1] = sigma, p&#91;2] = baseline offset
/// &lt;/summary>
private static Func&lt;DoubleVector, double, double> Gaussian = delegate ( DoubleVector p, double x )
{
   double a = ( 1.0 / ( p&#91;1] * Math.Sqrt( 2.0 * Math.PI ) ) );
   return a * Math.Exp( -1 * Math.Pow( x - p&#91;0], 2 ) / ( 2 * p&#91;1] * p&#91;1] ) ) + p&#91;2];
};</code></pre>



<p>Above is a <code>Func&lt;&gt;</code> representing a Gaussian that allows for some vertical offset.  The <code>TrustRegionMinimizer</code> in <strong>NMath </strong>is one of the most powerful and flexible methods for peak fitting.  Once the start and end indices of the peaks are determined, the following code snippet fits this Gaussian model to the peak&#8217;s data.</p>



<pre class="wp-block-code"><code>// The DoubleVector's xValues and yValues contain the peak's data.

// Pass in the model (above) to the function fitter ctor
var modelFitter = new BoundedOneVariableFunctionFitter&lt;TrustRegionMinimizer>( Gaussian );

// Gaussian for peak finding
var lowerBounds = new DoubleVector( new double&#91;] { xValues&#91;0], 1.0, -0.05 } );
var upperBounds = new DoubleVector( new double&#91;] { xValues&#91;xValues.Length - 1], 10.0, 0.05 } );
var initialGuess = new DoubleVector( new double&#91;] { 0.16, 6.0, 0.001 } );

// The lower and upper bounds aren't required, but are suggested.
var soln = modelFitter.Fit( xValues, yValues, initialGuess, lowerBounds, upperBounds );

// Fit statistics
var gof = new GoodnessOfFit( modelFitter, xValues, yValues, soln );</code></pre>



<p>The <code>GoodnessOfFit</code> class is a very useful tool for peak modeling.  In one line of code, it provides the F-statistics for the goodness of the fit of the model, along with confidence intervals for all of the model parameters.  These statistics are very useful for automatically separating noisy peaks from actual data peaks, and of course for determining whether the model is appropriate for the data at hand.</p>



<h4>Peak Area</h4>



<p>Computing peak areas or peak area proportions is essential in most applications of spectographic or electrophoretic data analysis.  This is a two-liner with <strong>NMath</strong>.</p>



<pre class="wp-block-code"><code>// The peak starts and ends at: startIndex, endIndex.
var integrator = new DiscreteDataIntegrator();
double area =  integrator.Integrate( DataSet&#91; new Slice( startIndex, endIndex - startIndex + 1) ] );</code></pre>



<p>The <code>DiscreteDataIntegrator</code> defaults to integrating with cubic spline segments.  Other discrete data integration methods available are trapezoidal and parabolic.</p>



<h2>Summary</h2>



<p>Contact us if you need help or have questions about analyzing your team&#8217;s data sets.  We can quickly help you get started solving your computing problems using <strong><a href="https://www.centerspace.net/product-overviews">NMath </a></strong>or go deeper and accelerate your team&#8217;s application development with consulting.</p>



<h4>Assorted References</h4>



<ol><li>Eilers, Paul &amp; Boelens, Hans. (2005). Baseline Correction with Asymmetric Least Squares Smoothing. Unpubl. Manuscr.</li><li>C.G. Ryan, E. Clayton, W.L. Griffin, S.H. Sie, and D.R. Cousens. 1988. SNIP, a statistics-sensitive background treatment for the quantitative analysis of pixe spectra in geoscience applications. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms, 34(3): 396-402.</li><li>García-Alvarez-Coque MC, Simó-Alfonso EF, Sanchis-Mallols JM, Baeza-Baeza JJ. A new mathematical function for describing electrophoretic peaks. <em>Electrophoresis</em>. 2005;26(11):2076-2085. doi:10.1002/elps.200410370</li></ol>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/chromatographic-and-spectographic-data-analysis">Chromatographic and Spectographic Data Analysis</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/chromatographic-and-spectographic-data-analysis/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">7608</post-id>	</item>
		<item>
		<title>Fitting the Weibull Distribution</title>
		<link>https://www.centerspace.net/fitting-the-weibull-distribution</link>
					<comments>https://www.centerspace.net/fitting-the-weibull-distribution#respond</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Wed, 24 Jul 2019 18:30:45 +0000</pubDate>
				<category><![CDATA[NMath]]></category>
		<category><![CDATA[Statistics]]></category>
		<category><![CDATA[.NET weibull]]></category>
		<category><![CDATA[C# weibull]]></category>
		<category><![CDATA[fitting the Weibull distribution]]></category>
		<category><![CDATA[Weibull]]></category>
		<category><![CDATA[weibull distribution]]></category>
		<guid isPermaLink="false">https://www.centerspace.net/?p=7434</guid>

					<description><![CDATA[<p>The Weibull distribution is widely used in reliability analysis, hazard analysis, for modeling part failure rates and in many other applications. The NMath library currently includes 19 probability distributions and has recently added a fitting function to the Weibull distribution class at the request of a customer. The Weibull probability distribution, over the random variable [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/fitting-the-weibull-distribution">Fitting the Weibull Distribution</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>The Weibull distribution is widely used in reliability analysis, hazard analysis, for modeling part failure rates, and in many other applications.  The <strong>NMath </strong>library currently includes 19 probability distributions and has recently added a fitting function to the Weibull distribution class at the request of a customer.</p>



<p>The Weibull probability distribution, over the random variable <em>x</em>, has two parameters:</p>



<ul><li>k &gt; 0, is the <em>shape parameter</em></li><li>λ &gt; 0, is the <em>scale parameter </em></li></ul>
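<p>For reference, the Weibull probability density function over x ≥ 0, written in terms of these two parameters, is:</p>



<pre class="wp-block-code"><code>f(x; k, λ) = (k / λ) * (x / λ)^(k-1) * exp( -(x / λ)^k )</code></pre>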



<p>Frequently, engineers have data that is known to be well modeled by the Weibull distribution, but the shape and scale parameters are unknown. In this case a data fitting strategy can be used; <strong>NMath </strong>now has a maximum likelihood Weibull fitting function, demonstrated in the code example below.</p>



<pre class="wp-block-code"><code>    public void WeibullFit()
    {
      double[] t = new double[] { 16, 34, 53, 75, 93, 120 };
      double initialShape = 2.2;
      double initialScale = 50.0;

      WeibullDistribution fittedDist = WeibullDistribution.Fit( t, initialScale, initialShape );

      // fittedDist.Shape parameter will equal 1.933
      // fittedDist.Scale parameter will equal 73.526
    }</code></pre>



<p>If the Weibull fitting algorithm fails, the returned distribution will be <code>null</code>; in this case, improving the initial parameter guesses can help. The <code>WeibullDistribution.Fit()</code> function accepts either arrays, as seen above, or <code>DoubleVector</code>s.</p>
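<p>For example, a defensive call using a <code>DoubleVector</code> might look like the following sketch, reusing the same data and initial guesses shown above.</p>



<pre class="wp-block-code"><code>var data = new DoubleVector( 16.0, 34.0, 53.0, 75.0, 93.0, 120.0 );

// Fit returns null on failure, so check before using the result.
WeibullDistribution fitted = WeibullDistribution.Fit( data, 50.0, 2.2 );
if ( fitted == null )
{
  // The fit failed; retry with different initial scale/shape guesses.
}</code></pre>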



<p>The latest version of <strong>NMath</strong>, including this maximum likelihood Weibull fit function, is available on the CenterSpace <a href="https://www.nuget.org/profiles/centerspace">NuGet</a> gallery.</p>



<p></p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/fitting-the-weibull-distribution">Fitting the Weibull Distribution</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/fitting-the-weibull-distribution/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">7434</post-id>	</item>
		<item>
		<title>NMath 7.0 &#038; the .NET Standard Library</title>
		<link>https://www.centerspace.net/nmath-on-net-standard-library</link>
					<comments>https://www.centerspace.net/nmath-on-net-standard-library#comments</comments>
		
		<dc:creator><![CDATA[Trevor Misfeldt]]></dc:creator>
		<pubDate>Mon, 27 May 2019 17:43:14 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[NMath]]></category>
		<category><![CDATA[.NET Core]]></category>
		<category><![CDATA[.NET Standard]]></category>
		<category><![CDATA[C# Math Libraries]]></category>
		<guid isPermaLink="false">https://www.centerspace.net/?p=7347</guid>

					<description><![CDATA[<p>In December, CenterSpace Software rolled out a major new release of NMath, version 7.0, built on the .NET Standard Library 2.0. The focus of this release has been to support the .NET Standard library, to further improve the ease of use of the NMath library, and to unify all CenterSpace libraries into one. CenterSpace now [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/nmath-on-net-standard-library">NMath 7.0 &#038; the .NET Standard Library</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In December, CenterSpace Software rolled out a major new release of <strong>NMath</strong>, version 7.0,<em> built on the .NET Standard Library 2.0</em>. The focus of this release has been to support the .NET Standard library, to further improve the ease of use of the <strong>NMath </strong>library, and to unify all CenterSpace libraries into one. CenterSpace now offers one unified, powerful, math library: <strong>NMath 7.0</strong>.</p>



<p>This version requires at least .NET Core 2.0 or .NET Framework 4.6.1.  Adding .NET Core support to <strong>NMath</strong> has been in the works for over a year and was done at the request of many of our active developers.</p>



<p>Future development will concentrate on the .NET Standard-based <strong>NMath 7.0</strong>. However, <strong>NMath 6.2</strong>, built on .NET 4.0 but not supporting the .NET Standard Library, will remain available for years to come.</p>



<p>Below is a list of major changes released in NMath 7.0:</p>



<ul>
<li>32-bit support has been dropped. Demand for 32-bit builds has been waning for years, and dropping them makes the library simpler to use and maintain.</li>
<li>GPU support has been dropped. As developers, we liked the automatic GPU offloading, but its technical advantages have dissipated as multi-core processors have improved. We believe it is no longer compelling for a general math library.</li>
<li>NMath Stats has been merged into NMath for ease of use.</li>
<li>In the summer of 2019, our pricing will be streamlined to reflect these changes: one price for a perpetual NMath license, and one price for annual NMath maintenance, which includes technical support and all upgrades available on NuGet. NMath Stats will no longer be sold separately.</li>
<li>We have merged the four NMath namespaces into one, <code>CenterSpace.NMath.Core</code>, to simplify development. Originally, CenterSpace had four NMath products, and the four namespaces <code>CenterSpace.NMath.Core,  CenterSpace.NMath.Matrix, CenterSpace.NMath.Stats,  CenterSpace.NMath.Analysis</code> reflected that history. We have left stubs so users won&#8217;t face any breaking changes.</li>
<li>We have dropped charting. The ecosystem is full of powerful visualization packages, and NMath&#8217;s three main data structures (vectors, matrices, and data frames) can all be used easily with other charting packages.</li>
<li>Some of our optimizations use Microsoft Solver Foundation. If you use these, you&#8217;ll need to be on the .NET Framework track rather than the .NET Core track.</li>
<li>We have dropped the installers. The compelling ease of NuGet has made them obsolete.</li>
</ul>



<hr class="wp-block-separator"/>



<p><a href="https://www.nuget.org/packages/CenterSpace.NMath.Standard.WindowsAndLinux.X64/">NMath 7.0 on Windows and Linux</a></p>



<p><a href="https://www.nuget.org/packages/CenterSpace.NMath.Standard.Windows.X64/">NMath 7.0 on Windows</a></p>



<p><a href="https://www.nuget.org/packages/CenterSpace.NMath.Standard.Linux.X64/">NMath 7.0 on Linux</a></p>



<p><a href="https://www.nuget.org/packages/CenterSpace.NMath.Standard.OSX.X64/">NMath 7.0 on OSX</a></p>



<p>Please try the new versions on NuGet. Feedback welcome as always.</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/nmath-on-net-standard-library">NMath 7.0 &#038; the .NET Standard Library</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/nmath-on-net-standard-library/feed</wfw:commentRss>
			<slash:comments>9</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">7347</post-id>	</item>
		<item>
		<title>NMath is Adding .NET Core Support and has Dropped Support of OSX and Linux86</title>
		<link>https://www.centerspace.net/nmath-adding-net-core-net-standard</link>
					<comments>https://www.centerspace.net/nmath-adding-net-core-net-standard#comments</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Tue, 13 Mar 2018 00:29:27 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[NMath]]></category>
		<category><![CDATA[NMath Premium]]></category>
		<category><![CDATA[.NET Core]]></category>
		<category><![CDATA[.NET Standard]]></category>
		<guid isPermaLink="false">https://www.centerspace.net/?p=7300</guid>

					<description><![CDATA[<p>CenterSpace will be adding support for both .NET Core and .NET Standard to NMath by the end of 2018.  NMath has also dropped support of both the OSX and Linux86 operating systems in NMath release 6.2.0.41.</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/nmath-adding-net-core-net-standard">NMath is Adding .NET Core Support and has Dropped Support of OSX and Linux86</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h3> Changes to Supported Operating Systems </h3>
<p>With the release of <strong>NMath</strong> 6.2.0.41, on March 10, 2018, <strong>NMath</strong> no longer supports OSX or the Linux x86 operating systems.  We are dropping support of these operating systems due to declining demand from our customers.  Please contact us with any concerns regarding this change.  This release is currently available on <a href="https://www.nuget.org/packages/CenterSpace.NMath.Premium/6.2.0.41">NuGet</a>.</p>
<p>Going forward, <strong>NMath</strong> and <strong>NMath Premium</strong> will continue to support 32-bit and 64-bit Windows, as well as 64-bit Linux.  </p>
<h3> Adding .NET Standard and .NET Core Support  </h3>
<p><em>By the end of 2018, NMath will support both .NET Core and .NET Standard</em>.  Support for both of these .NET standards has been increasingly requested by our customers.  If you are unfamiliar with these newest additions to the .NET world, the following briefly defines them.</p>
<ul>
<li> .NET Core: This is the latest .NET implementation. It’s open source and available for multiple OSes. With .NET Core, you can build cross-platform console apps and ASP.NET Core Web applications and cloud services.</li>
<li>.NET Standard: This is the set of fundamental APIs (commonly referred to as the base class library, or BCL) that all .NET implementations must implement. By targeting .NET Standard, you can build libraries that you can share across all your .NET apps, no matter which .NET implementation or OS they run on.</li>
</ul>
<p>For further reading on these .NET standards see this <a href="https://msdn.microsoft.com/en-us/magazine/mt842506.aspx">MSDN magazine</a> article for an introduction.</p>
<p>Please don&#8217;t hesitate to contact us in the comments below or via email with any questions regarding these changes to the CenterSpace .NET <strong>NMath</strong> library. </p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/nmath-adding-net-core-net-standard">NMath is Adding .NET Core Support and has Dropped Support of OSX and Linux86</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/nmath-adding-net-core-net-standard/feed</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">7300</post-id>	</item>
		<item>
		<title>Principal Components Regression: Part 3 – The NIPALS Algorithm</title>
		<link>https://www.centerspace.net/principal-components-regression</link>
					<comments>https://www.centerspace.net/principal-components-regression#respond</comments>
		
		<dc:creator><![CDATA[Steve Sneller]]></dc:creator>
		<pubDate>Tue, 29 Nov 2016 19:23:13 +0000</pubDate>
				<category><![CDATA[Statistics]]></category>
		<category><![CDATA[Theory]]></category>
		<category><![CDATA[NIPALS]]></category>
		<category><![CDATA[PCR]]></category>
		<category><![CDATA[PCR c#]]></category>
		<category><![CDATA[PCR estimator]]></category>
		<category><![CDATA[principal component analysis C#]]></category>
		<category><![CDATA[Principal Components Regression]]></category>
		<guid isPermaLink="false">http://www.centerspace.net/?p=7075</guid>

					<description><![CDATA[<p>In this final entry of our three-part series on Principal Components Regression (PCR) we describe the NIPALS algorithm used to compute the principal components, followed by a theoretical discussion, accessible to non-experts, of why the algorithm works. </p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/principal-components-regression">Principal Components Regression: Part 3 – The NIPALS Algorithm</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2>Principal Components Regression: Recap of Part 2</h2>



<p>Recall that the least squares solution <img src="https://s0.wp.com/latex.php?latex=%5Cbeta&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;beta" class="latex" /> to the multiple linear problem <img src="https://s0.wp.com/latex.php?latex=X+%5Cbeta+%3D+y&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X &#92;beta = y" class="latex" /> is given by<br>(1) <img decoding="async" src="https://s0.wp.com/latex.php?latex=%5Chat%7B%5Cbeta%7D+%3D+%28X%5ET+X%29%5E%7B-1%7D+X%5ET+y+&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;hat{&#92;beta} = (X^T X)^{-1} X^T y " class="latex" /></p>



<p>And that problems occurred finding <img src="https://s0.wp.com/latex.php?latex=%5Chat%7B%5Cbeta%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;hat{&#92;beta}" class="latex" /> when the matrix<br>(2) <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /></p>



<p>was close to being singular. The Principal Components Regression approach to addressing the problem is to replace <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /> in equation (1) with a better conditioned approximation. This approximation is formed by computing the eigenvalue decomposition for <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /> and retaining only the r largest eigenvalues. This yields the PCR solution:<br>(3) <img src="https://s0.wp.com/latex.php?latex=%5Chat%7B%5Cbeta%7D_r%3D+V_1+%5CLambda_1%5E%7B-1%7D+V_1%5ET+X%5ET+y&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;hat{&#92;beta}_r= V_1 &#92;Lambda_1^{-1} V_1^T X^T y" class="latex" /></p>



<p>where <img src="https://s0.wp.com/latex.php?latex=%5CLambda_1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;Lambda_1" class="latex" /> is an r x r diagonal matrix consisting of the r largest eigenvalues of <img src="https://s0.wp.com/latex.php?latex=X%5ET+X%2CV_1%3D%28v_1%2C...%2Cv_r+%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X,V_1=(v_1,...,v_r )" class="latex" /><br>are the corresponding eigenvectors of <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" />. In this piece we shall develop code for computing the PCR solution using the NMath libraries.</p>


<p>[eds: This blog article is the final entry of a three-part series on principal components regression. The first article in the series, &#8220;Principal Component Regression: Part 1 – The Magic of the SVD,&#8221; is <a href="https://www.centerspace.net/theoretical-motivation-behind-pcr">here</a>, and the second, &#8220;Principal Components Regression: Part 2 – The Problem With Linear Regression,&#8221; is <a href="https://www.centerspace.net/priniciple-components-regression-in-csharp">here</a>.]</p>



<h2>Review: Eigenvalues and Singular Values</h2>



<p>In order to develop the algorithm, I want to go back to the Singular Value Decomposition (SVD) of a matrix and its relationship to the eigenvalue decomposition. Recall that the SVD of a matrix X is given by<br>(4) <img src="https://s0.wp.com/latex.php?latex=X%3DU+%5CSigma+V%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X=U &#92;Sigma V^T" class="latex" /></p>



<p>Where U is the matrix of left singular vectors, V is the matrix of right singular vectors, and Σ is a diagonal matrix with positive entries equal to the singular values. The eigenvalue decomposition of <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /> is given by<br>(5) <img src="https://s0.wp.com/latex.php?latex=X%5ET+X%3DV+%5CLambda+V%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X=V &#92;Lambda V^T" class="latex" /></p>



<p>Where the eigenvalues of X are the diagonal entries of the diagonal matrix <img src="https://s0.wp.com/latex.php?latex=%5CLambda&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;Lambda" class="latex" /> and the columns of V are the eigenvectors of <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /> (V is also composed of the right singular vectors of X).<br>Recall further that if the matrix X has rank r then X can be written as<br>(6) <img src="https://s0.wp.com/latex.php?latex=X%3D+%5Csum_%7Bj%3D1%7D%5E%7Br%7D+%5Csigma_j+u_j+v_j%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X= &#92;sum_{j=1}^{r} &#92;sigma_j u_j v_j^T" class="latex" /></p>



<p>Where <img src="https://s0.wp.com/latex.php?latex=%5Csigma_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;sigma_j" class="latex" /> is the jth singular value (jth diagonal element of the diagonal matrix <img src="https://s0.wp.com/latex.php?latex=%5CSigma&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;Sigma" class="latex" />), <img src="https://s0.wp.com/latex.php?latex=u_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="u_j" class="latex" /> is the jth column of U, and <img src="https://s0.wp.com/latex.php?latex=v_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="v_j" class="latex" /> is the jth column of V. An equivalent way of expressing the PCR solution (3) to the least squares problem in terms of the SVD for X is that we’ve replaced X in the solution (1) by its rank r approximation shown in (6).</p>
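<p>The relationships in equations (4)-(6) are easy to verify numerically. The following is a small numpy sketch (not NMath code) confirming that the eigenvalues of X<sup>T</sup>X are the squared singular values of X, and building the rank-r sum of equation (6):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(X, full_matrices=False)  # equation (4): X = U diag(s) V^T
lam = np.linalg.eigvalsh(X.T @ X)                 # equation (5), ascending order

# Eigenvalues of X^T X are the squared singular values of X.
assert np.allclose(s**2, lam[::-1])

# Equation (6): rank-r approximation from the r largest singular triplets.
r = 2
X_r = sum(s[j] * np.outer(U[:, j], Vt[j]) for j in range(r))
```

<p>By the Eckart&#8211;Young theorem, this truncated sum is the best rank-r approximation of X in the spectral norm, with error equal to the (r+1)-th singular value.</p>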



<h2>Principal Components</h2>



<p>The subject here is Principal Components Regression (PCR), but we have yet to mention principal components. All we have talked about are eigenvalues, eigenvectors, singular values, and singular vectors. We’ve seen how singular stuff and eigen stuff are related, but what are principal components?<br>Principal component analysis applies when one considers statistical properties of data. In linear regression each column of our matrix X represents a variable and each row is a set of observed values for these variables. The variables being observed are random variables and as such have means and variances. If we center the matrix X by subtracting from each column of X its corresponding mean, then we’ve normalized the random variables being observed so that they have zero mean. Once the matrix X is centered in this way, the matrix <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /> is then proportional to the variance/covariance matrix of the variables. In this context the eigenvectors of <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /> are called the Principal Components of X. For completeness (and because they are used in discussing the PCR algorithm), we define two more terms.<br>In the SVD given by equation (4), define the matrix T by<br>(7) <img src="https://s0.wp.com/latex.php?latex=T%3DU%5CSigma&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T=U&#92;Sigma" class="latex" /></p>



<p>The matrix T is called the <em>scores</em> for X. Note that T is orthogonal, but not necessarily orthonormal. Substituting this into the SVD for X yields<br>(8) <img src="https://s0.wp.com/latex.php?latex=X%3DTV%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X=TV^T" class="latex" /></p>



<p>Using the fact that V is orthogonal we can also write<br>(9) <img src="https://s0.wp.com/latex.php?latex=T%3DXV&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T=XV" class="latex" /></p>



<p>We call the matrix V the <em>loadings</em>. The goal of our algorithm is to obtain the representation given by equation (8) for X, retaining only the most significant principal components (or eigenvalues, or singular values, depending on your point of view).</p>



<h2>Computing the Solution</h2>



<p>Using equation (3) to compute the solution to our problem involves forming the matrix <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" /> and obtaining its eigenvalue decomposition. This solution is fairly straightforward and has reasonable performance for moderately sized matrices X. However, in practice, the matrix X can be quite large, containing hundreds or even thousands of columns. In addition, many procedures for choosing the optimal number r of eigenvalues/singular values to retain involve computing the solution for many different values of r and comparing them. We therefore introduce an algorithm which computes only the eigenvalues we need.</p>



<h2>The NIPALS Algorithm</h2>



<p>We will be using an algorithm known as NIPALS (Nonlinear Iterative PArtial Least Squares). The NIPALS algorithm for the matrix X in our least squares problem and r, the number of retained principal components, proceeds as follows:<br>Initialize <img src="https://s0.wp.com/latex.php?latex=j%3D1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="j=1" class="latex" /> and <img src="https://s0.wp.com/latex.php?latex=X_1%3DX&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X_1=X" class="latex" />. Then iterate through the following steps –</p>



<ol><li>Choose <img src="https://s0.wp.com/latex.php?latex=t_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_j" class="latex" /> as any column of <img src="https://s0.wp.com/latex.php?latex=X_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X_j" class="latex" /></li><li>Let <img src="https://s0.wp.com/latex.php?latex=v_j+%3D+%28X_j%5ET+t_j%29+%2F+%5Cleft+%5C%7C+X_j%5ET+t_j+%5Cright+%5C%7C&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="v_j = (X_j^T t_j) / &#92;left &#92;| X_j^T t_j &#92;right &#92;|" class="latex" /></li><li>Let <img src="https://s0.wp.com/latex.php?latex=t_j%3D+X_j+v_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_j= X_j v_j" class="latex" /></li><li>If <img src="https://s0.wp.com/latex.php?latex=t_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_j" class="latex" /> is unchanged continue to step 5. Otherwise return to step 2.</li><li>Let <img src="https://s0.wp.com/latex.php?latex=X_%7Bj%2B1%7D%3D+X_j-+t_j+v_j%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X_{j+1}= X_j- t_j v_j^T" class="latex" /></li><li>If <img src="https://s0.wp.com/latex.php?latex=j%3Dr&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="j=r" class="latex" /> stop. Otherwise increment j and return to step 1.</li></ol>
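<p>The steps above can be sketched in code. The following is a library-neutral numpy version, not NMath's API; the function name, the tolerance, and the choice of starting column (step 1 allows any column, but the largest-norm one converges fastest) are our own:</p>

```python
import numpy as np

def nipals(X, r, tol=1e-10, max_iter=500):
    """NIPALS sketch: scores T (orthogonal columns) and loadings V
    (orthonormal columns) for the r leading principal components,
    so that X is approximated by T @ V.T."""
    Xj = X.astype(float).copy()
    n, p = Xj.shape
    T = np.zeros((n, r))
    V = np.zeros((p, r))
    for j in range(r):
        # Step 1: any column of Xj works; the largest-norm column converges fastest.
        t = Xj[:, np.argmax(np.linalg.norm(Xj, axis=0))].copy()
        for _ in range(max_iter):
            v = Xj.T @ t
            v /= np.linalg.norm(v)               # Step 2
            t_new = Xj @ v                       # Step 3
            if np.linalg.norm(t_new - t) < tol:  # Step 4: t unchanged
                t = t_new
                break
            t = t_new
        T[:, j], V[:, j] = t, v
        Xj -= np.outer(t, v)                     # Step 5: deflate
    return T, V
```

<p>The inner loop is the power iteration discussed below; the deflation in step 5 removes the component just found so that the next pass converges to the next-largest eigenpair.</p>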



<h2>Properties of the NIPALS Algorithm</h2>



<p>Let us see how the NIPALS algorithm produces principal components for us.<br>Let <img src="https://s0.wp.com/latex.php?latex=%5Clambda_j+%3D+%5Cleft+%5C%7C+X%5ET+t_j+%5Cright+%5C%7C&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;lambda_j = &#92;left &#92;| X^T t_j &#92;right &#92;|" class="latex" /> and write step (2) as<br>(10) <img src="https://s0.wp.com/latex.php?latex=X%5ET+t_j+%3D+%5Clambda_j+v_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T t_j = &#92;lambda_j v_j" class="latex" /></p>



<p>Setting <img src="https://s0.wp.com/latex.php?latex=t_j+%3D+X+v_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_j = X v_j" class="latex" /> in step 3 yields<br>(11) <img src="https://s0.wp.com/latex.php?latex=X%5ET+X+v_j%3D+%5Clambda_j+v_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X v_j= &#92;lambda_j v_j" class="latex" /></p>



<p>This equation is satisfied upon completion of the loop 2-4. This shows that <img src="https://s0.wp.com/latex.php?latex=%5Clambda_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;lambda_j" class="latex" /> and <img src="https://s0.wp.com/latex.php?latex=v_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="v_j" class="latex" /> are an eigenvalue and eigenvector of <img src="https://s0.wp.com/latex.php?latex=X%5ET+X&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X^T X" class="latex" />. The astute reader will note that the loop 2-4 is essentially the power method for computing a dominant eigenvalue and eigenvector for a linear transformation. Note further that using <img src="https://s0.wp.com/latex.php?latex=t_j%3DX+v_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_j=X v_j" class="latex" /> and equation (11) we obtain<br>(12)</p>



<ul><li><img src="https://s0.wp.com/latex.php?latex=t_j%5ET+t_j%3D+v_j%5ET+X%5ET+Xv_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_j^T t_j= v_j^T X^T Xv_j" class="latex" /></li><li><img src="https://s0.wp.com/latex.php?latex=%3D+v_j%5ET+%28X%5ET+Xv_j+%29&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="= v_j^T (X^T Xv_j )" class="latex" /></li><li><img src="https://s0.wp.com/latex.php?latex=%3D+%5Clambda_j+v_j%5ET+v_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="= &#92;lambda_j v_j^T v_j" class="latex" /></li><li><img src="https://s0.wp.com/latex.php?latex=%3D+%5Clambda_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="= &#92;lambda_j" class="latex" /></li></ul>



<p>After one iteration of the NIPALS algorithm we end up at step 5 with <img src="https://s0.wp.com/latex.php?latex=j%3D1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="j=1" class="latex" /> and<br>(13) <img src="https://s0.wp.com/latex.php?latex=X%3D+t_1+v_1%5ET%2B+X_2&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X= t_1 v_1^T+ X_2" class="latex" /></p>



<p>Note that <img src="https://s0.wp.com/latex.php?latex=t_1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_1" class="latex" /> and <img src="https://s0.wp.com/latex.php?latex=X_2%3DX+-+t_1+v_1%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X_2=X - t_1 v_1^T" class="latex" /><br>are orthogonal:<br>(14)</p>



<ul><li><img src="https://s0.wp.com/latex.php?latex=%28X-+t_1+v_1%5ET+%29%5ET+t_1+%3D+X%5ET+t_1-+v_1+t_1%5ET+t_1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="(X- t_1 v_1^T )^T t_1 = X^T t_1- v_1 t_1^T t_1" class="latex" /></li><li><img src="https://s0.wp.com/latex.php?latex=%3D+X%5ET+X+v_1-+v_1+%5Clambda_1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="= X^T X v_1- v_1 &#92;lambda_1" class="latex" /></li><li><img src="https://s0.wp.com/latex.php?latex=%3D0&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="=0" class="latex" /></li></ul>



<p>Furthermore, since <img src="https://s0.wp.com/latex.php?latex=t_2&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_2" class="latex" /> is initially picked as a column of <img src="https://s0.wp.com/latex.php?latex=X_2&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X_2" class="latex" />, it is orthogonal to <img src="https://s0.wp.com/latex.php?latex=t_1&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_1" class="latex" />. Upon completion of the algorithm we form the following two matrices:</p>



<ul><li><img src="https://s0.wp.com/latex.php?latex=T_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T_r" class="latex" />, whose columns are the vectors <img src="https://s0.wp.com/latex.php?latex=t_i&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_i" class="latex" />, <img src="https://s0.wp.com/latex.php?latex=T_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T_r" class="latex" /> is orthogonal</li><li><img src="https://s0.wp.com/latex.php?latex=V_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="V_r" class="latex" /> whose columns are the <img src="https://s0.wp.com/latex.php?latex=v_i&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="v_i" class="latex" />, <img src="https://s0.wp.com/latex.php?latex=V_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="V_r" class="latex" /> is orthonormal.</li></ul>



<p>(15) <img src="https://s0.wp.com/latex.php?latex=X_r%3DT_r+V_r%5ET&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X_r=T_r V_r^T" class="latex" /></p>



<p>If r is equal to the rank of X then, using the information obtained from equations (12) and (14), it follows that (15) yields the matrix decomposition (8). The idea behind Principal Components Regression is that after choosing an appropriate r the important features of X have been captured in <img src="https://s0.wp.com/latex.php?latex=T_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T_r" class="latex" />. We then perform a linear regression with <img src="https://s0.wp.com/latex.php?latex=T_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T_r" class="latex" /> in place of X,<br>(16) <img src="https://s0.wp.com/latex.php?latex=T_r+c%3Dy&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T_r c=y" class="latex" />.</p>



<p>The least squares solution then gives<br>(17) <img src="https://s0.wp.com/latex.php?latex=%5Chat%7Bc%7D%3D+%28T_r%5ET+T_r+%29%5E%7B-1%7D+T_r%5ET+y&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;hat{c}= (T_r^T T_r )^{-1} T_r^T y" class="latex" /></p>



<p>Note that since the columns of <img src="https://s0.wp.com/latex.php?latex=T_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T_r" class="latex" /> are orthogonal, the matrix <img src="https://s0.wp.com/latex.php?latex=T_r%5ET+T_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="T_r^T T_r" class="latex" /> is diagonal and therefore easy to invert. Also note that we left out the loadings matrix <img src="https://s0.wp.com/latex.php?latex=V_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="V_r" class="latex" />. This is because the scores <img src="https://s0.wp.com/latex.php?latex=t_j&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="t_j" class="latex" /> are linear combinations of the columns of X, and the PCR method amounts to singling out those combinations that are best for predicting y. Finally, using (9) and (16) we rewrite our linear regression problem <img src="https://s0.wp.com/latex.php?latex=X+%5Cbeta%3Dy&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="X &#92;beta=y" class="latex" /> as<br>(18) <img src="https://s0.wp.com/latex.php?latex=XV_r+c%3Dy&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="XV_r c=y" class="latex" /></p>



<p>From (18) we see that the PCR estimation <img src="https://s0.wp.com/latex.php?latex=%5Chat%7B%5Cbeta%7D_r&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;hat{&#92;beta}_r" class="latex" /> is given by<br>(19) <img src="https://s0.wp.com/latex.php?latex=%5Chat%7B%5Cbeta%7D_r%3D+V_r+%5Chat%7Bc%7D&#038;bg=ffffff&#038;fg=000&#038;s=0&#038;c=20201002" alt="&#92;hat{&#92;beta}_r= V_r &#92;hat{c}" class="latex" />.</p>
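<p>Equations (16)-(19) can be sketched end to end in numpy. This is not NMath code; for brevity the scores and loadings here come from the SVD (T = U&#931;, equation (7)) rather than from NIPALS, which gives the same subspace, and the function name is ours:</p>

```python
import numpy as np

def pcr_via_scores(X, y, r):
    """PCR estimate via equations (16)-(19), with scores taken from the SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    T_r = U[:, :r] * s[:r]     # scores: orthogonal columns (equation (7))
    V_r = Vt[:r].T             # loadings: orthonormal columns
    # T_r^T T_r = diag(s[:r]**2), so the solve in (17) is just a division.
    c_hat = (T_r.T @ y) / s[:r] ** 2
    return V_r @ c_hat         # equation (19)

rng = np.random.default_rng(3)
X = rng.standard_normal((30, 4))
y = rng.standard_normal(30)
beta_2 = pcr_via_scores(X, y, 2)   # keep the two leading components
```

<p>When all components are retained, this coincides with the ordinary least squares solution; smaller r trades a little bias for a better-conditioned estimate.</p>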



<p>Steve</p>



<p></p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/principal-components-regression">Principal Components Regression: Part 3 – The NIPALS Algorithm</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/principal-components-regression/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">7075</post-id>	</item>
		<item>
		<title>CenterSpace partner releases symbolic, computational library</title>
		<link>https://www.centerspace.net/centerspace-partner-releases-symbolic-computational-library</link>
					<comments>https://www.centerspace.net/centerspace-partner-releases-symbolic-computational-library#respond</comments>
		
		<dc:creator><![CDATA[Trevor Misfeldt]]></dc:creator>
		<pubDate>Sun, 23 Oct 2016 23:09:20 +0000</pubDate>
				<category><![CDATA[Visualization]]></category>
		<category><![CDATA[partner]]></category>
		<category><![CDATA[symbolic]]></category>
		<guid isPermaLink="false">http://www.centerspace.net/?p=7060</guid>

					<description><![CDATA[<p>Our partner, Scientific Research Software, has released a new product on top of the NMath libraries. NMath ANALYTICS allows users to use symbolic expressions for functions and then visualize resulting fits. Please check them out. &#8211; Trevor</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/centerspace-partner-releases-symbolic-computational-library">CenterSpace partner releases symbolic, computational library</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Our partner, Scientific Research Software, has released a new product on top of the NMath libraries. <a href="http://sergey-l-gladkiy.narod.ru/index/nmath-analytics/0-21" target="_blank">NMath ANALYTICS</a> allows users to define functions as symbolic expressions and then visualize the resulting fits. </p>
<p><img decoding="async" loading="lazy" src="https://www.centerspace.net/wp-content/uploads/2016/10/DataFitting-300x225.png" alt="Data Fitting Example" width="600" height="450" class="aligncenter size-medium wp-image-7061" srcset="https://www.centerspace.net/wp-content/uploads/2016/10/DataFitting-300x225.png 300w, https://www.centerspace.net/wp-content/uploads/2016/10/DataFitting-768x576.png 768w, https://www.centerspace.net/wp-content/uploads/2016/10/DataFitting-135x100.png 135w, https://www.centerspace.net/wp-content/uploads/2016/10/DataFitting.png 1000w" sizes="(max-width: 600px) 100vw, 600px" /></p>
<p>Please check them out.</p>
<p>&#8211; Trevor</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/centerspace-partner-releases-symbolic-computational-library">CenterSpace partner releases symbolic, computational library</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/centerspace-partner-releases-symbolic-computational-library/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">7060</post-id>	</item>
		<item>
		<title>Announcing NMath 6.2 and NMath Stats 4.2</title>
		<link>https://www.centerspace.net/announcing-nmath-6-2-and-nmath-stats-4-2</link>
					<comments>https://www.centerspace.net/announcing-nmath-6-2-and-nmath-stats-4-2#respond</comments>
		
		<dc:creator><![CDATA[Ken Baldwin]]></dc:creator>
		<pubDate>Mon, 07 Mar 2016 17:20:23 +0000</pubDate>
				<category><![CDATA[NMath]]></category>
		<category><![CDATA[NMath Premium]]></category>
		<category><![CDATA[NMath Stats]]></category>
		<category><![CDATA[C# Math Libraries]]></category>
		<category><![CDATA[C# NMath]]></category>
		<category><![CDATA[centerspace news]]></category>
		<category><![CDATA[VB NMath]]></category>
		<guid isPermaLink="false">http://www.centerspace.net/?p=6938</guid>

					<description><![CDATA[<p>CenterSpace Software is pleased to announce new versions of the NMath libraries: NMath 6.2 and NMath Stats 4.2.</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/announcing-nmath-6-2-and-nmath-stats-4-2">Announcing NMath 6.2 and NMath Stats 4.2</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>We&#8217;re pleased to announce new versions of the NMath libraries &#8211; NMath 6.2 and NMath Stats 4.2.</p>
<p>Added functionality includes:</p>
<ul>
<li>Upgraded to Intel MKL 11.3 Update 2 with resulting performance increases.</li>
<li>Updated NMath Premium GPU code to CUDA 7.5.</li>
<li>Added classes for performing <a href="/wavelet-transforms/">Discrete Wavelet Transforms (DWTs)</a> using the most common wavelet families, including Haar, Daubechies, Symlet, Best Localized, and Coiflet.</li>
<li>Added classes for solving stiff ordinary differential equations. The algorithm uses higher order methods and smaller step size when the solution varies rapidly.</li>
<li>Added classes for performing two-way ANOVA with unbalanced designs.</li>
<li>Added classes for performing Partial Least Squares Discriminant Analysis (PLS-DA), a variant of PLS used when the response variable is categorical.</li>
</ul>
<p>For more complete changelogs, see:</p>
<ul>
<li><a href="/doc/NMath/changelog.txt">NMath changelog</a></li>
<li>NMath Stats changelog</li>
</ul>
<p>Upgrades are provided free of charge to customers with current annual maintenance contracts. To request an upgrade, please visit our <a href="/upgrades/">upgrade page</a>, or contact <a href="mailto:sales@centerspace.net">sales@centerspace.net</a>. Maintenance contracts are available through our <a href="/order/">webstore</a>.</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/announcing-nmath-6-2-and-nmath-stats-4-2">Announcing NMath 6.2 and NMath Stats 4.2</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/announcing-nmath-6-2-and-nmath-stats-4-2/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">6938</post-id>	</item>
	</channel>
</rss>
