<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>peak finding Archives - CenterSpace</title>
	<atom:link href="https://www.centerspace.net/tag/peak-finding/feed" rel="self" type="application/rss+xml" />
	<link>https://www.centerspace.net/tag/peak-finding</link>
	<description>.NET numerical class libraries</description>
	<lastBuildDate>Wed, 15 Jul 2020 18:57:30 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.1.1</generator>
<site xmlns="com-wordpress:feed-additions:1">104092929</site>	<item>
		<title>Chromatographic and Spectographic Data Analysis</title>
		<link>https://www.centerspace.net/chromatographic-and-spectographic-data-analysis</link>
					<comments>https://www.centerspace.net/chromatographic-and-spectographic-data-analysis#respond</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Wed, 24 Jun 2020 19:52:34 +0000</pubDate>
				<category><![CDATA[NMath]]></category>
		<category><![CDATA[chromatographic]]></category>
		<category><![CDATA[electrophretic]]></category>
		<category><![CDATA[mass spec]]></category>
		<category><![CDATA[peak finding]]></category>
		<category><![CDATA[peak modeling]]></category>
		<category><![CDATA[spectographic]]></category>
		<guid isPermaLink="false">https://www.centerspace.net/?p=7608</guid>

					<description><![CDATA[<p>Chromatographic and spectrographic data analysis is a common application of the NMath class library and usually involves some or all of the following computing activities: noise removal, baseline adjustment, peak finding, peak modeling, and peak statistical analysis. In this blog article we will discuss each of these activities and provide some NMath C# code on how [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/chromatographic-and-spectographic-data-analysis">Chromatographic and Spectographic Data Analysis</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Chromatographic and spectrographic data analysis is a common application of the <strong>NMath</strong> class library and usually involves some or all of the following computing activities:</p>



<ul><li>Noise removal</li><li>Baseline adjustment</li><li>Peak finding</li><li>Peak modeling </li><li>Peak statistical analysis</li></ul>



<p>In this blog article we will discuss each of these activities and provide some NMath C# code showing how they may be accomplished.  This is a big subject, but the goal here is to get you started solving your spectrographic data analysis problems, perhaps introduce you to a new technique, and finally to provide some helpful code snippets that can be expanded upon.</p>



<p>Throughout this article we will be using the electrophoretic data set below in our code examples.  This data set contains four obvious peaks and one partially convolved peak, infilled with underlying white noise.</p>



<figure class="wp-block-image size-large"><img decoding="async" width="700" height="350" src="https://www.centerspace.net/wp-content/uploads/2020/06/Blog_RawData-1.gif" alt="" class="wp-image-7610"/><figcaption><em>Our example data set</em></figcaption></figure>



<h2>Noise Removal</h2>



<p>Chromatographic, spectrographic, fMRI or EEG data, and many other types of time series are non-stationary.  This non-stationarity means that Fourier-based filtering methods are ill suited to removing noise from these signals.  Fortunately we can effectively apply wavelet analysis, which does not depend on signal periodicity, to suppress the signal noise without altering the signal&#8217;s phase or magnitude.  Briefly, the discrete wavelet transform (DWT) can be used to recursively decompose the signal into <em>details </em>and <em>approximations </em>components.  From a filtering perspective the signal <em>details</em> contain the higher frequency parts and the <em>approximations</em> contain the lower frequency components.  As you&#8217;d expect, the inverse DWT can elegantly reconstruct the original signal; but, to meet our noise removal goals, the higher frequency noisy parts of the signal can be suppressed during the signal reconstruction and thus effectively removed.  This technique is called <em>wavelet shrinkage</em> and is described in more detail, with references, in an earlier <a href="https://www.centerspace.net/wavelet-transforms">blog article</a>.</p>



<figure class="wp-block-image size-large"><img decoding="async" loading="lazy" width="700" height="351" src="https://www.centerspace.net/wp-content/uploads/2020/06/Blog_FilteredData-1.gif" alt="" class="wp-image-7613"/><figcaption>Signal noise removed using wavelet shrinkage.</figcaption></figure>



<p>These results can be refined, but even this starting point has successfully removed the noise without altering the position or general shape of the peaks.  Choosing the right wavelet for wavelet shrinkage is done empirically, with a representative data set at hand.</p>



<pre class="wp-block-code"><code>public DoubleVector SuppressNoise( DoubleVector DataSet )
{
  // Decompose the signal to 5 levels with a Daubechies 4 wavelet.
  var wavelet = new DoubleWavelet( Wavelet.Wavelets.D4 );
  var dwt = new DoubleDWT( DataSet.ToArray(), wavelet );
  dwt.Decompose( 5 );

  // Compute a SURE threshold and apply it to all decomposition levels.
  double lambdaU = dwt.ComputeThreshold( FloatDWT.ThresholdMethod.Sure, 1 );
  dwt.ThresholdAllLevels( FloatDWT.ThresholdPolicy.Soft, new double&#91;] { lambdaU, lambdaU, lambdaU, lambdaU, lambdaU } );

  // Rebuild the signal from the thresholded coefficients.
  double&#91;] reconstructedData = dwt.Reconstruct();
  var filteredData = new DoubleVector( reconstructedData );
  return filteredData;
}</code></pre>



<p>With our example data set a Daubechies 4 wavelet worked well for noise removal.  Note that the same threshold was applied to all DWT decomposition levels; improved white noise suppression can be realized by adopting other thresholding strategies.</p>
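<p>As a sketch of one such strategy, a threshold can be computed separately for each decomposition level.  (This assumes, as the call above suggests, that the second argument of <code>ComputeThreshold</code> is the decomposition level; check the NMath documentation for your version.)</p>

<pre class="wp-block-code"><code>// Sketch: a separate SURE threshold per decomposition level.
var thresholds = new double&#91;5];
for ( int level = 1; level &lt;= 5; level++ )
{
  thresholds&#91;level - 1] = dwt.ComputeThreshold( FloatDWT.ThresholdMethod.Sure, level );
}
dwt.ThresholdAllLevels( FloatDWT.ThresholdPolicy.Soft, thresholds );</code></pre>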



<h2>Baseline Adjustment</h2>



<p>Dozens of methods have been developed for modeling and removing a baseline from various types of spectral data.  The R package <a rel="noreferrer noopener" href="https://cran.r-project.org/web/packages/baseline/baseline.pdf" target="_blank"><code><em>baseline</em></code></a> has collected together a range of these techniques and can serve as a good starting point for exploration.  The techniques variously use regression, iterative erosion and dilation, spectral filtering, convex hulls, or partitioning, and create baseline models of lines, polynomials, or more complex curves that can then be subtracted from the raw data.  (Another R package, <a href="https://cran.r-project.org/web/packages/MALDIquant/MALDIquant.pdf">MALDIquant</a>, contains several more useful baseline removal techniques.)  Due to the wide variety of baseline removal techniques and the lack of standards across datasets, <strong>NMath</strong> does not natively offer any baseline removal algorithms.</p>



<h4>Example baseline modeling</h4>



<p>The C# example baseline modeling code below uses z-scores and iterative peak suppression to create a polynomial model of the baseline.  Data points whose residuals extend beyond 1.5 z-scores are iteratively cut down to a quarter of the preceding value, and then a polynomial is fitted to this modified data set.  Once the baseline polynomial fits well and stops improving under iterative suppression, the model is returned.</p>



<pre class="wp-block-code"><code>private PolynomialLeastSquares findBaseLine( DoubleVector x, DoubleVector y, int PolynomialDegree )
{
  var lsFit = new PolynomialLeastSquares( PolynomialDegree, x, y );
  var previousRSoS = 1.0;

  while ( lsFit.LeastSquaresSolution.ResidualSumOfSquares > 0.1 &amp;&amp; Math.Abs( previousRSoS - lsFit.LeastSquaresSolution.ResidualSumOfSquares ) > 0.00001 )
  {
    // Compute the Z-scores of the residuals and erode data beyond 1.5 stds.
    var residuals = lsFit.LeastSquaresSolution.Residuals;
    var Zscores = ( residuals - NMathFunctions.Mean( residuals ) ) / Math.Sqrt( NMathFunctions.Variance( residuals ) );
    previousRSoS = lsFit.LeastSquaresSolution.ResidualSumOfSquares;

    y&#91;0] = Zscores&#91;0] > 1.5 ? 0 : y&#91;0];
    for ( int i = 1; i &lt; y.Length; i++ )
    {
      if ( Zscores&#91;i] > 1.5 )
      {
        y&#91;i] = y&#91;i - 1] / 4.0;
      }
    }
    lsFit = new PolynomialLeastSquares( PolynomialDegree, x, y );
  }
  return lsFit;
}</code></pre>



<p>This algorithm has proven reliable for estimating both degree 1 and degree 2 polynomial baselines with electrophoretic data sets.  It is not designed to model the wandering baselines sometimes found in mass spec data.  The SNIP [2] method or Asymmetric Least Squares Smoothing [1] would be better suited for those data sets.</p>



<h2>Peak Finding</h2>



<p>Locating peaks in a data set usually involves, at some level, finding the zero crossings of the first derivative of the signal.  However, directly differentiating a signal amplifies noise, so more sophisticated indirect methods are usually employed.  Savitzky-Golay polynomials are commonly used to provide high quality smoothed derivatives of a noisy signal and are widely employed with chromatographic and other data sets (see this <a href="https://www.centerspace.net/savitzky-golay-smoothing">blog article</a> for more details).  </p>



<div class="is-layout-flow wp-block-group"><div class="wp-block-group__inner-container">
<figure class="wp-block-image size-large is-resized"><img decoding="async" loading="lazy" src="https://www.centerspace.net/wp-content/uploads/2020/06/Blog_PeaksThresholded.gif" alt="" class="wp-image-7617" width="580" height="290"/><figcaption>Located peaks using Savitzky-Golay derivatives and thresholding</figcaption></figure>
</div></div>



<pre class="wp-block-code"><code>// Code snippet for locating peaks.
var sgFilter = new SavitzkyGolayFilter( 4, 4, 2 );
DoubleVector filteredData = sgFilter.Filter( DataSet );

// Keep only peaks at least 0.005 high.
var rbPeakFinder = new PeakFinderRuleBased( filteredData );
rbPeakFinder.AddRule( PeakFinderRuleBased.Rules.MinHeight, 0.005 );
List&lt;int> pkIndices = rbPeakFinder.LocatePeakIndices();</code></pre>



<p>Without thresholding, many small noisy undulations are returned as peaks.  Thresholding works well with this data set in separating the data peaks from the noise; however, peak modeling is sometimes necessary to separate data peaks from noise when both are present at similar scales.</p>



<h2>Peak Modeling and Statistics</h2>



<p>In addition to separating out false peaks, peaks are also modeled to compute various peak statistical measures such as FWHM, CV, area, or standard deviation.  The Gaussian is an excellent place to start for peak modeling, and for many applications this model is sufficient.  However there are many other peak models, including the Lorentzian, Voigt, and CSR [3] models, and variations on exponentially modified Gaussians (EMGs).  Many combinations, convolutions, and refinements of these models are gathered together and presented in a useful paper by <a rel="noreferrer noopener" href="https://www.centerspace.net/dimarco2001" target="_blank">Di Marco &amp; Bombi, 2001</a>.  Their paper focuses on chromatographic peaks, but the models surveyed therein have wide application.</p>



<pre class="wp-block-code"><code>/// &lt;summary>
/// Gaussian Func&lt;> for trust region fitter.
/// p&#91;0] = mean, p&#91;1] = sigma, p&#91;2] = baseline offset
/// &lt;/summary>
private static Func&lt;DoubleVector, double, double> Gaussian = delegate ( DoubleVector p, double x )
{
   double a = ( 1.0 / ( p&#91;1] * Math.Sqrt( 2.0 * Math.PI ) ) );
   return a * Math.Exp( -1 * Math.Pow( x - p&#91;0], 2 ) / ( 2 * p&#91;1] * p&#91;1] ) ) + p&#91;2];
};</code></pre>



<p>Above is a <code>Func&lt;&gt;</code> representing a Gaussian that allows for some vertical offset.  The <code>TrustRegionMinimizer</code> in <strong>NMath </strong>is one of the most powerful and flexible methods for peak fitting.  Once the start and end indices of the peaks are determined, the following code snippet fits this Gaussian model to the peak&#8217;s data.</p>



<pre class="wp-block-code"><code>// The DoubleVectors xValues and yValues contain the peak's data.

// Pass in the model (above) to the function fitter ctor
var modelFitter = new BoundedOneVariableFunctionFitter&lt;TrustRegionMinimizer>( Gaussian );

// Bounds on the Gaussian parameters: mean, sigma, baseline offset.
var lowerBounds = new DoubleVector( new double&#91;] { xValues&#91;0], 1.0, -0.05 } );
var upperBounds = new DoubleVector( new double&#91;] { xValues&#91;xValues.Length - 1], 10.0, 0.05 } );
var initialGuess = new DoubleVector( new double&#91;] { 0.16, 6.0, 0.001 } );

// The lower and upper bounds aren't required, but are suggested.
var soln = modelFitter.Fit( xValues, yValues, initialGuess, lowerBounds, upperBounds );

// Fit statistics
var gof = new GoodnessOfFit( modelFitter, xValues, yValues, soln );</code></pre>



<p>The <code>GoodnessOfFit</code> class is a very useful tool for peak modeling.  In one line of code it provides the F-statistic for the model fit along with confidence intervals for all of the model parameters.  These statistics are invaluable for automating the separation of noisy peaks from actual data peaks, and of course for determining whether the model is appropriate for the data at hand.</p>
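<p>For example, once the fit has converged, common peak statistics follow directly from the fitted parameters.  (A minimal sketch, assuming the returned <code>soln</code> vector is indexable and laid out as in the <code>Gaussian</code> delegate above: mean, sigma, offset.)</p>

<pre class="wp-block-code"><code>// Peak statistics from the fitted Gaussian parameters.
double mu    = soln&#91;0];   // peak center
double sigma = soln&#91;1];   // peak width
double fwhm  = 2.0 * Math.Sqrt( 2.0 * Math.Log( 2.0 ) ) * sigma;   // ~2.3548 * sigma
double cv    = sigma / mu;   // coefficient of variation</code></pre>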



<h4>Peak Area</h4>



<p>Computing peak areas or peak area proportions is essential in most applications of spectrographic or electrophoretic data analysis.  This is a two-liner with <strong>NMath</strong>.</p>



<pre class="wp-block-code"><code>// The peak starts and ends at: startIndex, endIndex.
var integrator = new DiscreteDataIntegrator();
double area = integrator.Integrate( DataSet&#91; new Slice( startIndex, endIndex - startIndex + 1 ) ] );</code></pre>



<p>The <code>DiscreteDataIntegrator</code> defaults to integrating with cubic spline segments.  Other discrete data integration methods available are trapezoidal and parabolic.</p>
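<p>As a cross-check on the library result, the trapezoidal rule is simple enough to write by hand.  (A minimal, library-free sketch assuming unit sample spacing, as in the slice above.)</p>

<pre class="wp-block-code"><code>// Trapezoidal area over data&#91;startIndex..endIndex], unit spacing.
static double TrapezoidArea( double&#91;] data, int startIndex, int endIndex )
{
  double area = 0.0;
  for ( int i = startIndex; i &lt; endIndex; i++ )
  {
    area += 0.5 * ( data&#91;i] + data&#91;i + 1] );
  }
  return area;
}</code></pre>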



<h2>Summary</h2>



<p>Contact us if you need help or have questions about analyzing your team&#8217;s data sets.  We can quickly help you get started solving your computing problems using <strong><a href="https://www.centerspace.net/product-overviews">NMath </a></strong>or go deeper and accelerate your team&#8217;s application development with consulting.</p>



<h4>Assorted References</h4>



<ol><li>Eilers, Paul &amp; Boelens, Hans. (2005). Baseline Correction with Asymmetric Least Squares Smoothing. Unpublished manuscript.</li><li>Ryan, C.G., Clayton, E., Griffin, W.L., Sie, S.H., &amp; Cousens, D.R. (1988). SNIP, a statistics-sensitive background treatment for the quantitative analysis of PIXE spectra in geoscience applications. <em>Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms</em>, 34(3): 396-402.</li><li>García-Alvarez-Coque, M.C., Simó-Alfonso, E.F., Sanchis-Mallols, J.M., &amp; Baeza-Baeza, J.J. (2005). A new mathematical function for describing electrophoretic peaks. <em>Electrophoresis</em>, 26(11): 2076-2085. doi:10.1002/elps.200410370</li></ol>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/chromatographic-and-spectographic-data-analysis">Chromatographic and Spectographic Data Analysis</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/chromatographic-and-spectographic-data-analysis/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">7608</post-id>	</item>
		<item>
		<title>Finding Peaks in Data with NMath</title>
		<link>https://www.centerspace.net/finding-peaks-in-data-with-nmath</link>
					<comments>https://www.centerspace.net/finding-peaks-in-data-with-nmath#respond</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Wed, 13 Oct 2010 17:06:58 +0000</pubDate>
				<category><![CDATA[NMath]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[nmath peak finding]]></category>
		<category><![CDATA[peak finding]]></category>
		<category><![CDATA[peak finding c#]]></category>
		<category><![CDATA[savitzy-golay]]></category>
		<guid isPermaLink="false">http://www.centerspace.net/blog/?p=2682</guid>

					<description><![CDATA[<p><img src="https://www.centerspace.net/blog/wp-content/uploads/2010/10/peaksexample.png" alt="Example peaks" title="Peaks Example"  class="excerpt" /><br />
Finding peaks in experimental data is a very common computing activity, and because of its intuitive nature there are many established techniques and literally dozens of heuristics built on top of those.  CenterSpace Software has jumped into this algorithmic fray with a new peak finding class based on smooth Savitzky-Golay polynomials.  If you are not familiar with Savitzky-Golay polynomial smoothing, take a look at our previous <a href="https://www.centerspace.net/blog/statistics/savtizky-golay-smoothing/">blog article</a>. </p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/finding-peaks-in-data-with-nmath">Finding Peaks in Data with NMath</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Finding peaks in experimental data is a very common computing activity, and because of its intuitive nature there are many established techniques and literally dozens of heuristics built on top of those.  CenterSpace Software has jumped into this algorithmic fray with a new peak finding class based on smooth Savitzky-Golay polynomials.  If you are not familiar with Savitzky-Golay polynomial smoothing, take a look at our previous <a href="/savitzky-golay-smoothing/">blog article</a>.  When used for peak finding, we simply report the zero crossings of the derivatives of the smoothing, locally fitted Savitzky-Golay polynomials.  This is a very fast peak finder because the Savitzky-Golay smoothing algorithm can be slightly altered to directly report the first derivatives, which, remarkably, can be done with a convolve operation.  Because this peak finder is based on Savitzky-Golay polynomials, it requires that the data be sampled at regular intervals.  </p>
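<p>To make the convolution idea concrete, here is a small library-free sketch.  For a 5-point window and a quadratic fit, the Savitzky-Golay first-derivative coefficients work out to (-2, -1, 0, 1, 2)/10, so a smoothed derivative is just a dot product slid along the signal.  (This illustrates the idea only; it is not the internals of the NMath class.)</p>
<pre lang="csharp">
// Smoothed first derivative via convolution with Savitzky-Golay
// derivative coefficients (5-point window, quadratic fit).
double[] coeffs = { -0.2, -0.1, 0.0, 0.1, 0.2 };
double[] deriv = new double[signal.Length - coeffs.Length + 1];
for (int i = 0; i < deriv.Length; i++)
{
  double sum = 0.0;
  for (int j = 0; j < coeffs.Length; j++)
  {
    sum += coeffs[j] * signal[i + j];
  }
  deriv[i] = sum;  // zero crossings of deriv mark candidate peaks
}
</pre>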
<h3> An Introductory Example </h3>
<p>Suppose we have the following sampled data.</p>
<p><figure id="attachment_2697" aria-describedby="caption-attachment-2697" style="width: 450px" class="wp-caption aligncenter"><a href="https://www.centerspace.net/blog/wp-content/uploads/2010/09/peakfinder-example-1.png"><img decoding="async" src="https://www.centerspace.net/blog/wp-content/uploads/2010/09/peakfinder-example-1.png" alt="graph of example data" title="simple graph" width="450"  class="size-full wp-image-2697" srcset="https://www.centerspace.net/wp-content/uploads/2010/09/peakfinder-example-1.png 557w, https://www.centerspace.net/wp-content/uploads/2010/09/peakfinder-example-1-300x131.png 300w" sizes="(max-width: 557px) 100vw, 557px" /></a><figcaption id="caption-attachment-2697" class="wp-caption-text">Simple data with a single peak.</figcaption></figure></p>
<p>The following C# code builds the test data and locates the single peak in this simple data set.</p>
<pre lang="csharp">
using CenterSpace.NMath.Core;

DoubleVector d = new DoubleVector(0,-1, 1.5, 2, 3, 4, 4.5, 4, 3, 2.2, 1.0, -3.0, 0);
PeakFinderSavitzkyGolay pfa = new PeakFinderSavitzkyGolay(d, 5, 4);
pfa.LocatePeaks();
</pre>
<p>The peak data reported back by the Savitzky-Golay peak finder can be either the <code>(x,y)</code> location of the peak, or the index lying on or preceding the found peak.</p>
<pre class="code">
Peak found at x:6.00, y:4.50
Peak found at index: 6
</pre>
<p>The peak finding <code>PeakFinderSavitzkyGolay</code> class requires three parameters: a data vector, the filter window width, and the degree of the smoothing polynomial.</p>
<h3> A More Complex Example </h3>
<p>To build a peak finder on top of this class for some domain-specific data, we need to understand the basic parameters that control which peaks are reported.  For this second example, let&#8217;s use the complex signal shown below, which contains a mixture of isolated peaks, adjacent peaks, and narrow and broad peaks.</p>
<p><figure id="attachment_2713" aria-describedby="caption-attachment-2713" style="width: 450px" class="wp-caption aligncenter"><a href="https://www.centerspace.net/blog/wp-content/uploads/2010/10/peaksexample.png"><img decoding="async" src="https://www.centerspace.net/blog/wp-content/uploads/2010/10/peaksexample.png" alt="Example peaks" title="Peaks Example" width="450"  class="size-full wp-image-2713" srcset="https://www.centerspace.net/wp-content/uploads/2010/10/peaksexample.png 572w, https://www.centerspace.net/wp-content/uploads/2010/10/peaksexample-300x192.png 300w" sizes="(max-width: 572px) 100vw, 572px" /></a><figcaption id="caption-attachment-2713" class="wp-caption-text">Peaks of various widths and heights</figcaption></figure></p>
<p>This signal also includes some very subtle peaks near <code>x = 23.5</code> and <code>x = 29</code>.  Applying the <code>PeakFinderSavitzkyGolay</code> class as shown below, all 15 peaks are found (the top of the first peak is off the scale in our image).</p>
<pre lang="csharp">
PeakFinderSavitzkyGolay pfa = new PeakFinderSavitzkyGolay(signal, 10, 5);
pfa.AbscissaInterval = 0.1;
pfa.LocatePeaks();
Console.WriteLine("Number of peaks found: " + pfa.NumberPeaks.ToString());
Console.WriteLine(String.Format("Peak found at x:{0:0.00}, y:{1:0.00}", pfa[4].X, pfa[4].Y));
</pre>
<p>The position of the fifth peak is reported to the console, using indexing notation, <code>pfa[4].X, pfa[4].Y</code>, on the peak finder object.</p>
<pre class="code">
Number of peaks found: 15
Peak found at x:9.03, y:0.35
</pre>
<p>By setting the <code>AbscissaInterval</code> property to the signal sample rate (in this example 0.1 seconds) the class can scale the x-axis according to your units, and supply the <code>(x, y)</code> positions of all found peaks.  If you only need the peak location down to the resolution of the sample interval, the peak finder will just report the index that either precedes or lies on the peak abscissa location.  This avoids an extra interpolation step to locate the inter-sample peak abscissa, and increases performance.</p>
<p>Now suppose that we want to eliminate all broad peaks; we can increase the peak finder&#8217;s selectivity.</p>
<pre lang="csharp">
PeakFinderSavitzkyGolay pfa = new PeakFinderSavitzkyGolay(signal, 10, 5);
pfa.AbscissaInterval = 0.1;
pfa.SlopeSelectivity = 0.003;
pfa.LocatePeaks();
</pre>
<p>The <code>SlopeSelectivity</code> property defaults to zero, causing the peak finder to report all found peaks.  By increasing the selectivity a hair to <code>0.003</code>, the peak finder no longer reports the last three peaks: the two subtle peaks are eliminated, along with the final broad peak near <code>x = 26</code>.  The slope selectivity is simply the slope of the smoothed first derivative of the Savitzky-Golay polynomial at each zero crossing &#8211; so as its value is increased, only the more pronounced peaks (with steeply diving smoothed first derivatives) are reported.</p>
<p>If we want to heavily filter the peaks and only see the peaks of the general trend line, we could increase the filter width dramatically from 10 to 80.</p>
<pre lang="csharp">
PeakFinderSavitzkyGolay pfa = new PeakFinderSavitzkyGolay(signal, 80, 5);
pfa.AbscissaInterval = 0.1;
pfa.SlopeSelectivity = 0.0;
pfa.LocatePeaks();
</pre>
<p>Using these parameters, we find only the four peaks that capture the low frequency variation of the signal above.</p>
<pre class="code">
Peak found at x:7.53, y:0.34
Peak found at x:14.27, y:0.26
Peak found at x:20.28, y:0.25
Peak found at x:26.76, y:0.24
</pre>
<p>Note that the y-values here correspond to the <em>smoothed, fitted polynomial</em>, not the actual data at the <code>x</code> value.  This demonstrates the ability of this peak finder to act as a low pass filter, which can be used to sort out the peaks of interest from higher frequency noise.  This is why Savitzky-Golay data smoothing is often used for baseline subtraction (for example in pre-processing mass spectrometry data before peak finding).</p>
<h3> Summary &#038; Performance </h3>
<p>The first example used a smoothing polynomial of degree 4, and the second example a polynomial of degree 5.  Experience shows that a polynomial degree between 3 and 7 will typically be best suited to smooth measurement data and correctly locate peaks.  However, there is no hard and fast rule, so feel free to experiment.  Having said that, the polynomial degree must always be strictly less than the window width or an exception will be thrown.  </p>
<pre class="code">
polynomial degree < window width
</pre>
<p>Because, after object construction, this peak finder boils down to a convolution, its performance is far better than that of many peak finders.  On my 2.8 GHz Intel i7 Quad Core, I can find peaks in 3 million data points in about 80 ms, and 30 million data points requires about 700 ms.  That would give us the ability to do peak finding in a real-time system running with a sample rate of ~43 MHz.  In a real system there would likely be other peak filtering overhead, but we would still be able to process data at a very high rate &#8211; suitable for most real-time data sampling applications. </p>
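<p>That rate follows from simple arithmetic on the timings above:</p>
<pre lang="csharp">
// Rough throughput estimate: 30 million samples in ~0.7 s.
double samplesPerSecond = 30000000 / 0.7;   // ~4.3e7 samples/s, i.e. ~43 MHz
</pre>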
<p>Happy Computing<br />
<em><br />
-Paul Shirkey<br />
</em></p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/finding-peaks-in-data-with-nmath">Finding Peaks in Data with NMath</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/finding-peaks-in-data-with-nmath/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2682</post-id>	</item>
	</channel>
</rss>
