<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>Paul Shirkey, Author at CenterSpace</title>
	<atom:link href="https://www.centerspace.net/author/shirkey/feed" rel="self" type="application/rss+xml" />
	<link>https://www.centerspace.net/author/shirkey</link>
	<description>.NET numerical class libraries</description>
	<lastBuildDate>Wed, 30 Dec 2020 18:16:50 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.1.1</generator>
<site xmlns="com-wordpress:feed-additions:1">104092929</site>	<item>
		<title>Updated NMath API for LP and MIP related classes</title>
		<link>https://www.centerspace.net/updated-lp-and-mip-classes</link>
					<comments>https://www.centerspace.net/updated-lp-and-mip-classes#respond</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Wed, 11 Nov 2020 18:15:12 +0000</pubDate>
				<category><![CDATA[NMath]]></category>
		<category><![CDATA[Google OR-Tools]]></category>
		<category><![CDATA[Linear Programming]]></category>
		<category><![CDATA[LP Solver]]></category>
		<category><![CDATA[MIP Solver]]></category>
		<category><![CDATA[Mixed Integer Programming]]></category>
		<category><![CDATA[MS Solver Foundation]]></category>
		<guid isPermaLink="false">https://www.centerspace.net/?p=8126</guid>

					<description><![CDATA[<p>NMath is moving from Microsoft Solver Foundation to Google OR Tools. This change improves our LP and MIP Solver performance.</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/updated-lp-and-mip-classes">Updated NMath API for LP and MIP related classes</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>The Linear Programming (LP) and Mixed Integer Programming (MIP) classes in <strong>NMath </strong>are currently built upon the Microsoft Solver Foundation (MSF) library.  However, development and maintenance of MSF stopped in January 2017 with its final release, 3.1.0.  With the release of <strong>NMath 7.2</strong>, the LP and MIP solver classes are built on the <a href="https://github.com/google/or-tools">Google OR-Tools</a> library (GORT).  With this change the API has been simplified, primarily by reducing the complexity of the algorithm parameterization.  Most importantly to many users, migrating to GORT frees <strong>NMath </strong>users from the MSF variable limits [1].  Finally, GORT is a modern .NET Standard library and can therefore be used with .NET Framework, .NET Core, and .NET 5 projects, whereas MS Solver Foundation is restricted to .NET Framework.</p>



<figure class="wp-block-table"><table><tbody><tr><td><strong>MS Solver Foundation</strong></td><td><strong>Google OR-Tools</strong></td></tr><tr><td>Variable Limits</td><td>No Variable Limits</td></tr><tr><td>Requires .NET Framework</td><td>.NET Standard Library</td></tr><tr><td>Unsupported as of January 2017</td><td>Actively Supported</td></tr></tbody></table><figcaption>Key differences between MS Solver Foundation and Google OR-Tools</figcaption></figure>



<p>Beginning with the release of <strong>NMath </strong>7.2, the following table lists the deprecated MS Solver Foundation classes on the left and, where one is needed, their Google OR-Tools replacements on the right.</p>



<div class="is-layout-flex wp-container-2 wp-block-columns">
<div class="is-layout-flow wp-block-column" style="flex-basis:100%">
<figure class="wp-block-table is-style-stripes"><table><tbody><tr><td>Deprecated</td><td>Replacement</td></tr><tr><td><code>PrimalSimplexSolver</code></td><td><code>PrimalSimplexSolverORTools</code></td></tr><tr><td><code>DualSimplexSolver</code></td><td><code>DualSimplexSolverORTools</code></td></tr><tr><td><code>SimplexSolverBase</code></td><td><code>SimpleSolverBaseORTools</code></td></tr><tr><td><code>SimplexSolverMixedIntParams</code></td><td><em><code>replacement not needed</code></em></td></tr><tr><td><code>SimplexSolverParams</code></td><td><em><code>replacement not needed</code></em></td></tr></tbody></table><figcaption>Deprecated LP and MIP classes in NMath 7.2</figcaption></figure>
</div>
</div>



<h2>API Changes</h2>



<p>The primary change between the deprecated MSF classes and the GORT classes is the reduced algorithm parameterization.  For example, in the toy MIP problem coded below, the only API changes are the construction of the new GORT solver and the elimination of the parameter helper class.  Note that the entire problem setup and its related classes are unchanged, making migration to the new solver classes a simple job.</p>



<div class="is-layout-flex wp-container-4 wp-block-columns">
<div class="is-layout-flow wp-block-column">
<pre class="wp-block-code"><code>  // The problem setup is identical between the new and the deprecated API. 

  // minimize -3*x1 -2*x2 -x3
  var mip = new MixedIntegerLinearProgrammingProblem( new DoubleVector( -3.0, -2.0, -1.0 ) );

  // x1 + x2 + x3 &lt;= 7
  mip.AddUpperBoundConstraint( new DoubleVector( 1.0, 1.0, 1.0 ), 7.0 );

  // 4*x1 + 2*x2 +x3 = 12
  mip.AddEqualityConstraint( new DoubleVector( 4.0, 2.0, 1.0 ), 12.0 );

  // x1, x2 &gt;= 0
  mip.AddLowerBound( 0, 0.0 );
  mip.AddLowerBound( 1, 0.0 );

  // x3 is 0 or 1
  mip.AddBinaryConstraint( 2 );
 
  // Make a new Google OR-Tools solver and solve the MIP
  var solver = new PrimalSimplexSolverORTools();
  solver.Solve( mip, true ); // true -&gt; minimize

  // Solving the same MIP with the old, deprecated API requires a parameter helper class
  var deprecatedSolver = new PrimalSimplexSolver();
  var solverParams = new PrimalSimplexSolverParams { Minimize = true };
  deprecatedSolver.Solve( mip, solverParams );</code></pre>



<p><strong>NMath </strong>7.2 is released on <a href="https://www.nuget.org/profiles/centerspace">Nuget </a>as usual.</p>
</div>
</div>
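As a quick sanity check, the toy MIP above is small enough to verify without any solver library at all.  The sketch below (Python, for illustration only; it is independent of the NMath API, and the grid resolution is our choice) uses the equality constraint to eliminate x2, then scans x1 for each value of the binary variable x3:

```python
# Brute-force check of the toy MIP, independent of any solver library.
# minimize -3*x1 - 2*x2 - x3
#   s.t.  x1 + x2 + x3 <= 7
#         4*x1 + 2*x2 + x3 == 12
#         x1, x2 >= 0,  x3 in {0, 1}
best_obj, best_point = float("inf"), None
for x3 in (0.0, 1.0):
    # Use the equality constraint to eliminate x2: x2 = (12 - 4*x1 - x3) / 2
    for i in range(0, 301):
        x1 = i / 100.0                      # grid over x1 in [0, 3]
        x2 = (12.0 - 4.0 * x1 - x3) / 2.0
        if x2 < 0.0 or x1 + x2 + x3 > 7.0:
            continue                        # infeasible point
        obj = -3.0 * x1 - 2.0 * x2 - x3
        if obj < best_obj:
            best_obj, best_point = obj, (x1, x2, x3)

print(best_obj, best_point)  # -12.0 (0.0, 6.0, 0.0)
```

Both solver APIs above should report this same optimal objective of -12.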



<p class="has-text-align-left"><sub>[1] MS Solver Foundation variable limits: NonzeroLimit = 100000, MipVariableLimit = 2000, MipRowLimit = 2000, MipNonzeroLimit = 10000, CspTermLimit = 25000, LP variable limit = 1000</sub></p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/updated-lp-and-mip-classes">Updated NMath API for LP and MIP related classes</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/updated-lp-and-mip-classes/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">8126</post-id>	</item>
		<item>
		<title>Chromatographic and Spectographic Data Analysis</title>
		<link>https://www.centerspace.net/chromatographic-and-spectographic-data-analysis</link>
					<comments>https://www.centerspace.net/chromatographic-and-spectographic-data-analysis#respond</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Wed, 24 Jun 2020 19:52:34 +0000</pubDate>
				<category><![CDATA[NMath]]></category>
		<category><![CDATA[chromatographic]]></category>
		<category><![CDATA[electrophretic]]></category>
		<category><![CDATA[mass spec]]></category>
		<category><![CDATA[peak finding]]></category>
		<category><![CDATA[peak modeling]]></category>
		<category><![CDATA[spectographic]]></category>
		<guid isPermaLink="false">https://www.centerspace.net/?p=7608</guid>

					<description><![CDATA[<p>Chromatographic and spectographic data analysis is a common application of the NMath class library and usually involves some or all of the following computing activities: Noise removal Baseline adjustment Peak finding Peak modeling Peak statistical analysis In this blog article we will discuss each of these activities and provide some NMath C# code on how [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/chromatographic-and-spectographic-data-analysis">Chromatographic and Spectographic Data Analysis</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Chromatographic and spectographic data analysis is a common application of the <strong>NMath</strong> class library and usually involves some or all of the following computing activities:</p>



<ul><li>Noise removal</li><li>Baseline adjustment</li><li>Peak finding</li><li>Peak modeling </li><li>Peak statistical analysis</li></ul>



<p>In this blog article we will discuss each of these activities and provide some NMath C# code showing how they may be accomplished.  This is a big subject, but the goal here is to get you started solving your spectographic data analysis problems, perhaps introduce you to a new technique, and provide some helpful code snippets that can be expanded upon.</p>



<p>Throughout this article we will use the electrophoretic data set below in our code examples.  This data set contains four obvious peaks and one partially convolved peak, infilled with underlying white noise.</p>



<figure class="wp-block-image size-large"><img decoding="async" width="700" height="350" src="https://www.centerspace.net/wp-content/uploads/2020/06/Blog_RawData-1.gif" alt="" class="wp-image-7610"/><figcaption><em>Our example data set</em></figcaption></figure>



<h2>Noise Removal</h2>



<p>Chromatographic, spectographic, fMRI or EEG data, and many other types of time series are non-stationary.  This non-stationarity means that Fourier-based filtering methods are ill suited to removing noise from these signals.  Fortunately we can effectively apply wavelet analysis, which does not depend on signal periodicity, to suppress the signal noise without altering the signal&#8217;s phase or magnitude.  Briefly, the discrete wavelet transform (DWT) recursively decomposes the signal into <em>detail </em>and <em>approximation </em>components.  From a filtering perspective, the <em>details</em> contain the higher frequency parts of the signal and the <em>approximations</em> contain the lower frequency components.  As you&#8217;d expect, the inverse DWT can elegantly reconstruct the original signal; but, to meet our noise removal goals, the higher frequency noisy parts of the signal can be suppressed during reconstruction and thereby effectively removed.  This technique is called <em>wavelet shrinkage</em> and is described in more detail, with references, in an earlier <a href="https://www.centerspace.net/wavelet-transforms">blog article</a>.</p>
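To make the detail/approximation split concrete, here is a one-level decomposition sketch using the simplest wavelet, the Haar (Python, for illustration only; the NMath DWT classes used in this article support Daubechies and other wavelet families):

```python
import math

def haar_step(signal):
    """One level of the Haar DWT: split an even-length signal into a
    low-frequency approximation part and a high-frequency detail part."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

# A slow ramp (the underlying "signal") plus alternating high-frequency noise
data = [i / 8.0 + (0.1 if i % 2 else -0.1) for i in range(16)]
approx, detail = haar_step(data)

# The detail coefficients isolate the alternating noise (they are all equal
# and noise-dominated), while the approximation follows the underlying ramp.
```

Zeroing or shrinking the noise-dominated detail coefficients before inverting the transform is exactly the wavelet shrinkage idea described above.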



<figure class="wp-block-image size-large"><img decoding="async" loading="lazy" width="700" height="351" src="https://www.centerspace.net/wp-content/uploads/2020/06/Blog_FilteredData-1.gif" alt="" class="wp-image-7613"/><figcaption>Signal noise removed using wavelet shrinkage.</figcaption></figure>



<p>These results can be refined, but even this starting point has successfully removed the noise without altering the position or general shape of the peaks.  Choosing the right wavelet for wavelet shrinkage is done empirically with a representative data set at hand.</p>



<pre class="wp-block-code"><code>public DoubleVector SuppressNoise( DoubleVector DataSet  )  
{
  var wavelet = new DoubleWavelet( Wavelet.Wavelets.D4 );
  var dwt = new DoubleDWT( DataSet.ToArray(), wavelet );
  dwt.Decompose( 5 );
  double lambdaU = dwt.ComputeThreshold( FloatDWT.ThresholdMethod.Sure, 1 );

  dwt.ThresholdAllLevels( FloatDWT.ThresholdPolicy.Soft, new double&#91;] { lambdaU, lambdaU, lambdaU, lambdaU, lambdaU } );

  double&#91;] reconstructedData = dwt.Reconstruct();
  var filteredData= new DoubleVector( reconstructedData );
  return filteredData;
}</code></pre>



<p>With our example data set, a Daubechies 4 wavelet worked well for noise removal.  Note that the same threshold was applied to all DWT decomposition levels; improved white noise suppression can be realized by adopting other thresholding strategies.</p>
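The soft-thresholding policy used above, and its hard-thresholding alternative, are simple enough to state directly.  A sketch in Python (illustration only; the coefficient values and threshold here are invented for the example):

```python
def soft_threshold(c, lam):
    # Shrink the coefficient toward zero by lam; zero it inside [-lam, lam].
    if c > lam:
        return c - lam
    if c < -lam:
        return c + lam
    return 0.0

def hard_threshold(c, lam):
    # Keep the coefficient unchanged if it exceeds lam in magnitude, else zero it.
    return c if abs(c) > lam else 0.0

coeffs = [0.02, -0.4, 1.25, -0.05, 2.0]
lam = 0.5
print([soft_threshold(c, lam) for c in coeffs])   # [0.0, 0.0, 0.75, 0.0, 1.5]
print([hard_threshold(c, lam) for c in coeffs])   # [0.0, 0.0, 1.25, 0.0, 2.0]
```

Soft thresholding shrinks the surviving coefficients as well as zeroing the small ones, which tends to produce smoother reconstructions than hard thresholding.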



<h2>Baseline Adjustment</h2>



<p>Dozens of methods have been developed for modeling and removing a baseline from various types of spectral data.  The R package <a rel="noreferrer noopener" href="https://cran.r-project.org/web/packages/baseline/baseline.pdf" target="_blank"><code><em>baseline</em></code></a> has collected together a range of these techniques and can serve as a good starting point for exploration.  The techniques variously use regression, iterative erosion and dilation, spectral filtering, convex hulls, or partitioning, and create baseline models of lines, polynomials, or more complex curves that can then be subtracted from the raw data.  (Another R package, <a href="https://cran.r-project.org/web/packages/MALDIquant/MALDIquant.pdf">MALDIquant</a>, contains several more useful baseline removal techniques.)  Due to the wide variety of baseline removal techniques and the lack of standards across datasets, <strong>NMath </strong>does not natively offer any baseline removal algorithms.</p>



<h4>Example baseline modeling</h4>



<p>The C# example baseline modeling code below uses z-scores and iterative peak suppression to create a polynomial model of the baseline.  Data points whose residuals extend beyond 1.5 z-scores are iteratively suppressed (each is replaced by a quarter of the preceding value), and a polynomial is then fitted to this modified data set.  Once the baseline polynomial fits well and stops improving under iterative suppression, the model is returned.</p>



<pre class="wp-block-code"><code>private PolynomialLeastSquares findBaseLine( DoubleVector x, DoubleVector y, int PolynomialDegree )
 {
   var lsFit = new PolynomialLeastSquares( PolynomialDegree, x, y );
   var previousRSoS = 1.0;

   while ( lsFit.LeastSquaresSolution.ResidualSumOfSquares > 0.1 &amp;&amp; Math.Abs( previousRSoS - lsFit.LeastSquaresSolution.ResidualSumOfSquares ) > 0.00001 )
   {
     // compute the Z-scores of the residues and erode data beyond 1.5 stds.
     var residues = lsFit.LeastSquaresSolution.Residuals;
     var Zscores = ( residues - NMathFunctions.Mean( residues ) ) / Math.Sqrt( NMathFunctions.Variance( residues ) );
     previousRSoS = lsFit.LeastSquaresSolution.ResidualSumOfSquares;

     y&#91;0] = Zscores&#91;0] > 1.5 ? 0 : y&#91;0];
     for ( int i = 1; i &lt; this.Length; i++ )
     {
       if ( Zscores&#91;i] > 1.5 )
       {
         y&#91;i] = y&#91;i-1] / 4.0;
       }
     }
     lsFit = new PolynomialLeastSquares( PolynomialDegree, x, y );
    }
    return lsFit;
 }</code></pre>



<p>This algorithm has proven reliable for estimating both degree-1 and degree-2 polynomial baselines with electrophoretic data sets.  It is not designed to model the wandering baselines sometimes found in mass spec data; the SNIP method [2] or asymmetric least squares smoothing [1] would be better suited to those data sets.</p>



<h2>Peak Finding</h2>



<p>Locating peaks in a data set usually involves, at some level, finding the zero crossings of the first derivative of the signal.  However, directly differentiating a signal amplifies noise, so more sophisticated indirect methods are usually employed.  Savitzky-Golay polynomials provide high quality smoothed derivatives of a noisy signal and are widely used with chromatographic and other data sets (see this <a href="https://www.centerspace.net/savitzky-golay-smoothing">blog article</a> for more details).</p>



<div class="is-layout-flow wp-block-group"><div class="wp-block-group__inner-container">
<figure class="wp-block-image size-large is-resized"><img decoding="async" loading="lazy" src="https://www.centerspace.net/wp-content/uploads/2020/06/Blog_PeaksThresholded.gif" alt="" class="wp-image-7617" width="580" height="290"/><figcaption>Located peaks using Savitzky-Golay derivatives and thresholding</figcaption></figure>
</div></div>



<pre class="wp-block-code"><code>// Code snippet for locating peaks.
var sgFilter = new SavitzkyGolayFilter( 4, 4, 2 );
DoubleVector filteredData = sgFilter.Filter( DataSet );
var rbPeakFinder = new PeakFinderRuleBased( filteredData );
rbPeakFinder.AddRule( PeakFinderRuleBased.Rules.MinHeight, 0.005 );
List&lt;int> pkIndicies = rbPeakFinder.LocatePeakIndices();</code></pre>



<p>Without thresholding, many small noisy undulations are returned as peaks.  Thresholding works well with this data set in separating the data peaks from the noise; however, when peaks and noise are present at similar scales, peak modeling is sometimes necessary to separate them.</p>
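The derivative-plus-zero-crossing idea described above can be sketched independently of any library.  The Python sketch below (illustration only; it uses a 5-point quadratic Savitzky-Golay derivative kernel rather than the 9-point filter configured in the NMath snippet, and the synthetic two-peak signal is ours):

```python
import math

def sg_first_derivative(s):
    """Savitzky-Golay first derivative: a quadratic fit over a 5-point
    window reduces to the fixed kernel (-2, -1, 0, 1, 2) / 10."""
    kernel = (-2.0, -1.0, 0.0, 1.0, 2.0)
    d = [0.0] * len(s)
    for i in range(2, len(s) - 2):
        d[i] = sum(kernel[j] * s[i + j - 2] for j in range(5)) / 10.0
    return d

# Synthetic data: two Gaussian peaks centered at indices 30 and 70
signal = [math.exp(-(x - 30.0) ** 2 / 20.0) + 0.5 * math.exp(-(x - 70.0) ** 2 / 50.0)
          for x in range(100)]
deriv = sg_first_derivative(signal)

# Peaks sit where the smoothed derivative crosses zero from + to -
peaks = [i for i in range(2, len(deriv) - 3)
         if deriv[i] > 0.0 and deriv[i + 1] <= 0.0]
print(peaks)  # indices near 30 and 70
```

On noisy data, the smoothing window and a minimum-height rule (as in the NMath snippet above) keep small undulations from registering as crossings.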



<h2>Peak Modeling and Statistics</h2>



<p>In addition to separating out false peaks, peaks are also modeled to compute various peak statistical measures such as FWHM, CV, area, or standard deviation.  The Gaussian is an excellent place to start for peak modeling, and for many applications this model is sufficient.  However, there are many other peak models, including the Lorentzian, Voigt, CSR [3], and variations on exponentially modified Gaussians (EMGs).  Many combinations, convolutions, and refinements of these models are gathered together in a useful paper by <a rel="noreferrer noopener" href="https://www.centerspace.net/dimarco2001" target="_blank">Di Marco &amp; Bombi, 2001</a>.  Their paper focuses on chromatographic peaks, but the models surveyed therein have wide application.</p>



<pre class="wp-block-code"><code>/// &lt;summary>
/// Gaussian Func&lt;> for trust region fitter.
/// p&#91;0] = mean, p&#91;1] = sigma, p&#91;2] = baseline offset
/// &lt;/summary>
private static Func&lt;DoubleVector, double, double> Gaussian = delegate ( DoubleVector p, double x )
{
   double a = ( 1.0 / ( p&#91;1] * Math.Sqrt( 2.0 * Math.PI ) ) );
   return a * Math.Exp( -1 * Math.Pow( x - p&#91;0], 2 ) / ( 2 * p&#91;1] * p&#91;1] ) ) + p&#91;2];
};</code></pre>



<p>Above is a <code>Func&lt;&gt;</code> representing a Gaussian that allows for some vertical offset.  The <code>TrustRegionMinimizer</code> in <strong>NMath </strong>is one of the most powerful and flexible methods for peak fitting.  Once the start and end indices of a peak are determined, the following code snippet fits this Gaussian model to the peak&#8217;s data.</p>



<pre class="wp-block-code"><code>// The DoubleVector's xValues and yValues contain the peak's data.

// Pass in the model (above) to the function fitter ctor
var modelFitter = new BoundedOneVariableFunctionFitter&lt;TrustRegionMinimizer>( Gaussian );

// Gaussian for peak finding
var lowerBounds = new DoubleVector( new double&#91;] { xValues&#91;0], 1.0, -0.05 } );
var upperBounds = new DoubleVector( new double&#91;] { xValues&#91;xValues.Length - 1], 10.0, 0.05 } );
var initialGuess = new DoubleVector( new double&#91;] { 0.16, 6.0, 0.001 } );

// The lower and upper bounds aren't required, but are suggested.
var soln = modelFitter.Fit( xValues, yValues, initialGuess, lowerBounds, upperBounds );

// Fit statistics
var gof = new GoodnessOfFit( modelFitter, xValues, yValues, soln );</code></pre>



<p>The <code>GoodnessOfFit</code> class is a very useful tool for peak modeling.  In one line of code it provides the F-statistic for the goodness of the model&#8217;s fit, along with confidence intervals for all of the model parameters.  These statistics are very useful for automatically sorting noisy peaks from actual data peaks, and of course for determining whether the model is appropriate for the data at hand.</p>
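Once the Gaussian parameters are fitted, statistics such as the FWHM follow directly from sigma via the closed form FWHM = 2&#8730;(2 ln 2)&#183;&#963; &#8776; 2.3548&#963;.  A Python sketch (independent of NMath; the parameter values here simply reuse the initial guess from the snippet above for illustration) cross-checks the closed form against a direct numerical measurement of the half-max width:

```python
import math

mu, sigma, offset = 0.16, 6.0, 0.001   # illustrative Gaussian parameters

def gaussian(x):
    # Same model as the fitted Func<>: normalized Gaussian plus a baseline offset
    a = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return a * math.exp(-((x - mu) ** 2) / (2.0 * sigma * sigma)) + offset

# Closed form: FWHM = 2 * sqrt(2 * ln 2) * sigma
fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

# Numerical check: width of the region where the peak, measured above its
# baseline, is at least half of its maximum height
peak_height = gaussian(mu) - offset
xs = [mu - 20.0 + i * 0.001 for i in range(40001)]
above = [x for x in xs if gaussian(x) - offset >= peak_height / 2.0]
measured = above[-1] - above[0]
# measured agrees with fwhm to within the grid resolution
```

The same pattern works for any fitted peak model: derive the statistic from the parameters where a closed form exists, and fall back to a numerical scan otherwise.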



<h4>Peak Area</h4>



<p>Computing peak areas or peak area proportions is essential in most applications of spectographic or electrophoretic data analysis.  This is a two-liner with <strong>NMath</strong>.</p>



<pre class="wp-block-code"><code>// The peak starts and ends at: startIndex, endIndex.
var integrator = new DiscreteDataIntegrator();
double area =  integrator.Integrate( DataSet&#91; new Slice( startIndex, endIndex - startIndex + 1) ] );</code></pre>



<p>The <code>DiscreteDataIntegrator</code> defaults to integrating with cubic spline segments.  Other discrete data integration methods available are trapezoidal and parabolic.</p>
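For reference, the simplest of these methods, the trapezoidal rule, is easy to state directly.  A Python sketch (illustration only, independent of the NMath API) with a sanity check against a known integral:

```python
import math

def trapezoid_area(y, dx=1.0):
    """Trapezoidal rule over evenly spaced discrete samples:
    dx * (sum of interior points + half of each endpoint)."""
    return dx * (sum(y) - 0.5 * (y[0] + y[-1]))

# Sanity check on samples of sin(x) over [0, pi], whose exact area is 2
n = 1000
dx = math.pi / n
samples = [math.sin(i * dx) for i in range(n + 1)]
area = trapezoid_area(samples, dx)
# area is within O(dx^2) of the exact value 2
```

Spline-based integration, the NMath default, converges faster on smooth peaks because it models curvature between samples rather than connecting them with straight lines.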



<h2>Summary</h2>



<p>Contact us if you need help or have questions about analyzing your team&#8217;s data sets.  We can quickly help you get started solving your computing problems using <strong><a href="https://www.centerspace.net/product-overviews">NMath </a></strong>or go deeper and accelerate your team&#8217;s application development with consulting.</p>



<h4>Assorted References</h4>



<ol><li>Eilers, Paul &amp; Boelens, Hans. (2005). Baseline Correction with Asymmetric Least Squares Smoothing. Unpubl. Manuscr.</li><li>C.G. Ryan, E. Clayton, W.L. Griffin, S.H. Sie, and D.R. Cousens. 1988. SNIP, a statistics-sensitive background treatment for the quantitative analysis of pixe spectra in geoscience applications. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms, 34(3): 396-402.</li><li>García-Alvarez-Coque MC, Simó-Alfonso EF, Sanchis-Mallols JM, Baeza-Baeza JJ. A new mathematical function for describing electrophoretic peaks. <em>Electrophoresis</em>. 2005;26(11):2076-2085. doi:10.1002/elps.200410370</li></ol>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/chromatographic-and-spectographic-data-analysis">Chromatographic and Spectographic Data Analysis</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/chromatographic-and-spectographic-data-analysis/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">7608</post-id>	</item>
		<item>
		<title>Fitting the Weibull Distribution</title>
		<link>https://www.centerspace.net/fitting-the-weibull-distribution</link>
					<comments>https://www.centerspace.net/fitting-the-weibull-distribution#respond</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Wed, 24 Jul 2019 18:30:45 +0000</pubDate>
				<category><![CDATA[NMath]]></category>
		<category><![CDATA[Statistics]]></category>
		<category><![CDATA[.NET weibull]]></category>
		<category><![CDATA[C# weibull]]></category>
		<category><![CDATA[fitting the Weibull distribution]]></category>
		<category><![CDATA[Weibull]]></category>
		<category><![CDATA[weibull distribution]]></category>
		<guid isPermaLink="false">https://www.centerspace.net/?p=7434</guid>

					<description><![CDATA[<p>The Weibull distribution is widely used in reliability analysis, hazard analysis, for modeling part failure rates and in many other applications. The NMath library currently includes 19 probability distributions and has recently added a fitting function to the Weibull distribution class at the request of a customer. The Weibull probability distribution, over the random variable [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/fitting-the-weibull-distribution">Fitting the Weibull Distribution</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>The Weibull distribution is widely used in reliability analysis, hazard analysis, modeling part failure rates, and many other applications.  The <strong>NMath </strong>library currently includes 19 probability distributions and has recently added a fitting function to the Weibull distribution class at the request of a customer.</p>



<p>The Weibull probability distribution, over the random variable <em>x</em>, has two parameters:</p>



<ul><li>k &gt; 0, the <em>shape parameter</em></li><li>λ &gt; 0, the <em>scale parameter</em></li></ul>



<p>Frequently engineers have data that is known to be well modeled by the Weibull distribution, but the shape and scale parameters are unknown.  In this case a data fitting strategy can be used; <strong>NMath </strong>now has a maximum likelihood Weibull fitting function, demonstrated in the code example below.</p>



<pre class="wp-block-code"><code>    public void WiebullFit()
    {
      double[] t = new double[] { 16, 34, 53, 75, 93, 120 };
      double initialShape = 2.2;
      double initialScale = 50.0;

      WeibullDistribution fittedDist = WeibullDistribution.Fit( t, initialScale, initialShape );

      // fittedDist.Shape parameter will equal 1.933
      // fittedDist.Scale parameter will equal 73.526
    }</code></pre>



<p>If the Weibull fitting algorithm fails, the returned distribution will be <code>null</code>; in this case, improving the initial parameter guesses can help.  The <code>WeibullDistribution.Fit()</code> function accepts either arrays, as seen above, or <code>DoubleVector</code>s.</p>
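The fitted values quoted above can be reproduced independently from the textbook maximum-likelihood equations for the Weibull distribution.  The Python sketch below (illustration only; this is not the NMath implementation, and the damping factor and iteration count are our choices) solves the shape equation by damped fixed-point iteration, then recovers the scale in closed form:

```python
import math

def weibull_mle(data, iterations=200):
    """Maximum-likelihood Weibull fit: solve the shape equation
    1/k = sum(t^k * ln t) / sum(t^k) - mean(ln t) by fixed-point
    iteration, then use the closed form for the scale."""
    n = len(data)
    logs = [math.log(t) for t in data]
    mean_log = sum(logs) / n
    k = 1.0
    for _ in range(iterations):
        tk = [t ** k for t in data]
        num = sum(tk[i] * logs[i] for i in range(n))
        k_new = 1.0 / (num / sum(tk) - mean_log)
        k = 0.5 * (k + k_new)          # damped update for stability
    scale = (sum(t ** k for t in data) / n) ** (1.0 / k)
    return k, scale

t = [16.0, 34.0, 53.0, 75.0, 93.0, 120.0]
shape, scale = weibull_mle(t)
# shape is near 1.933 and scale near 73.5, matching the fitted values above
```

Agreement with the `WeibullDistribution.Fit()` results gives a useful independent check on the data pipeline feeding the fit.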



<p>The latest version of <strong>NMath</strong>, including this maximum likelihood Weibull fit function, is available on the CenterSpace <a href="https://www.nuget.org/profiles/centerspace">NuGet</a> gallery.</p>



<p>The post <a rel="nofollow" href="https://www.centerspace.net/fitting-the-weibull-distribution">Fitting the Weibull Distribution</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/fitting-the-weibull-distribution/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">7434</post-id>	</item>
		<item>
		<title>NMath is Adding .NET Core Support and has Dropped Support of OSX and Linux86</title>
		<link>https://www.centerspace.net/nmath-adding-net-core-net-standard</link>
					<comments>https://www.centerspace.net/nmath-adding-net-core-net-standard#comments</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Tue, 13 Mar 2018 00:29:27 +0000</pubDate>
				<category><![CDATA[.NET]]></category>
		<category><![CDATA[NMath]]></category>
		<category><![CDATA[NMath Premium]]></category>
		<category><![CDATA[.NET Core]]></category>
		<category><![CDATA[.NET Standard]]></category>
		<guid isPermaLink="false">https://www.centerspace.net/?p=7300</guid>

					<description><![CDATA[<p>CenterSpace will be adding support for both .NET Core and .NET Standard to NMath by the end of 2018.  NMath has also dropped support of both the OSX and Linux86 operating systems in NMath release 6.2.0.41.</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/nmath-adding-net-core-net-standard">NMath is Adding .NET Core Support and has Dropped Support of OSX and Linux86</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h3> Changes to Supported Operating Systems </h3>
<p>With the release of <strong>NMath</strong> 6.2.0.41, on March 10, 2018, <strong>NMath</strong> no longer supports OSX or the Linux x86 operating systems.  We are dropping the support of these operating systems due to a decline of demand by our customers.  Please contact us with any concerns regarding this change.  This release is currently available on <a href="https://www.nuget.org/packages/CenterSpace.NMath.Premium/6.2.0.41">NuGet</a>.</p>
<p>Going forward, <strong>NMath</strong> and <strong>NMath Premium</strong> will naturally continue to support both 32-bit and 64-bit Windows, as well as 64-bit Linux.</p>
<h3> Adding .NET Standard and .NET Core Support  </h3>
<p><em>By the end of 2018, NMath will support both .NET Core and .NET Standard</em>.  Support for both of these .NET platforms has been increasingly requested by our customers.  If you are unfamiliar with these newest additions to the .NET world, the following briefly defines them.</p>
<ul>
<li> .NET Core: This is the latest .NET implementation. It’s open source and available for multiple OSes. With .NET Core, you can build cross-platform console apps and ASP.NET Core Web applications and cloud services.</li>
<li>.NET Standard: This is the set of fundamental APIs (commonly referred to as base class library or BCL) that all .NET implementations must implement. By targeting .NET Standard, you can build libraries that you can share across all your .NET apps, no matter on which .NET implementation or OS they run.</li>
</ul>
<p>For further reading on these .NET standards see this <a href="https://msdn.microsoft.com/en-us/magazine/mt842506.aspx">MSDN magazine</a> article for an introduction.</p>
<p>Please don&#8217;t hesitate to contact us in the comments below or via email with any questions regarding these changes to the CenterSpace .NET <strong>NMath</strong> library. </p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/nmath-adding-net-core-net-standard">NMath is Adding .NET Core Support and has Dropped Support of OSX and Linux86</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/nmath-adding-net-core-net-standard/feed</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">7300</post-id>	</item>
		<item>
		<title>Filtering with Wavelet Transforms</title>
		<link>https://www.centerspace.net/wavelet-transforms</link>
					<comments>https://www.centerspace.net/wavelet-transforms#respond</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Fri, 18 Dec 2015 17:00:25 +0000</pubDate>
				<category><![CDATA[NMath Tutorial]]></category>
		<category><![CDATA[DWT]]></category>
		<category><![CDATA[ECG filtering]]></category>
		<category><![CDATA[filtering]]></category>
		<category><![CDATA[filtering with wavelets]]></category>
		<category><![CDATA[mass spec filtering]]></category>
		<category><![CDATA[wavelet filtering]]></category>
		<category><![CDATA[wavelets]]></category>
		<guid isPermaLink="false">http://www.centerspace.net/blog/?p=5713</guid>

					<description><![CDATA[<p><img class="excerpt size-full wp-image-5819" src="https://www.centerspace.net/wp-content/uploads/2015/10/ec13.jpg" alt="ECG waveform" width="350" /><br />
Wavelet transforms have found engineering applications in computer vision, pattern recognition, signal filtering and perhaps most widely in signal and image compression.  In 2000 the ISO JPEG committee proposed a new<br />
JPEG2000 image compression standard that is based on the wavelet transform using two Daubechies wavelets.  This standard made the relatively new image decomposition algorithm ubiquitous on desktops around the world. </p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/wavelet-transforms">Filtering with Wavelet Transforms</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Discrete time wavelet transforms have found engineering applications in computer vision, pattern recognition, signal filtering and perhaps most widely in signal and image compression.  In 2000 the ISO JPEG committee proposed a new JPEG2000 image compression standard that is based on the wavelet transform using two Daubechies wavelets.  This standard made the relatively new image decomposition algorithm ubiquitous on desktops around the world.  </p>
<p>In signal processing, wavelets have been widely investigated for use in filtering bio-electric signals, among many other applications.  Bio-electric signals are good candidates for multi-resolution wavelet filtering over standard Fourier analysis due to their non-stationary character.  In this article we&#8217;ll discuss the filtering of electrocardiograms, or ECGs, and demonstrate with code examples how to filter an ECG waveform using <strong>NMath</strong>&#8217;s new wavelet classes, keeping in mind that the techniques and code shown here apply to a wide class of time series measurements.  If wavelets and their applications to filtering are unfamiliar to the reader, read a gentle and brief introduction to the subject in <em>Wavelets for Kids: A Tutorial Introduction</em> <a href="#5">[5]</a>.</p>
<h2> Filtering a Time Series with Wavelets</h2>
<p><a href="http://www.physionet.org/">PhysioNet </a>provides free access to a large collection of recorded physiologic signals, including many ECGs.  The ECG signal we will filter here, named <em>aami-ec13</em> on PhysioNet, is shown below.<br />
<figure id="attachment_5721" aria-describedby="caption-attachment-5721" style="width: 600px" class="wp-caption alignnone"><a href="https://www.centerspace.net/blog/wp-content/uploads/2015/03/ScreenClip.png"><img decoding="async" src="https://www.centerspace.net/blog/wp-content/uploads/2015/03/ScreenClip.png" alt="ECG Signal" width="600" class="size-full wp-image-5721" srcset="https://www.centerspace.net/wp-content/uploads/2015/03/ScreenClip.png 1090w, https://www.centerspace.net/wp-content/uploads/2015/03/ScreenClip-300x59.png 300w, https://www.centerspace.net/wp-content/uploads/2015/03/ScreenClip-1024x202.png 1024w" sizes="(max-width: 1090px) 100vw, 1090px" /></a><figcaption id="caption-attachment-5721" class="wp-caption-text">ECG Signal</figcaption></figure></p>
<p>Our goal will be to remove the high frequency noise while preserving the character of the waveform, including the high frequency transitions at the signal peaks.  Fourier-based filter methods are ill-suited for filtering this type of signal due both to its non-stationarity, as mentioned, and to the need to preserve the peak locations (phase) and shape.</p>
<h3> A Wavelet Filter </h3>
<p>As with Fourier analysis there are three basic steps to filtering signals using wavelets.</p>
<ol>
<li> <em>Decompose </em>the signal using the DWT.
<li> Filter the signal in the wavelet space using <em>thresholding</em>.
<li> Invert the filtered signal to <em>reconstruct </em>the original, now filtered signal, using the inverse DWT.
</ol>
<p>Briefly, the filtering of signals using wavelets is based on the idea that as the DWT decomposes the signal into <em>details </em>and <em>approximation </em> parts, at some scale the details contain mostly insignificant noise and can be removed or zeroed out using thresholding without affecting the signal.  This idea is discussed in more detail in the introductory paper [5].  To implement this DWT filtering scheme there are two basic filter design parameters: the wavelet type and a threshold.  Typically the shape and form of the signal to be filtered is qualitatively matched to the general shape of the wavelet.  In this example we will use the Daubechies fourth-order wavelet.   </p>
<table>
<tr>
<td><figure id="attachment_5827" aria-describedby="caption-attachment-5827" style="width: 220px" class="wp-caption alignnone"><img decoding="async" src="https://www.centerspace.net/blog/wp-content/uploads/2015/10/Image.png" alt="ECG Signal closeup" width="200" class="alignnone size-full wp-image-5852" srcset="https://www.centerspace.net/wp-content/uploads/2015/10/Image.png 706w, https://www.centerspace.net/wp-content/uploads/2015/10/Image-300x226.png 300w" sizes="(max-width: 706px) 100vw, 706px" /><figcaption id="caption-attachment-5827" class="wp-caption-text">ECG Waveform</figcaption></figure></td>
<td><figure id="attachment_5827" aria-describedby="caption-attachment-5827" style="width: 220px" class="wp-caption alignnone"><img decoding="async" src="https://www.centerspace.net/blog/wp-content/uploads/2015/10/DB4.png" alt="Daubechies 4 wavelet" width="200" class="size-full wp-image-5827" srcset="https://www.centerspace.net/wp-content/uploads/2015/10/DB4.png 560w, https://www.centerspace.net/wp-content/uploads/2015/10/DB4-300x214.png 300w" sizes="(max-width: 560px) 100vw, 560px" /><figcaption id="caption-attachment-5827" class="wp-caption-text">Daubechies 4 wavelet</figcaption></figure></td>
</tr>
</table>
<p>The general shape of this wavelet roughly matches, at various scales, the morphology of the ECG signal.  Currently <strong>NMath </strong>supports the following wavelet families: Haar, Daubechies, Symlet, Best Localized, and Coiflet, 27 in all.  Additionally, any custom wavelet of your invention can be created by passing in the wavelet&#8217;s low &#038; high pass decimation filter values.  The wavelet class then imposes the wavelet&#8217;s symmetry properties to compute the reconstruction filters.</p>
<pre lang="csharp">
   // Build a Coiflet wavelet.
   var wavelet = new FloatWavelet( Wavelet.Wavelets.C4 );

   // Build a custom reverse bi-orthogonal wavelet from its low- and
   // high-pass decimation filters.
   var customWavelet = new DoubleWavelet(
       new double[] { 0.0, 0.0, 0.7071068, 0.7071068, 0.0, 0.0 },
       new double[] { 0.0883883, 0.0883883, -0.7071068, 0.7071068, -0.0883883, -0.0883883 } );
</pre>
<p>The <code>FloatDWT</code> class provides four different thresholding strategies: Universal, UniversalMAD, Sure, and Hybrid (a.k.a. SureShrink).  We&#8217;ll use the Universal threshold strategy here.  This is a good starting point, but it can over-smooth the signal; typically some empirical experimentation is done to find the best threshold for the data (see [1], and see [4] for a good overview of common thresholding strategies).</p>
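<p>To make the thresholding step concrete, here is a minimal standalone sketch of the textbook Donoho-Johnstone universal threshold and the soft-threshold rule. The MAD/0.6745 noise estimate used below is a common convention and an assumption on our part; NMath&#8217;s <code>ComputeThreshold</code> may use a different internal estimator.</p>

```csharp
using System;
using System.Linq;

static class WaveletThreshold
{
    // Universal threshold: lambda = sigma * sqrt(2 ln N), with the noise
    // level sigma estimated from the finest-scale detail coefficients via
    // the median absolute deviation (MAD / 0.6745).
    public static double Universal(double[] finestDetails)
    {
        double[] abs = finestDetails.Select(Math.Abs).OrderBy(v => v).ToArray();
        double median = abs[abs.Length / 2];
        double sigma = median / 0.6745;
        return sigma * Math.Sqrt(2.0 * Math.Log(finestDetails.Length));
    }

    // Soft thresholding: coefficients smaller than lambda are zeroed,
    // larger ones are shrunk toward zero by lambda.
    public static double Soft(double x, double lambda) =>
        Math.Sign(x) * Math.Max(Math.Abs(x) - lambda, 0.0);

    static void Main()
    {
        double lambda = Universal(new double[] { 0.1, -0.2, 0.15, -0.05 });
        Console.WriteLine(Soft(0.5, lambda));   // large coefficient, shrunk
        Console.WriteLine(Soft(-0.05, lambda)); // small coefficient, zeroed
    }
}
```

<p>The Soft policy used in the code below applies exactly this kind of shrink-or-zero rule to every detail coefficient.</p>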
<h3> Wavelet Filtering Code</h3>
<p>The three steps outlined above are easily coded using two classes in the <b>NMath</b> library: the <code>FloatDWT</code> class and the <code>FloatWavelet</code> class.  As always in <b>NMath</b>, the library offers both a float precision and a double precision version of each of these classes.  Let&#8217;s look at a code snippet that implements a DWT based filter with <b>NMath</b>.</p>
<pre lang="csharp">
   // Choose wavelet, the Daubechies 4 wavelet
   var wavelet = new FloatWavelet( Wavelet.Wavelets.D4 );

   // Build DWT object using our wavelet & data
   var dwt = new FloatDWT( data, wavelet );

   // Decompose signal with DWT to level 5
   dwt.Decompose( 5 );

   // Find Universal threshold & threshold all detail levels
   double lambdaU = dwt.ComputeThreshold( FloatDWT.ThresholdMethod.Universal, 1 );
   dwt.ThresholdAllLevels( FloatDWT.ThresholdPolicy.Soft, new double[] { lambdaU, 
       lambdaU, lambdaU, lambdaU, lambdaU } );

   // Rebuild the filtered signal.
   float[] reconstructedData = dwt.Reconstruct();
</pre>
<p>The first two lines of code build the wavelet object and the DWT object using both the input data signal and the abbreviated Daubechies wavelet name <code>Wavelet.Wavelets.D4</code>.  The third line of code executes the wavelet decomposition at five consecutive scales.  Both the signal&#8217;s <em>details</em> and <em>approximations</em> are stored in the DWT object at each step in the decomposition.  Next, the <code>Universal</code> threshold is computed and the wavelet details are thresholded using the same threshold with a <code>Soft</code> policy (see [1], pg. 63).  Lastly, the now filtered signal is reconstructed.</p>
<p>Below, the chart on the left shows the unfiltered ECG signal and the chart on the right shows the wavelet filtered ECG signal.  It&#8217;s clear that this filter very effectively removed the noise while preserving the signal.</p>
<table>
<tr>
<td>
<figure id="attachment_5888" aria-describedby="caption-attachment-5888" style="width: 350px" class="wp-caption alignnone"><img decoding="async" src="https://www.centerspace.net/blog/wp-content/uploads/2015/10/Image2.png" alt="Raw ECG Signal" width="350" class="size-full wp-image-5888" srcset="https://www.centerspace.net/wp-content/uploads/2015/10/Image2.png 506w, https://www.centerspace.net/wp-content/uploads/2015/10/Image2-300x180.png 300w" sizes="(max-width: 506px) 100vw, 506px" /><figcaption id="caption-attachment-5888" class="wp-caption-text">Raw ECG Signal</figcaption></figure>
</td>
<td>
<figure id="attachment_5887" aria-describedby="caption-attachment-5887" style="width: 350px" class="wp-caption alignnone"><img decoding="async" src="https://www.centerspace.net/blog/wp-content/uploads/2015/10/Image1.png" alt="Filtered ECG Signal" width="350"  class="size-full wp-image-5887" srcset="https://www.centerspace.net/wp-content/uploads/2015/10/Image1.png 506w, https://www.centerspace.net/wp-content/uploads/2015/10/Image1-300x180.png 300w" sizes="(max-width: 506px) 100vw, 506px" /><figcaption id="caption-attachment-5887" class="wp-caption-text">Filtered ECG Signal</figcaption></figure></td>
</tr>
</table>
<p>These two charts below show a detail of the charts above, from indices 500 to 1000.  Note how well the signal shape, phase, and amplitude have been preserved in this non-stationary wavelet-filtered signal.</p>
<table>
<tr>
<td>
<figure id="attachment_5890" aria-describedby="caption-attachment-5890" style="width: 350px" class="wp-caption alignnone"><img decoding="async" src="https://www.centerspace.net/blog/wp-content/uploads/2015/10/Image4.png" alt="Detail Raw ECG Signal" width="350"  class="size-full wp-image-5890" srcset="https://www.centerspace.net/wp-content/uploads/2015/10/Image4.png 506w, https://www.centerspace.net/wp-content/uploads/2015/10/Image4-300x180.png 300w" sizes="(max-width: 506px) 100vw, 506px" /><figcaption id="caption-attachment-5890" class="wp-caption-text">Detail Raw ECG Signal</figcaption></figure>
</td>
<td>
<figure id="attachment_5889" aria-describedby="caption-attachment-5889" style="width: 350px" class="wp-caption alignnone"><img decoding="async" src="https://www.centerspace.net/blog/wp-content/uploads/2015/10/Image3.png" alt="Detail Filtered ECG Signal" width="350"  class="size-full wp-image-5889" srcset="https://www.centerspace.net/wp-content/uploads/2015/10/Image3.png 506w, https://www.centerspace.net/wp-content/uploads/2015/10/Image3-300x180.png 300w" sizes="(max-width: 506px) 100vw, 506px" /><figcaption id="caption-attachment-5889" class="wp-caption-text">Detail Filtered ECG Signal</figcaption></figure>
</td>
</tr>
</table>
<p>It is this ability of DWT-based filters to preserve phase, form, and amplitude, while running in O(n) time (the fast DWT is even cheaper than the O(n log n) of Fourier-based filters), that has made wavelets such an important part of signal processing today.  The complete code for this example along with a link to the ECG data is provided below.</p>
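<p>The linear cost of the fast DWT is easy to see in a sketch of a single Haar decomposition level. This is a pedagogical stand-in, not NMath&#8217;s implementation; <code>Decompose</code> applies the chosen wavelet&#8217;s filters in the same pyramid fashion.</p>

```csharp
using System;

static class HaarSketch
{
    // One Haar decomposition level: a single O(n) pass. Repeating on the
    // halved approximation gives n + n/2 + n/4 + ... < 2n operations total,
    // which is why the full pyramid decomposition is O(n).
    public static (double[] Approx, double[] Details) DecomposeLevel(double[] signal)
    {
        int half = signal.Length / 2;
        var approx = new double[half];
        var details = new double[half];
        double c = 1.0 / Math.Sqrt(2.0); // Haar filter coefficient
        for (int i = 0; i < half; i++)
        {
            approx[i] = c * (signal[2 * i] + signal[2 * i + 1]);  // low-pass (smooth)
            details[i] = c * (signal[2 * i] - signal[2 * i + 1]); // high-pass (detail)
        }
        return (approx, details);
    }

    static void Main()
    {
        // A locally constant signal produces zero detail coefficients,
        // which is the property thresholding exploits.
        var (a, d) = DecomposeLevel(new double[] { 1, 1, 2, 2 });
        Console.WriteLine(string.Join(", ", d)); // 0, 0
    }
}
```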
<p>Paul</p>
<h3> References </h3>
<div id="1">[1] Guomin Luo and Daming Zhang (2012). <em>Wavelet Denoising</em>, Advances in Wavelet Theory and Their Applications in Engineering, Physics and Technology, Dr. Dumitru Baleanu (Ed.), ISBN: 978-953-51-0494-0, InTech, pp. 59-80.  Available from: <a href="http://www.intechopen.com/books/advances-in-wavelet-theory-and-their-applicationsin-engineering-physics-and-technology/wavelet-denoising">http://www.intechopen.com/books/advances-in-wavelet-theory-and-their-applicationsin-engineering-physics-and-technology/wavelet-denoising</a> </div>
<div id="2">[2] Burhan Ergen (2012). <em>Signal and Image Denoising Using Wavelet Transform</em>, Advances in Wavelet Theory and Their Applications in Engineering, Physics and Technology, Dr. Dumitru Baleanu (Ed.), ISBN: 978-953-51-0494-0, InTech, DOI: 10.5772/36434.  Available from: <a href="http://www.intechopen.com/books/advances-in-wavelet-theory-and-their-applications-in-engineering-physics-and-technology/wavelet-signal-and-image-denoising">http://www.intechopen.com/books/advances-in-wavelet-theory-and-their-applications-in-engineering-physics-and-technology/wavelet-signal-and-image-denoising</a> </div>
<div id="3">[3] Rami Cohen: <em>Signal Denoising Using Wavelets</em>, Project Report, 2012.  Available from: <a href="https://pdfs.semanticscholar.org/3dfd/6b2bd3d6ad3c6eca50747e686d5ad88b4fc1.pdf">https://pdfs.semanticscholar.org/3dfd/6b2bd3d6ad3c6eca50747e686d5ad88b4fc1.pdf</a> </div>
<div id="4">[4] M. C. E. Rosas-Orea, M. Hernandez-Diaz, V. Alarcon-Aquino, and L. G. Guerrero-Ojeda, <em>A Comparative Simulation Study of Wavelet Based Denoising Algorithms</em>, Proceedings of the 15th International Conference on Electronics, Communications and Computers (CONIELECOMP 2005), 2005 © IEEE </div>
<div id="5">[5] Brani Vidakovic and Peter Mueller, <em>Wavelets for Kids: A Tutorial Introduction</em>, Duke University, 1991.  Available from: <a target="_blank" href="http://gtwavelet.bme.gatech.edu/wp/kidsA.pdf">http://gtwavelet.bme.gatech.edu/wp/kidsA.pdf</a> </div>
<h3> Test Data </h3>
<p>To copy the data file provided by <a href="http://www.physionet.org/">PhysioNet</a> for this example click: <a href="https://www.centerspace.net/blog/wp-content/uploads/2015/10/ECG_AAMIEC13.data_.txt">ECG_AAMIEC13.data</a><br />
This ECG data was taken from the ANSI EC13 test data set <a href="http://www.physionet.org/physiobank/database/aami-ec13/">waveforms</a>.</p>
<h3> Complete Test Code </h3>
<pre lang="csharp">

    public void BlogECGExample()
    {
      // Define your own dataDir
      var dataDir = "................";

      // Load ECG wave from physionet.org data file.
      string filename = Path.Combine( dataDir, "ECG_AAMIEC13.data.txt" );
      string line;
      int cnt = 0;
      FloatVector ecgMeasurement = new FloatVector( 3000 );
      var fileStrm = new System.IO.StreamReader( filename );
      fileStrm.ReadLine(); fileStrm.ReadLine();  // Skip the two header lines.
      while ( ( line = fileStrm.ReadLine() ) != null && cnt < 3000 )
      {
        ecgMeasurement[cnt] = Single.Parse( line.Split( ',' )[1] );
        cnt++;
      }

      // Choose wavelet
      var wavelet = new FloatWavelet( Wavelet.Wavelets.D4 );

      // Build DWT object
      var dwt = new FloatDWT( ecgMeasurement.DataBlock.Data, wavelet );

      // Decompose signal with DWT to level 5
      dwt.Decompose( 5 );

      // Find Universal threshold &#038; threshold all detail levels with lambdaU
      double lambdaU = dwt.ComputeThreshold( FloatDWT.ThresholdMethod.Universal, 1 );
      dwt.ThresholdAllLevels( FloatDWT.ThresholdPolicy.Soft, new double[] { lambdaU, lambdaU, lambdaU, lambdaU, lambdaU } );

      // Rebuild the signal to level 1 - the original (filtered) signal.
      float[] reconstructedData = dwt.Reconstruct();

      // Display DWT results.
      BlogECGExampleBuildCharts( dwt, ecgMeasurement, reconstructedData );

    }

    public void BlogECGExampleBuildCharts( FloatDWT dwt, FloatVector ECGMeasurement, float[] ReconstructedData )
    {

      // Plot out approximations at various levels of decomposition.
      var approxAllLevels = new FloatVector();
      for ( int n = 5; n > 0; n-- )
      {
        var approx = new FloatVector( dwt.WaveletCoefficients( DiscreteWaveletTransform.WaveletCoefficientType.Approximation, n ) );
        approxAllLevels.Append( new FloatVector( approx ) );
      }

      var detailsAllLevels = new FloatVector();
      for ( int n = 5; n > 0; n-- )
      {
        var details = new FloatVector( dwt.WaveletCoefficients( DiscreteWaveletTransform.WaveletCoefficientType.Details, n ) );
        detailsAllLevels.Append( new FloatVector( details ) );
      }

      // Create and display charts.
      Chart chart0 = NMathChart.ToChart( detailsAllLevels );
      chart0.Titles.Add( "Concatenated DWT Details to Level 5" );
      chart0.ChartAreas[0].AxisY.Title = "DWT Details";
      chart0.Height = 270;
      NMathChart.Show( chart0 );

      Chart chart1 = NMathChart.ToChart( approxAllLevels );
      chart1.Titles.Add("Concatenated DWT Approximations to Level 5");
      chart1.ChartAreas[0].AxisY.Title = "DWT Approximations";
      chart1.Height = 270;
      NMathChart.Show( chart1 );

      Chart chart2 = NMathChart.ToChart( (new FloatVector( ReconstructedData ))[new Slice(500,500)] );
      chart2.Titles[0].Text = "Thresholded & Reconstructed ECG Signal";
      chart2.ChartAreas[0].AxisY.Title = "mV";
      chart2.Height= 270;
      NMathChart.Show( chart2 );

      Chart chart3 = NMathChart.ToChart( (new FloatVector( ECGMeasurement ))[new Slice(500,500)] );
      chart3.Titles[0].Text = "Raw ECG Signal";
      chart3.ChartAreas[0].AxisY.Title = "mV";
      chart3.Height = 270;
      NMathChart.Show( chart3 );

    }
</pre>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/wavelet-transforms">Filtering with Wavelet Transforms</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/wavelet-transforms/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5713</post-id>	</item>
		<item>
		<title>Precision and Reproducibility in Computing</title>
		<link>https://www.centerspace.net/precision-and-reproducibility-in-computing</link>
					<comments>https://www.centerspace.net/precision-and-reproducibility-in-computing#respond</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Mon, 16 Nov 2015 22:32:31 +0000</pubDate>
				<category><![CDATA[MKL]]></category>
		<category><![CDATA[NMath]]></category>
		<category><![CDATA[Object-Oriented Numerics]]></category>
		<category><![CDATA[Performance]]></category>
		<category><![CDATA[floating point precision]]></category>
		<category><![CDATA[MKL repeatability]]></category>
		<category><![CDATA[MKL reproducibility]]></category>
		<category><![CDATA[NMath repeatability]]></category>
		<category><![CDATA[NMath Reproducibility]]></category>
		<category><![CDATA[repeatability]]></category>
		<category><![CDATA[repeatability in computing]]></category>
		<category><![CDATA[Reproducibility in computing]]></category>
		<guid isPermaLink="false">http://www.centerspace.net/blog/?p=5810</guid>

					<description><![CDATA[<p>Run-to-run reproducibility in computing is often assumed as an obvious truth.  However, software running on modern computer architectures, particularly when coupled with advanced performance-optimized libraries, is often guaranteed to produce reproducible results only up to a certain precision; beyond that, results can and do vary run-to-run.</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/precision-and-reproducibility-in-computing">Precision and Reproducibility in Computing</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Run-to-run reproducibility in computing is often assumed as an obvious truth.  However, software running on modern computer architectures, particularly when coupled with advanced performance-optimized libraries, is often guaranteed to produce reproducible results only up to a certain precision; beyond that, results can and do vary run-to-run.  Reproducibility is interrelated with the precision of floating-point types and the resultant rounding, operation re-ordering, memory structure and use, and finally how real numbers are represented internally in a computer&#8217;s registers.  </p>
<p>This issue of reproducibility arises with <strong>NMath</strong> users when writing and running unit tests, which is why it&#8217;s important when writing tests to compare floating point numbers only up to their designed precision, at an absolute maximum.  With the IEEE 754 floating point representation, which virtually all modern computers adhere to, the single precision <code>float </code>type uses 32 bits or 4 bytes and offers 24 bits of precision or about <em>7 decimal digits</em>, while the double precision <code>double </code>type uses 64 bits or 8 bytes and offers 53 bits of precision or about <em>15 decimal digits</em>.  Few algorithms can achieve significant results to the 15th decimal place due to rounding, loss of precision due to subtraction, and other sources of numerical precision degradation.  <strong>NMath&#8217;s</strong> numerical results are tested, at a maximum, to the 14th decimal place.</p>
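<p>A quick standalone illustration of these limits: an increment below a type&#8217;s precision is lost entirely to rounding.</p>

```csharp
using System;

class PrecisionLimits
{
    static void Main()
    {
        // 1e-8 is below the ~7 decimal digit resolution of a float near 1.0,
        // so the addition is rounded away completely.
        float f = 1.0f;
        Console.WriteLine(f + 1e-8f == f);  // True

        // A double, with ~15 decimal digits, still resolves the same increment.
        double d = 1.0;
        Console.WriteLine(d + 1e-8 == d);   // False
    }
}
```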
<h4 style="padding-left: 30px;"><em>A Precision Example</em></h4>
<p style="padding-left: 30px;">As an example, what does the following code output?</p>
<pre style="padding-left: 30px;" lang="csharp">      double x = .050000000000000003;
      double y = .050000000000000000;
      if ( x == y )
        Console.WriteLine( "x is y" );
      else
        Console.WriteLine( "x is not y" );
</pre>
<p style="padding-left: 30px;">I get &#8220;x is y&#8221;, which is mathematically not the case, but the literal assigned to x specifies more digits than a <code>double </code>type can represent.</p>
<p>Due to these limits on decimal number representation and the resulting rounding, the numerical results of some operations can be affected by the associative reordering of operations. For example, in some cases <code>a*x + a*z</code> may not equal <code>a*(x + z)</code> with floating point types.  This can be difficult to verify with modern optimizing compilers, because the code you write and the code that actually runs may be organized very differently: mathematically equivalent, but not necessarily numerically identical.</p>
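<p>The effect of reordering is easiest to demonstrate with addition, where the grouping alone changes the stored result because each intermediate sum is rounded to the nearest representable double.</p>

```csharp
using System;

class ReorderingDemo
{
    static void Main()
    {
        // Same three addends, two groupings, two different doubles.
        double left  = (0.1 + 0.2) + 0.3;  // 0.6000000000000001
        double right = 0.1 + (0.2 + 0.3);  // 0.6
        Console.WriteLine(left == right);  // False
    }
}
```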
<p>So <em>reproducibility </em>is impacted by precision via dynamic operation reorderings in the ALU, and additionally by run-time processor dispatching, data-array alignment, and variation in thread number, among other factors.  These issues can create <em>run-to-run</em> differences in the least significant digits.  Two runs, same code, two answers.  <em>This is by design and is not an issue of correctness</em>.  Subtle changes in the memory layout of the program&#8217;s data, differences in the loading of ALU registers and in operation order, and differences in threading, all influenced by unrelated processes running on the same machine, cause these run-to-run differences. </p>
<h3> Managing Reproducibility </h3>
<p>Most importantly, one should test code&#8217;s numerical results only to the precision that can be expected by the algorithm, input data, and finally the limits of floating point arithmetic.  To do this in unit tests, compare floating point numbers carefully only to a fixed number of digits.  The code snippet below compares two double numbers and returns true only if the numbers match to a specified number of digits.  </p>
<pre lang="csharp">
private static bool EqualToNumDigits( double expected, double actual, int numDigits )
    {
      double max = System.Math.Abs( expected ) > System.Math.Abs( actual ) ? System.Math.Abs( expected ) : System.Math.Abs( actual );
      double diff = System.Math.Abs( expected - actual );
      double relDiff = max > 1.0 ? diff / max : diff;
      if ( relDiff <= DOUBLE_EPSILON )  // DOUBLE_EPSILON: a tolerance near double machine epsilon
      {
        return true;
      }

      int numDigitsAgree = (int) ( -System.Math.Floor( Math.Log10( relDiff ) ) - 1 );
      return numDigitsAgree >= numDigits;
    }
</pre>
<p>This type of comparison should be used throughout unit testing code.  The full code listing, which we use for our internal testing, is provided at the end of this article.</p>
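<p>As a concrete usage sketch, the helper agrees with intuition on a truncated value of 1/3. The <code>DOUBLE_EPSILON</code> value below is an assumed tolerance near double machine epsilon, since the snippet above leaves its definition to the surrounding test fixture.</p>

```csharp
using System;

static class DigitCompareDemo
{
    // Assumed value: machine epsilon for double.
    const double DOUBLE_EPSILON = 2.2204460492503131e-16;

    public static bool EqualToNumDigits(double expected, double actual, int numDigits)
    {
        double max = Math.Abs(expected) > Math.Abs(actual) ? Math.Abs(expected) : Math.Abs(actual);
        double diff = Math.Abs(expected - actual);
        double relDiff = max > 1.0 ? diff / max : diff;
        if (relDiff <= DOUBLE_EPSILON) return true;
        int numDigitsAgree = (int)(-Math.Floor(Math.Log10(relDiff)) - 1);
        return numDigitsAgree >= numDigits;
    }

    static void Main()
    {
        // |1/3 - 0.33333| is about 3.3e-6, so the values agree to 5 digits.
        Console.WriteLine(EqualToNumDigits(1.0 / 3.0, 0.33333, 4)); // True
        Console.WriteLine(EqualToNumDigits(1.0 / 3.0, 0.33333, 6)); // False
    }
}
```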
<p>If it is essential to enforce binary run-to-run reproducibility to the limits of precision, <strong>NMath </strong>provides a flag in its configuration class to ensure this is the case.  However this flag should be set for unit testing only because there can be a significant cost to performance.  In general, expect a 10% to 20% reduction in performance with some common operations degrading far more than that.  For example, some matrix multiplications will take twice the time with this flag set.</p>
<p>Note that the number of threads used by Intel&#8217;s MKL library (which <strong>NMath</strong> depends on) must also be fixed before setting the reproducibility flag.</p>
<pre lang="csharp">
int numThreads = 2;  // This must be fixed for reproducibility.
NMathConfiguration.SetMKLNumThreads( numThreads );
NMathConfiguration.Reproducibility = true;
</pre>
<p>This reproducibility configuration for <strong>NMath </strong>cannot be unset at a later point in the program.  Note that both the number of threads and the reproducibility flag may also be set in the AppConfig or in environment variables.  See the <a href="https://www.centerspace.net/doc/NMath/user/overview-83549.htm#Xoverview-83549">NMath User Guide</a> for instructions on how to do this. </p>
<p>Paul</p>
<p><strong>References</strong></p>
<p>M. A. Cornea-Hasegan, B. Norin.  <em>IA-64 Floating-Point Operations and the IEEE Standard for Binary Floating-Point Arithmetic</em>. Intel Technology Journal, Q4, 1999.<br />
<a href="http://gec.di.uminho.pt/discip/minf/ac0203/icca03/ia64fpbf1.pdf">http://gec.di.uminho.pt/discip/minf/ac0203/icca03/ia64fpbf1.pdf</a></p>
<p>D. Goldberg, <em>What Every Computer Scientist Should Know About Floating-Point Arithmetic</em>. Computing Surveys. March 1991.<br />
<a href="http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html">http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html</a></p>
<h3> Full <code>double</code> Comparison Code </h3>
<pre lang="csharp">
// DOUBLE_EPSILON: machine epsilon for double (assumed value; match your test fixture).
private const double DOUBLE_EPSILON = 2.2204460492503131e-16;

private static bool EqualToNumDigits( double expected, double actual, int numDigits )
    {
      bool xNaN = double.IsNaN( expected );
      bool yNaN = double.IsNaN( actual );
      if ( xNaN && yNaN )
      {
        return true;
      }
      if ( xNaN || yNaN )
      {
        return false;
      }
      if ( numDigits <= 0 )
      {
        throw new InvalidArgumentException( "numDigits is not positive in TestCase::EqualToNumDigits." );
      }

      double max = System.Math.Abs( expected ) > System.Math.Abs( actual ) ? System.Math.Abs( expected ) : System.Math.Abs( actual );
      double diff = System.Math.Abs( expected - actual );
      double relDiff = max > 1.0 ? diff / max : diff;
      if ( relDiff <= DOUBLE_EPSILON )
      {
        return true;
      }

      int numDigitsAgree = (int) ( -System.Math.Floor( Math.Log10( relDiff ) ) - 1 );
      //// Console.WriteLine( "x = {0}, y = {1}, rel diff = {2}, diff = {3}, num digits = {4}", x, y, relDiff, diff, numDigitsAgree );
      return numDigitsAgree >= numDigits;
    }
</pre>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/precision-and-reproducibility-in-computing">Precision and Reproducibility in Computing</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/precision-and-reproducibility-in-computing/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5810</post-id>	</item>
		<item>
		<title>Special Functions</title>
		<link>https://www.centerspace.net/special-functions</link>
					<comments>https://www.centerspace.net/special-functions#respond</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Mon, 11 May 2015 14:37:00 +0000</pubDate>
				<category><![CDATA[NMath]]></category>
		<category><![CDATA[.net special functions]]></category>
		<category><![CDATA[bessel functions]]></category>
		<category><![CDATA[elliptic integrals]]></category>
		<category><![CDATA[gamma functions]]></category>
		<category><![CDATA[special functions]]></category>
		<guid isPermaLink="false">http://www.centerspace.net/blog/?p=5603</guid>

					<description><![CDATA[<p><img class="excerpt" title="PhiFunction" src="https://www.centerspace.net/blog/wp-content/uploads/2014/11/ScreenClip.png" alt="Phi Function Example" /> Motivated by the need for certain special functions while writing signal processing code for NMath, we decided to add a suite of special functions to be included in NMath 6.1. While the field of special functions is vast, our 41 functions attempt to cover many of the most commonly needed functions in physics and engineering. This includes the gamma function and related functions, Bessel functions, elliptic integrals, and more.</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/special-functions">Special Functions</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><img decoding="async" class="size-full wp-image-5682" style="float: right;" title="Phi function, from Abramowitz &amp; Stegun (1965) page 258." src="https://www.centerspace.net/blog/wp-content/uploads/2014/11/ScreenClip.png" alt="Phi function, from Abramowitz &amp; Stegun (1965) page 258." width="220" srcset="https://www.centerspace.net/wp-content/uploads/2014/11/ScreenClip.png 350w, https://www.centerspace.net/wp-content/uploads/2014/11/ScreenClip-220x300.png 220w" sizes="(max-width: 350px) 100vw, 350px" /> Motivated by the need for certain special functions while writing signal processing code for NMath, we decided to add a suite of special functions to be included in NMath 6.1. While the field of special functions is vast [1], our 41 functions cover many of the most commonly needed functions in physics and engineering. This includes the gamma function and related functions, Bessel functions, elliptic integrals, and more. All special functions in NMath are now organized in a <code>SpecialFunctions</code> class, which is structured similarly to the existing <code>StatsFunctions</code> and <code>NMathFunctions</code> classes.</p>
<h2>Special functions list</h2>
<p>Below is a complete list of the special functions now available in the <code>SpecialFunctions</code> class, which resides in the <code>CenterSpace.NMath.Core</code> namespace. Previously a handful of these functions were available in either the <code>NMathFunctions</code> or <code>StatsFunctions</code> classes, but those functions have now been deprecated and consolidated into the <code>SpecialFunctions</code> class. Please update your code accordingly, as these deprecated functions will be removed from NMath within two to three release cycles.</p>
<p>Using these special functions in your code is simple.</p>
<pre lang="csharp">using CenterSpace.NMath.Core;

// Compute the Jacobi elliptic function Sn() with a complex argument.
var cmplx = new DoubleComplex( 0.1, 3.3 );
var sn = SpecialFunctions.Sn( cmplx, .3 );  // sn = 0.16134 - i 0.99834

// Compute the complete elliptic integral of the first kind, K(m).
var ei = SpecialFunctions.EllipticK( 0.432 ); // ei = 1.80039
</pre>
<p>Below is a complete list of all NMath special functions.</p>
<table>
<tbody>
<tr bgcolor="#d0d0dd">
<th>Special Function</th>
<th>Comments</th>
</tr>
<tr>
<td><code>EulerGamma</code></td>
<td>A constant, also known as the Euler-Mascheroni constant. Famously, rationality unknown.</td>
</tr>
<tr>
<td><code>Airy</code></td>
<td>Provides solutions Ai, Bi, and derivatives Ai&#8217;, Bi&#8217; to y&#8221; &#8211; yz = 0.</td>
</tr>
<tr>
<td><code>Zeta</code></td>
<td>The Riemann zeta function.</td>
</tr>
<tr>
<td><code>PolyLogarithm</code></td>
<td>The polylogarithm, Li_n(x), reduces to the Riemann zeta function at x = 1.</td>
</tr>
<tr>
<td><code>HarmonicNumber</code></td>
<td>The harmonic number is a truncated sum of the harmonic series, closely related to the digamma function.</td>
</tr>
<tr>
<td><code>Factorial</code></td>
<td>n!</td>
</tr>
<tr>
<td><code>FactorialLn</code></td>
<td>The natural log of the factorial, ln( n! ).</td>
</tr>
<tr>
<td><code>Binomial</code></td>
<td>The binomial coefficient, n choose k; The number of ways of picking k unordered outcomes from n possibilities.</td>
</tr>
<tr>
<td><code>BinomialLn</code></td>
<td>The natural log of the binomial coefficient.</td>
</tr>
<tr>
<td><code>Gamma</code></td>
<td>The gamma function, conceptually a generalization of the factorial.</td>
</tr>
<tr>
<td><code>GammaReciprocal</code></td>
<td>The reciprocal of the gamma function.</td>
</tr>
<tr>
<td><code>IncompleteGammaFunction</code></td>
<td>Computes the gamma integral from 0 to x.</td>
</tr>
<tr>
<td><code>IncompleteGammaComplement</code></td>
<td>Computes the gamma integral from x to infinity (and beyond!).</td>
</tr>
<tr>
<td><code>Digamma</code></td>
<td>Also known as the psi function.</td>
</tr>
<tr>
<td><code>GammaLn</code></td>
<td>The natural log of the gamma function.</td>
</tr>
<tr>
<td><code>Beta</code></td>
<td>The beta integral is also known as the Eulerian integral of the first kind.</td>
</tr>
<tr>
<td><code>IncompleteBeta</code></td>
<td>Computes the beta integral from 0 to x in [0,1].</td>
</tr>
<tr>
<td><code>Ei</code></td>
<td>The exponential integral.</td>
</tr>
<tr>
<td colspan="2" bgcolor="#F0F0FF"><center>Elliptic Integrals</center></td>
</tr>
<tr>
<td><code>EllipticK</code></td>
<td>The complete elliptic integral, K(m), of the first kind. Note that m is related to the elliptic modulus k by m = k * k.</td>
</tr>
<tr>
<td><code>EllipticE( m )</code></td>
<td>The complete elliptic integral, E(m), of the second kind.</td>
</tr>
<tr>
<td><code>EllipticF</code></td>
<td>The incomplete elliptic integral of the first kind.</td>
</tr>
<tr>
<td><code>EllipticE(phi, m)</code></td>
<td>The incomplete elliptic integral of the second kind.</td>
</tr>
<tr>
<td><code>EllipJ</code></td>
<td>Computes the Jacobi elliptic functions Cn(), Sn(), and Dn() for real arguments.</td>
</tr>
<tr>
<td><code>Sn</code></td>
<td>Computes the Jacobi elliptic function Sn() for complex arguments.</td>
</tr>
<tr>
<td><code>Cn</code></td>
<td>Computes the Jacobi elliptic function Cn() for complex arguments.</td>
</tr>
<tr>
<td colspan="2" bgcolor="#F0F0FF"><center>Bessel Functions</center></td>
</tr>
<tr>
<td><code>BesselI0</code></td>
<td>Modified Bessel function of the first kind, order zero.</td>
</tr>
<tr>
<td><code>BesselI1</code></td>
<td>Modified Bessel function of the first kind, first order.</td>
</tr>
<tr>
<td><code>BesselIv</code></td>
<td>Modified Bessel function of the first kind, non-integer order.</td>
</tr>
<tr>
<td><code>BesselJ0</code></td>
<td>Bessel function of the first kind, order zero.</td>
</tr>
<tr>
<td><code>BesselJ1</code></td>
<td>Bessel function of the first kind, first order.</td>
</tr>
<tr>
<td><code>BesselJn</code></td>
<td>Bessel function of the first kind, arbitrary integer order.</td>
</tr>
<tr>
<td><code>BesselJv</code></td>
<td>Bessel function of the first kind, non-integer order.</td>
</tr>
<tr>
<td><code>BesselK0</code></td>
<td>Modified Bessel function of the second kind, order zero.</td>
</tr>
<tr>
<td><code>BesselK1</code></td>
<td>Modified Bessel function of the second kind, order one.</td>
</tr>
<tr>
<td><code>BesselKn</code></td>
<td>Modified Bessel function of the second kind, arbitrary integer order.</td>
</tr>
<tr>
<td><code>BesselY0</code></td>
<td>Bessel function of the second kind, order zero.</td>
</tr>
<tr>
<td><code>BesselY1</code></td>
<td>Bessel function of the second kind, order one.</td>
</tr>
<tr>
<td><code>BesselYn</code></td>
<td>Bessel function of the second kind, arbitrary integer order.</td>
</tr>
<tr>
<td><code>BesselYv</code></td>
<td>Bessel function of the second kind, non-integer order.</td>
</tr>
<tr>
<td colspan="2" bgcolor="#F0F0FF"><center>Hypergeometric Functions</center></td>
</tr>
<tr>
<td><code>Hypergeometric1F1</code></td>
<td>The confluent hypergeometric series of the first kind.</td>
</tr>
<tr>
<td><code>Hypergeometric2F1</code></td>
<td>The Gauss or generalized hypergeometric function.</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
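<p>As a quick orientation to the table above, each entry is a static method on the <code>SpecialFunctions</code> class, following the same calling pattern as the <code>EllipticK</code> example earlier. The sketch below assumes these overloads mirror that signature; the commented values are standard mathematical results, rounded.</p>
<pre lang="csharp">
// All NMath special functions are static methods on SpecialFunctions.
double g  = SpecialFunctions.Gamma( 0.5 );      // sqrt(pi) = 1.77245...
double f  = SpecialFunctions.Factorial( 5 );    // 5! = 120
double b  = SpecialFunctions.Binomial( 5, 2 );  // 5 choose 2 = 10
double j0 = SpecialFunctions.BesselJ0( 1.0 );   // J0(1) = 0.76520...
</pre>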
<p>Let us know if you need any additional special functions and we&#8217;ll see if we can add them.</p>
<p>Mathematically,</p>
<p>Paul Shirkey</p>
<h4>References</h4>
<p>[1] Abramowitz, M. and Stegun, I. (1965). Handbook of Mathematical Functions. Dover Publications. ( <a href="http://people.math.sfu.ca/~cbm/aands/abramowitz_and_stegun.pdf">Abramowitz and Stegun PDF</a> )<br />
[2] Wolfram Alpha LLC. (2014). <a href="http://www.wolframalpha.com">www.wolframalpha.com</a><br />
[3] Weisstein, Eric W. &#8220;[Various Articles]&#8221; From MathWorld&#8211;A Wolfram Web Resource. <a href="http://mathworld.wolfram.com/">http://mathworld.wolfram.com/</a><br />
[4] Moshier L. Stephen. (1995) The Cephes Math Library. (<a href="http://www.netlib.org/cephes/">Cephes</a>).</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/special-functions">Special Functions</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/special-functions/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5603</post-id>	</item>
		<item>
		<title>NMath Premium&#8217;s new Adaptive GPU Bridge Architecture</title>
		<link>https://www.centerspace.net/gpu-math-csharp</link>
					<comments>https://www.centerspace.net/gpu-math-csharp#respond</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Mon, 13 Oct 2014 16:35:01 +0000</pubDate>
				<category><![CDATA[NMath Premium]]></category>
		<category><![CDATA[.NET GPU]]></category>
		<category><![CDATA[C# GPU]]></category>
		<category><![CDATA[c# LAPACK GPU]]></category>
		<category><![CDATA[C# Nvidia GPU]]></category>
		<category><![CDATA[math gpu]]></category>
		<category><![CDATA[math gpu csharp]]></category>
		<category><![CDATA[NMath GPU]]></category>
		<category><![CDATA[Offloading to GPU]]></category>
		<guid isPermaLink="false">http://www.centerspace.net/blog/?p=5295</guid>

					<description><![CDATA[<p>The most recent release of NMath Premium 6.0 is a major upgrade to the GPU API and it enables users to easily use multiple installed NVIDIA GPU's.  As always, using NMath Premium to leverage GPU's never requires any kernel-level GPU programming or other specialized GPU programming skills.  In the following article, after introducing the new GPU bridge architecture, we'll discuss each of the new API features separately with code examples.</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/gpu-math-csharp">NMath Premium&#8217;s new Adaptive GPU Bridge Architecture</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The most recent release of NMath Premium 6.0 is a major update which includes an upgraded optimization suite, now backed by the Microsoft Solver Foundation, a significantly more powerful GPU-bridge architecture, and a new class for cubic smoothing splines. This blog post will focus on the new API for doing computation on GPU&#8217;s with NMath Premium. </p>
<p>The adaptive GPU bridge API in NMath Premium 6.0 includes the following important new features.</p>
<section>
<ul>
<li>Support for multiple GPU&#8217;s</li>
<li>Automatic tuning of the CPU&#8211;GPU adaptive bridge to ensure optimal hardware usage.</li>
<li>Per-thread control for binding threads to GPU&#8217;s.</li>
</ul>
</section>
<p>As with the first release of NMath Premium, using NMath to leverage massively-parallel GPU&#8217;s never requires any kernel-level GPU programming or other specialized GPU programming skills. Yet the programmer can easily take as much control as needed to route executing threads or tasks to any available GPU device. In the following, after introducing the new GPU bridge architecture, we&#8217;ll discuss each of these features separately with code examples.</p>
<p>Before getting started on our NMath Premium tutorial it&#8217;s important to consider your test GPU model.  While many of NVIDIA&#8217;s GPU&#8217;s provide a good to excellent computational advantage over the CPU, not all of NVIDIA&#8217;s GPU&#8217;s were designed with general computing in mind. The &#8220;NVS&#8221; class of NVIDIA GPU&#8217;s (such as the NVS 5400M) generally performs very poorly, as do the &#8220;GT&#8221; cards in the GeForce series. However the &#8220;GTX&#8221; cards in the <a href="http://www.geforce.com/hardware" target="_blank">GeForce series</a> generally perform well, as do the Quadro desktop products and the Tesla cards. While it&#8217;s fine to test NMath Premium on any NVIDIA GPU, testing on inexpensive consumer-grade video cards will rarely show any performance advantage.</p>
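<p>As a quick sanity check before benchmarking, you can print the name of the GPU that NMath Premium will use by default. This is a short sketch using the <code>BridgeManager</code> and the <code>DeviceName</code> property that appear later in this post:</p>
<pre lang="csharp">
// Device 0 is the default GPU; DeviceName reports the card model.
IComputeDevice device0 = BridgeManager.Instance.GetComputeDevice( 0 );
if ( device0 != null )
{
  Console.WriteLine( "Default GPU: " + device0.DeviceName );
}
</pre>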
<h3>NMath&#8217;s GPU API Basics</h3>
<p>With NMath there are three fundamental software entities involved with routing computations between the CPU and GPU&#8217;s: GPU hardware devices represented by <code>IComputeDevice</code> instances, the <code>Bridge</code> classes which control when a particular operation is sent to the CPU or a GPU, and finally the <code>BridgeManager</code> which provides the primary means for managing the devices and bridges.</p>
<p>These three entities are governed by two important ideas.</p>
<ol>
<li><code>Bridges</code> are assigned to compute devices, and there is a strict one-to-one relationship between each <code>Bridge</code> and <code>IComputeDevice</code>. Once assigned, the bridge instance governs when computations will be sent to its paired GPU device or the CPU.</li>
<li>Executing threads are assigned to devices; this is a many-to-one relationship. Any number of threads can be routed to a particular compute device.</li>
</ol>
<p>Assigning a <code>Bridge</code> class to a device is one line of code with the <code>BridgeManager</code>.</p>
<pre lang="csharp">BridgeManager.Instance.SetBridge( BridgeManager.Instance.GetComputeDevice( 0 ), bridge );</pre>
<p>Assigning a thread, in this case the <code>CurrentThread</code>, to a device is again accomplished using the <code>BridgeManager</code>.</p>
<pre lang="csharp">IComputeDevice cd = BridgeManager.Instance.GetComputeDevice( 0 );
BridgeManager.Instance.SetComputeDevice( cd, Thread.CurrentThread );</pre>
<p>After installing NMath Premium, the default behavior will create a default bridge and assign it to the GPU with a device number of 0 (generally the fastest GPU installed). Also by default, all unassigned threads will execute on device 0. This means that out of the box, with no additional programming, existing NMath code, once recompiled against the new NMath Premium assemblies, will route all appropriate computations to the device 0 GPU. All of the following discussions and code examples are ways to refine this default behavior to get the best performance from your GPU hardware.</p>
<h3>Math on Multiple GPU&#8217;s Supported</h3>
<p>Previously, only the NVIDIA GPU with device number 0 was supported by NMath Premium; this release removes that barrier. With version 6, work can be assigned to any installed NVIDIA device as long as the device drivers are up-to-date.</p>
<p>The work done by an executing thread is routed to a particular device using <code>BridgeManager.Instance.SetComputeDevice()</code>, as we saw in the example above. Any properly configured hardware device can be used here, including any NVIDIA device and the CPU. The CPU is simply viewed as another compute device and is always assigned a device number of -1.</p>
<pre lang="csharp" line="1">var bmanager = BridgeManager.Instance;

var cd = bmanager.GetComputeDevice( -1 );
BridgeManager.Instance.SetComputeDevice( cd, Thread.CurrentThread );
....
cd = bmanager.GetComputeDevice( 2 );
BridgeManager.Instance.SetComputeDevice( cd, Thread.CurrentThread );</pre>
<p>Lines 3 &#038; 4 first assign the current thread to the CPU device (no code on this thread will run on any GPU) and then in lines 6 &#038; 7 the current thread is switched to GPU device 2.  If an invalid compute device is requested, a null <code>IComputeDevice</code> is returned.  To find all available compute devices, the <code>BridgeManager</code> offers a <code>Devices</code> array which contains all detected compute devices, <em>including the CPU</em>. The number of detected GPU&#8217;s can be found using the property <code>BridgeManager.Instance.CountGPU</code>.</p>
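<p>Putting those members together, a short sketch (assuming <code>Devices</code> and <code>CountGPU</code> behave as just described) enumerates every detected compute device:</p>
<pre lang="csharp">
// List every detected compute device, including the CPU (device number -1).
Console.WriteLine( "GPUs detected: " + BridgeManager.Instance.CountGPU );
foreach ( IComputeDevice device in BridgeManager.Instance.Devices )
{
  Console.WriteLine( device.DeviceName );
}
</pre>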
<p>As an aside, keep in mind that PCI slot numbers do not necessarily correspond to GPU device numbers. NVIDIA assigns the device number 0 to the fastest detected GPU and so installing an additional GPU into a machine may renumber the device numbers for the previously installed GPU&#8217;s.</p>
<h3>Tuning the Adaptive Bridge</h3>
<p>Assigning a <code>Bridge</code> to a GPU device doesn&#8217;t necessarily mean that all computation routed to that device will run on that device. Instead, the assigned <code>Bridge</code> acts as an intermediary between the CPU and the GPU, moving the larger problems to the GPU where there&#8217;s a speed advantage and retaining the smaller problems on the CPU. NMath has a built-in default bridge, but it may generate non-optimal run-times depending on your hardware or your customers&#8217; hardware configuration. To improve hardware usage and performance, a bridge can be tuned once and then persisted to disk for all future use.</p>
<pre lang="csharp">// Get a compute device and a new bridge.
IComputeDevice cd = BridgeManager.Instance.GetComputeDevice( 0 );
Bridge bridge = BridgeManager.Instance.NewDefaultBridge( cd );

// Tune this bridge for the matrix multiply operation alone. 
bridge.Tune( BridgeFunctions.dgemm, cd, 1200 );

// Or just tune the entire bridge.  Depending on the hardware and tuning parameters
// this can be an expensive one-time operation. 
bridge.TuneAll( cd, 1200 );

// Now assign this updated bridge to the device.
BridgeManager.Instance.SetBridge( cd, bridge );

// Persisting the bridge that was tuned above is done with the BridgeManager.  
// Note that this overwrites any existing bridge with the same name.
BridgeManager.Instance.SaveBridge( bridge, @".\MyTunedBridge" );

// Then loading that bridge from disk is simple.
var myTunedBridge = BridgeManager.Instance.LoadBridge( @".\MyTunedBridge" );</pre>
<p>Once a bridge is tuned it can be persisted, redistributed, and used again. If three different GPU&#8217;s are installed, this tuning should be done once for each GPU and then each bridge should be assigned to the device it was tuned on. However, if there are three identical GPU&#8217;s the tuning need be done only once, then persisted to disk, and later assigned to all of the identical GPU&#8217;s. A bridge assigned to a GPU device for which it wasn&#8217;t tuned will never produce incorrect results, only potential underperformance of the hardware.</p>
<h3>Thread Control</h3>
<p>Once a bridge is paired to a device, threads may be assigned to that device for execution. This is not a necessary step as all unassigned threads will run on the default device (typically device 0). However, suppose we have three tasks and three GPU&#8217;s, and we wish to use a GPU per task.  The following code does that.</p>
<pre lang="csharp">...
IComputeDevice gpu0 = BridgeManager.Instance.GetComputeDevice( 0 );
IComputeDevice gpu1 = BridgeManager.Instance.GetComputeDevice( 1 );
IComputeDevice gpu2 = BridgeManager.Instance.GetComputeDevice( 2 );

if( gpu0 != null && gpu1 != null && gpu2 != null)
{
   System.Threading.Tasks.Task[] tasks = new Task[3]
   {
      Task.Factory.StartNew(() => Task1Worker(gpu0)),
      Task.Factory.StartNew(() => Task2Worker(gpu1)),
      Task.Factory.StartNew(() => Task3Worker(gpu2)),
   };

   //Block until all tasks complete.
   Task.WaitAll(tasks);
}
...</pre>
<p>This code is standard C# code using the <a href="https://msdn.microsoft.com/en-us/library/dd460717(v=vs.110).aspx" target="_blank">Task Parallel Library</a> and contains no NMath Premium specific API calls outside of passing a GPU compute device to each task. The task worker routines have the following simple structure.</p>
<pre lang="csharp">private static void Task1Worker( IComputeDevice cd  )
  {
      BridgeManager.Instance.SetComputeDevice( cd );

      // Do Work here.
  }</pre>
<p>The other two task workers are identical outside of whatever useful computing work they may be doing.</p>
<p>Good luck and please post any questions in the comments below, or just email us at support AT centerspace.net and we&#8217;ll get back to you.</p>
<p>Happy Computing,</p>
<p>Paul</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/gpu-math-csharp">NMath Premium&#8217;s new Adaptive GPU Bridge Architecture</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/gpu-math-csharp/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5295</post-id>	</item>
		<item>
		<title>Distributing Parallel Tasks on Multiple GPU&#8217;s</title>
		<link>https://www.centerspace.net/tasks-on-gpu</link>
					<comments>https://www.centerspace.net/tasks-on-gpu#respond</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Wed, 17 Sep 2014 20:50:51 +0000</pubDate>
				<category><![CDATA[NMath Premium]]></category>
		<category><![CDATA[.NET GPU]]></category>
		<category><![CDATA[C# GPU]]></category>
		<category><![CDATA[c# LAPACK GPU]]></category>
		<category><![CDATA[C# Nvidia GPU]]></category>
		<category><![CDATA[math gpu]]></category>
		<category><![CDATA[math gpu csharp]]></category>
		<category><![CDATA[NMath GPU]]></category>
		<category><![CDATA[Offloading to GPU's]]></category>
		<guid isPermaLink="false">http://www.centerspace.net/blog/?p=5397</guid>

					<description><![CDATA[<p><img class="excerpt" alt="NMath Premium" src="/themes/centerspace/images/nmath-premium.png" /> Once Microsoft published the <code>Threading.Task</code> library with .NET 4, many programmers who never or only occasionally wrote multi-threaded code were now doing so regularly with the <code>Threading.Task</code> API.  The Task library reduced the complexity of writing threaded code and provided several new related classes to make the process easier while eliminating some pitfalls.  In this post I'm going to show how to use the Task library with NMath Premium 6.0 to run tasks in parallel on multiple GPU's and the CPU.</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/tasks-on-gpu">Distributing Parallel Tasks on Multiple GPU&#8217;s</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>In this post I&#8217;m going to demonstrate how to use the Task Parallel Library with NMath Premium to run tasks in parallel on multiple GPU&#8217;s and the CPU.  Back in 2010 when Microsoft released .NET 4.0 and the <code>System.Threading.Task</code> namespace, many .NET programmers never, or only under duress, wrote multi-threaded code.  It&#8217;s old news now that <a href="https://msdn.microsoft.com/en-us/library/dd460693(v=vs.100).aspx" target="_blank">TPL</a> has reduced the complexity of writing threaded code by providing several new classes to make the process easier while eliminating some pitfalls.  Leveraging the TPL API together with NMath Premium is a powerful combination for quickly getting code running on your GPU hardware without the burden of learning complex CUDA programming techniques.</p>
<h2> NMath Premium GPU Smart Bridge</h2>
<p>The NMath Premium 6.0 library is now integrated with a new CPU-GPU hybrid-computing Adaptive Bridge&trade; Technology.  This technology allows users to easily assign specific threads to a particular compute device and manage computational routing between the CPU and multiple on-board GPU&#8217;s.  Each piece of installed computing hardware is uniformly treated as a compute device and managed in software as an immutable <code>IComputeDevice</code>.  Currently the adaptive bridge allows a single CPU compute device (naturally!) along with any number of NVIDIA GPU devices.  How NMath Premium interacts with each compute device is governed by a <code>Bridge</code> class.  A one-to-one relationship between each <code>Bridge</code> instance and each compute device is enforced.  All of the compute devices and bridges are managed by the singleton <code>BridgeManager</code> class.</p>
<p><figure id="attachment_5473" aria-describedby="caption-attachment-5473" style="width: 600px" class="wp-caption alignnone"><img decoding="async" src="https://www.centerspace.net/blog/wp-content/uploads/2014/04/Adaptive-Bridge.png" alt="Adaptive Bridge" width="600" class="size-full wp-image-5473" srcset="https://www.centerspace.net/wp-content/uploads/2014/04/Adaptive-Bridge.png 700w, https://www.centerspace.net/wp-content/uploads/2014/04/Adaptive-Bridge-300x186.png 300w" sizes="(max-width: 700px) 100vw, 700px" /><figcaption id="caption-attachment-5473" class="wp-caption-text">Adaptive Bridge</figcaption></figure></p>
<p>These three classes: the <code>BridgeManager</code>, the <code>Bridge</code>, and the immutable <code>IComputeDevice</code> form the entire API of the Adaptive Bridge&trade;.  With this API, nearly all programming tasks, such as assigning a particular <code>Action<></code> to a specific GPU, are accomplished in one or two lines of code.  Let&#8217;s look at some code that does just that:  Run an <code>Action<></code> on a GPU.</p>
<pre lang="csharp">
using CenterSpace.NMath.Matrix;

public void mainProgram( string[] args )
    {
      // Set up a Action<> that runs on a IComputeDevice.
      Action<IComputeDevice, int> worker = WorkerAction;
      
      // Get the compute devices we wish to run our 
      // Action<> on - in this case GPU 0.
      IComputeDevice deviceGPU0 = BridgeManager.Instance.GetComputeDevice( 0 );

      // Do work
      worker(deviceGPU0, 9);
    }

    private void WorkerAction( IComputeDevice device, int input )
    {
      // Place this thread to the given compute device.
      BridgeManager.Instance.SetComputeDevice( device );

      // Do all the hard work here on the assigned device.
      // Call various GPU-aware NMath Premium routines here.
      FloatMatrix A = new FloatMatrix( 1230, 900, new RandGenUniform( -1, 1, 37 ) );
      FloatSVDecompServer server = new FloatSVDecompServer();
      FloatSVDDecomp svd = server.GetDecomp( A );
    }
</pre>
<p>It&#8217;s important to understand that only operations where the GPU has a computational advantage are actually run on the GPU.  So it&#8217;s not as though all of the code in the <code>WorkerAction</code> runs on the GPU, but only code that makes sense such as: SVD, QR decomp, matrix multiply, Eigenvalue decomposition and so forth.  But using this as a code template, you can easily run your own worker several times passing in different compute devices each time to compare the computational advantages or disadvantages of using various devices &#8211; including the CPU compute device.</p>
<p>In the above code example the <code>BridgeManager</code> is used twice: once to get an <code>IComputeDevice</code> reference and once to assign a thread (the <code>Action<>'s</code> thread in this case) to the device.  The <code>Bridge</code> class didn&#8217;t come into play since we implicitly relied on a default bridge being assigned to our compute device of choice.  Relying on the default bridge will likely result in inferior performance, so it&#8217;s best to use a bridge that has been specifically tuned to your NVIDIA GPU.  The following code shows how to accomplish bridge tuning.</p>
<pre lang="csharp">
  // Here we get the bridge associated with GPU device 0.
  var cd = BridgeManager.Instance.GetComputeDevice( 0 );
  var bridge = (Bridge) BridgeManager.Instance.GetBridge( cd );

  // Tune the bridge and save it.  Tuning can take a few minutes.
  bridge.TuneAll( cd, 1200 );
  bridge.SaveBridge("Device0Bridge.bdg");
</pre>
<p>This bridge tuning is typically a one-time operation per computer, and once done, the tuned bridge can be serialized to disk and then reloaded at application start-up.  If new GPU hardware is installed, this tuning operation should be repeated.  The following code snippet loads a saved bridge and pairs it with a device.</p>
<pre lang="csharp">
  // Load our serialized bridge.
  Bridge bridge = BridgeManager.Instance.LoadBridge( "Device0Bridge.bdg" );
  
  // Now pair this saved bridge with compute device 0.   
  var device0 = BridgeManager.Instance.GetComputeDevice( 0 );
  BridgeManager.Instance.SetBridge( device0, bridge );
</pre>
<p>Once the tuned bridge is assigned to a device, the behavior of all threads assigned to that device will be governed by that bridge.  In a typical application the pairing of bridges to devices is done at start-up and not altered again, while the assignment of threads to devices may be done frequently at runtime.</p>
<p>It&#8217;s interesting to note that beyond optimally routing small and large problems to the CPU and GPU respectively, bridges can be configured to shunt all work to the GPU regardless of problem size.  This is useful for testing and for offloading work to a GPU when the CPU is taxed.  Even if a particular problem runs slower on the GPU than the CPU, if the CPU is fully occupied, offloading work to an otherwise idle GPU will enhance performance.</p>
<h2> C# Code Example of Running Tasks on Two GPU&#8217;s </h2>
<p>I&#8217;m going to wrap up this blog post with a complete C# code example which runs a matrix multiplication task simultaneously on two GPU&#8217;s and the CPU.  The framework of this example uses the TPL and aspects of the adaptive bridge already covered here.  I ran this code on a machine with two NVIDIA GeForce GPU&#8217;s, a GTX760 and a GT640, and the timing results from this run for executing a large matrix multiplication are shown below.</p>
<pre class="code">
Finished matrix multiply on the GeForce GTX 760 in 67 ms.
Finished matrix multiply on the Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz in 103 ms.
Finished matrix multiply on the GeForce GT 640 in 282 ms.

Finished all double precision matrix multiplications in parallel in 282 ms.
</pre>
<p>The complete code for this example is given in the section below.  In this run we see the GeForce GTX760 easily finished first in 67ms, followed by the CPU, and then finally by the GeForce GT640.  It&#8217;s expected that the GeForce GT640 would not do well in this example because it&#8217;s optimized for single-precision work and these matrix multiplies are double precision.  Nevertheless, this example shows that it&#8217;s programmatically simple to push work to any NVIDIA GPU, and in a threaded application even a relatively slow GPU can be used to offload work from the CPU.  Also note that the entire program ran in 282ms &#8211; the time required to finish the matrix multiply on the slowest hardware &#8211; verifying that all three tasks did run in parallel and that there was very little overhead in using the TPL or the Adaptive Bridge&trade;.</p>
<p>Below is a snippet of the NMath Premium log file generated during the run above.</p>
<pre class="code">
	Time 		        tid   Device#  Function    Device Used    
2014-04-28 11:22:47.417 AM	10	0	dgemm		GPU
2014-04-28 11:22:47.421 AM	15	1	dgemm		GPU
2014-04-28 11:22:47.425 AM	13	-1	dgemm		CPU
</pre>
<p>We can see here that three threads were created nearly simultaneously with thread ids 10, 15, and 13.  The first two threads ran their matrix multiplies (dgemm) on GPU&#8217;s 0 and 1, and the last thread (13) ran on the CPU.  As a matter of convention the CPU device number is always -1 and all GPU device numbers are integers 0 and greater.  Typically device number 0 is assigned to the fastest installed GPU, and that is the default GPU used by NMath Premium.  </p>
<p>-Paul</p>
<h3> TPL Tasks on Multiple GPU&#8217;s C# Code </h3>
<pre lang="csharp">
public void GPUTaskExample()
    {
     
      NMathConfiguration.Init();

      // Set up a string writer for logging
      using ( var writer = new System.IO.StringWriter() )
      {

        // Enable the CPU/GPU bridge logging
        BridgeManager.Instance.EnableLogging( writer );

        // Get the compute devices we wish to run our tasks on - in this case 
        // two GPU's and the CPU.
        IComputeDevice deviceGPU0 = BridgeManager.Instance.GetComputeDevice( 0 );
        IComputeDevice deviceGPU1 = BridgeManager.Instance.GetComputeDevice( 1 );
        IComputeDevice deviceCPU = BridgeManager.Instance.CPU;

        // Build some matrices
        var A = new DoubleMatrix( 1200, 1400, 0, 1 );
        var B = new DoubleMatrix( 1400, 1300, 0, 1 );

        // Build the task array and assign matrix multiply jobs and compute devices
        // to those tasks.  Any number of tasks can be added here and any number 
        // of tasks can be assigned to a particular device.
        Stopwatch timer = new Stopwatch();
        timer.Start();
        System.Threading.Tasks.Task[] tasks = new Task[3]
        {
          Task.Factory.StartNew(() => MatrixMultiply(deviceGPU0, A, B)),
          Task.Factory.StartNew(() => MatrixMultiply(deviceGPU1, A, B)),
          Task.Factory.StartNew(() => MatrixMultiply(deviceCPU, A, B)),
        };

        // Block until all tasks complete
        Task.WaitAll( tasks );
        timer.Stop();
        Console.WriteLine( "Finished all double precision matrix multiplications in parallel in " + timer.ElapsedMilliseconds + " ms.\n" );

        // Dump the log file for verification.
        Console.WriteLine( writer );

        // Quit logging
        BridgeManager.Instance.DisableLogging();
      
      }
    }

    private static void MatrixMultiply( IComputeDevice device, DoubleMatrix A, DoubleMatrix B )
    {
      // Place this thread to the given compute device.
      BridgeManager.Instance.SetComputeDevice( device );

      Stopwatch timer = new Stopwatch();
      timer.Start();

      // Do this task work.
      NMathFunctions.Product( A, B );

      timer.Stop();
      Console.WriteLine( "Finished matrix multiplication on the " + device.DeviceName  + " in " + timer.ElapsedMilliseconds + " ms.\n" );
    }
    
</pre>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/tasks-on-gpu">Distributing Parallel Tasks on Multiple GPU&#8217;s</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/tasks-on-gpu/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5397</post-id>	</item>
		<item>
		<title>CenterSpace in Chicago and Singapore</title>
		<link>https://www.centerspace.net/centerspace-university-of-chicago</link>
					<comments>https://www.centerspace.net/centerspace-university-of-chicago#respond</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Wed, 18 Jun 2014 19:06:07 +0000</pubDate>
				<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">http://www.centerspace.net/blog/?p=5534</guid>

					<description><![CDATA[<p>NVIDIA GPU Technology Workshop in SE Asia CenterSpace will be giving a presentation at the upcoming GPU Technology Workshop South East Asia on July 10. The conference will be held at the Suntec Singapore Convention &#038; Exhibition Centre. For a full schedule of talks see the agenda. Abstract From CPU to GPU: a comparative case [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/centerspace-university-of-chicago">CenterSpace in Chicago and Singapore</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2> NVIDIA GPU Technology Workshop in SE Asia </h2>
<p>CenterSpace will be giving a presentation at the upcoming <em>GPU Technology Workshop South East Asia</em> on July 10.  The conference will be held at the Suntec Singapore Convention &#038; Exhibition Centre.  For a full schedule of talks see the <a href="http://www.nvidia.com.tw/object/gpu-technology-workshop-2014-sea.html">agenda</a>. </p>
<figure class="testimonial"><figcaption class="attribution">
       <span>Abstract</span><br />
    </figcaption><blockquote>
<p><strong>From CPU to GPU: a comparative case study / Andy Gray – CenterSpace Software</strong></p>
<p> <em>In this code-centric presentation, we will compare and contrast several approaches to a simple algorithmic problem: a straightforward implementation using managed code, a multi-CPU approach using a parallelization library, coupling object-oriented managed abstractions with high-performance native code, and seamlessly leveraging the power of a GPU for massive parallelization without code changes.</em></p>
</blockquote>
</figure>
<p>Andy Gray, a technology evangelist for CenterSpace Software, will be delivering the talk.  We hope to see you there!</p>
<h2> Parallel Computing in Finance Lecture </h2>
<p>The June 5&#8211;6 conference at the University of Chicago, <a href="https://stevanovichcenter.uchicago.edu/recent-developments-in-parallel-computing-in-finance/">Recent Developments in Parallel Computing in Finance</a>, hosted talks by academics in finance as well as speakers from Microsoft, Intel, and CenterSpace.  <strong>CenterSpace</strong> was invited to give a two-hour lecture and tutorial on GPU computing at the Stevanovich Center at the University of Chicago.  We will post the tutorial video from the talk as soon as it becomes available.</p>
<figure class="testimonial"><figcaption class="attribution">
       <span>Abstract</span><br />
    </figcaption><blockquote>
<p><strong>Lecture by Trevor Misfeldt</strong></p>
<p><em>CenterSpace Software, a leading provider of numerical component libraries for the .NET platform, will give an overview of their NMath math and statistics libraries and how they are being used in industry.  The Premium Edition of NMath offers GPU parallelization.  Support for technologies of interest, including Xeon Phi, C++ AMP, and CUDA, will be discussed.  Also discussed will be CenterSpace&#8217;s Adaptive Bridge&#x2122; technology, which provides intelligent, adaptive routing of computations between CPUs and GPUs.  The presentation will finish with a demonstration followed by performance charts. </em></p>
<p><strong>Tutorial by Andy Gray</strong></p>
<p><em>In this hands-on programming tutorial, we will compare and contrast several approaches to a simple algorithmic problem: a straightforward implementation using managed code, a multi-CPU approach using a parallelization library, coupling object-oriented managed abstractions with high-performance native code, and seamlessly leveraging the power of a GPU for massive parallelization.</em>
    </p></blockquote>
</figure>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/centerspace-university-of-chicago">CenterSpace in Chicago and Singapore</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/centerspace-university-of-chicago/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5534</post-id>	</item>
	</channel>
</rss>
