<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>Object-Oriented Numerics Archives - CenterSpace</title>
	<atom:link href="https://www.centerspace.net/category/object-oriented-numerics/feed" rel="self" type="application/rss+xml" />
	<link>https://www.centerspace.net/category/object-oriented-numerics</link>
	<description>.NET numerical class libraries</description>
	<lastBuildDate>Mon, 23 Nov 2015 18:51:56 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.1.1</generator>
<site xmlns="com-wordpress:feed-additions:1">104092929</site>	<item>
		<title>Precision and Reproducibility in Computing</title>
		<link>https://www.centerspace.net/precision-and-reproducibility-in-computing</link>
					<comments>https://www.centerspace.net/precision-and-reproducibility-in-computing#respond</comments>
		
		<dc:creator><![CDATA[Paul Shirkey]]></dc:creator>
		<pubDate>Mon, 16 Nov 2015 22:32:31 +0000</pubDate>
				<category><![CDATA[MKL]]></category>
		<category><![CDATA[NMath]]></category>
		<category><![CDATA[Object-Oriented Numerics]]></category>
		<category><![CDATA[Performance]]></category>
		<category><![CDATA[floating point precision]]></category>
		<category><![CDATA[MKL repeatability]]></category>
		<category><![CDATA[MKL reproducibility]]></category>
		<category><![CDATA[NMath repeatability]]></category>
		<category><![CDATA[NMath Reproducibility]]></category>
		<category><![CDATA[repeatability]]></category>
		<category><![CDATA[repeatability in computing]]></category>
		<category><![CDATA[Reproducibility in computing]]></category>
		<guid isPermaLink="false">http://www.centerspace.net/blog/?p=5810</guid>

					<description><![CDATA[<p>Run-to-run reproducibility in computing is often assumed as an obvious truth.  However, software running on modern computer architectures, particularly when coupled with performance-optimized libraries, is often guaranteed to produce reproducible results only up to a certain precision; beyond that, results can and do vary from run to run.</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/precision-and-reproducibility-in-computing">Precision and Reproducibility in Computing</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Run-to-run reproducibility in computing is often assumed as an obvious truth.  However, software running on modern computer architectures, particularly when coupled with performance-optimized libraries, is often guaranteed to produce reproducible results only up to a certain precision; beyond that, results can and do vary from run to run.  Reproducibility is interrelated with the precision of floating-point types and the resultant rounding, operation re-ordering, memory structure and use, and finally how real numbers are represented internally in a computer&#8217;s registers.  </p>
<p>This issue of reproducibility arises for <strong>NMath</strong> users when writing and running unit tests, which is why it&#8217;s important to compare floating point numbers only up to their designed precision, at an absolute maximum.  In the IEEE 754 floating point representation, to which virtually all modern computers adhere, the single precision <code>float</code> type uses 32 bits (4 bytes) and offers 24 bits of precision, or about <em>7 decimal digits</em>, while the double precision <code>double</code> type uses 64 bits (8 bytes) and offers 53 bits of precision, or about <em>15 decimal digits</em>.  Few algorithms can achieve significant results to the 15th decimal place due to rounding, loss of precision in subtraction, and other sources of numerical degradation.  <strong>NMath&#8217;s</strong> numerical results are tested, at a maximum, to the 14th decimal place.</p>
<h4 style="padding-left: 30px;"><em>A Precision Example</em></h4>
<p style="padding-left: 30px;">As an example, what does the following code output?</p>
<pre style="padding-left: 30px;" lang="csharp">      double x = .050000000000000003;
      double y = .050000000000000000;
      if ( x == y )
        Console.WriteLine( "x is y" );
      else
        Console.WriteLine( "x is not y" );
</pre>
<p style="padding-left: 30px;">I get &#8220;x is y&#8221;, which is mathematically not the case, but the number x as specified is beyond the precision of a <code>double</code> type.</p>
<p>Due to these limits on decimal number representation and the resulting rounding, the numerical results of some operations can be affected by the associative reordering of operations. For example, in some cases <code>a*x + a*z</code> may not equal <code>a*(x + z)</code> with floating point types.  This can be difficult to test with modern optimizing compilers, because the code you write and the code that runs can be organized very differently: mathematically equivalent, but not necessarily numerically equivalent.</p>
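<p>A quick way to see such reordering effects, using addition order rather than the distributive form above (illustrative values only):</p>
<pre lang="csharp">double a = 0.1, b = 0.2, c = 0.3;
// Mathematically equal sums, but the intermediate roundings differ:
// ( a + b ) + c rounds to 0.6000000000000001, a + ( b + c ) to 0.6.
System.Console.WriteLine( ( a + b ) + c == a + ( b + c ) );  // False
</pre>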
<p>So <em>reproducibility</em> is impacted by precision via dynamic operation reordering in the ALU, and additionally by run-time processor dispatching, data-array alignment, and variation in thread count, among other factors.  These issues can create <em>run-to-run</em> differences in the least significant digits.  Two runs, same code, two answers.  <em>This is by design and is not an issue of correctness</em>.  Subtle changes in the memory layout of the program&#8217;s data, differences in the loading of ALU registers and in operation order, and differences in threading, all caused by unrelated processes running on the same machine, produce these run-to-run differences. </p>
<h3> Managing Reproducibility </h3>
<p>Most importantly, one should test code&#8217;s numerical results only to the precision that can be expected by the algorithm, input data, and finally the limits of floating point arithmetic.  To do this in unit tests, compare floating point numbers carefully only to a fixed number of digits.  The code snippet below compares two double numbers and returns true only if the numbers match to a specified number of digits.  </p>
<pre lang="csharp">
private static bool EqualToNumDigits( double expected, double actual, int numDigits )
{
  double max = System.Math.Abs( expected ) > System.Math.Abs( actual ) ?
    System.Math.Abs( expected ) : System.Math.Abs( actual );
  double diff = System.Math.Abs( expected - actual );
  double relDiff = max > 1.0 ? diff / max : diff;
  if ( relDiff <= DOUBLE_EPSILON )
  {
    return true;
  }

  int numDigitsAgree = (int) ( -System.Math.Floor( System.Math.Log10( relDiff ) ) - 1 );
  return numDigitsAgree >= numDigits;
}
</pre>
<p>This type of comparison should be used throughout unit testing code.  The full code listing, which we use for our internal testing, is provided at the end of this article.</p>
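<p>For example, in a unit test the comparison might be used like this (hypothetical values; <code>EqualToNumDigits</code> is the snippet above, with <code>DOUBLE_EPSILON</code> defined as a small positive constant):</p>
<pre lang="csharp">double expected = 0.3;
double actual = 0.1 + 0.2;   // differs from 0.3 only in the 17th significant digit
System.Console.WriteLine( expected == actual );                        // False
System.Console.WriteLine( EqualToNumDigits( expected, actual, 14 ) );  // True
</pre>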
<p>If it is essential to enforce binary run-to-run reproducibility to the limits of precision, <strong>NMath</strong> provides a flag in its configuration class to ensure this is the case.  However, this flag should generally be set only for unit testing, because there can be a significant performance cost.  In general, expect a 10% to 20% reduction in performance, with some common operations degrading far more than that.  For example, some matrix multiplications will take twice as long with this flag set.</p>
<p>Note that the number of threads used by Intel&#8217;s MKL library (which <strong>NMath</strong> depends on) must also be fixed before setting the reproducibility flag.</p>
<pre lang="csharp">
int numThreads = 2;  // This must be fixed for reproducibility.
NMathConfiguration.SetMKLNumThreads( numThreads );
NMathConfiguration.Reproducibility = true;
</pre>
<p>This reproducibility configuration for <strong>NMath</strong> cannot be unset at a later point in the program.  Note that both the number of threads and the reproducibility flag may also be set in the app config file or in environment variables.  See the <a href="https://www.centerspace.net/doc/NMath/user/overview-83549.htm#Xoverview-83549">NMath User Guide</a> for instructions on how to do this. </p>
<p>Paul</p>
<p><strong>References</strong></p>
<p>M. A. Cornea-Hasegan, B. Norin.  <em>IA-64 Floating-Point Operations and the IEEE Standard for Binary Floating-Point Arithmetic</em>. Intel Technology Journal, Q4, 1999.<br />
<a href="http://gec.di.uminho.pt/discip/minf/ac0203/icca03/ia64fpbf1.pdf">http://gec.di.uminho.pt/discip/minf/ac0203/icca03/ia64fpbf1.pdf</a></p>
<p>D. Goldberg, <em>What Every Computer Scientist Should Know About Floating-Point Arithmetic</em>. Computing Surveys. March 1991.<br />
<a href="http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html">http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html</a></p>
<h3> Full <code>double</code> Comparison Code </h3>
<pre lang="csharp">
private static bool EqualToNumDigits( double expected, double actual, int numDigits )
{
  bool xNaN = double.IsNaN( expected );
  bool yNaN = double.IsNaN( actual );
  if ( xNaN && yNaN )
  {
    return true;
  }
  if ( xNaN || yNaN )
  {
    return false;
  }
  if ( numDigits <= 0 )
  {
    throw new InvalidArgumentException( "numDigits is not positive in TestCase::EqualToNumDigits." );
  }

  double max = System.Math.Abs( expected ) > System.Math.Abs( actual ) ?
    System.Math.Abs( expected ) : System.Math.Abs( actual );
  double diff = System.Math.Abs( expected - actual );
  double relDiff = max > 1.0 ? diff / max : diff;
  if ( relDiff <= DOUBLE_EPSILON )
  {
    return true;
  }

  int numDigitsAgree = (int) ( -System.Math.Floor( System.Math.Log10( relDiff ) ) - 1 );
  // Console.WriteLine( "expected = {0}, actual = {1}, rel diff = {2}, diff = {3}, num digits = {4}", expected, actual, relDiff, diff, numDigitsAgree );
  return numDigitsAgree >= numDigits;
}
</pre>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/precision-and-reproducibility-in-computing">Precision and Reproducibility in Computing</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/precision-and-reproducibility-in-computing/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5810</post-id>	</item>
		<item>
		<title>Complex division by zero</title>
		<link>https://www.centerspace.net/complex-division-by-zero</link>
					<comments>https://www.centerspace.net/complex-division-by-zero#comments</comments>
		
		<dc:creator><![CDATA[Steve Sneller]]></dc:creator>
		<pubDate>Fri, 05 Dec 2008 18:31:31 +0000</pubDate>
				<category><![CDATA[Object-Oriented Numerics]]></category>
		<category><![CDATA[complex number divide by zero]]></category>
		<category><![CDATA[complex number division]]></category>
		<category><![CDATA[complex numbers]]></category>
		<category><![CDATA[computing with complex numbers]]></category>
		<guid isPermaLink="false">http://www.centerspace.net/blog/?p=4</guid>

					<description><![CDATA[<p>An NMath customer submitted the following support question: I&#8217;m working on the primitives (NMathCoreShared.dll) and have found a rather odd &#8216;quirk&#8217; with complex division by zero: DoubleComplex aa = new DoubleComplex(0.0, 0.0); DoubleComplex bb = new DoubleComplex(5.2, -9.1); DoubleComplex cc = new DoubleComplex(); cc = bb/aa; Console.WriteLine(cc); // (NaN,NaN) double g = -5.0 / 0.0; [&#8230;]</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/complex-division-by-zero">Complex division by zero</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>An NMath customer submitted the following support question:</p>
<blockquote><p>I&#8217;m working on the primitives (NMathCoreShared.dll) and have found a rather odd &#8216;quirk&#8217; with complex division by zero:</p>
<pre>DoubleComplex aa = new DoubleComplex(0.0, 0.0);
DoubleComplex bb = new DoubleComplex(5.2, -9.1);
DoubleComplex cc = new DoubleComplex();

cc = bb/aa;
Console.WriteLine(cc);  // (NaN,NaN)

double g = -5.0 / 0.0;
Console.WriteLine(g);   // -Infinity</pre>
<p>On my machine (Athlon FX), I get the value NaN back when I&#8217;m pretty sure it should be &#8216;Inf&#8217;?</p></blockquote>
<p>The code implementing complex division in the NMath complex number class does not check to see if the divisor is zero. It just applies the formula for complex division in terms of the operands&#8217; real and imaginary parts:</p>
<pre lang="eq.latex">\frac{a + bi}{c + di} = \frac{ac + bd}{c^2 + d^2} + i\frac{bc - ad}{c^2 + d^2}     (1)</pre>
<p>As you can see, when <em>c = d = 0</em>, both the real and imaginary components of the quotient take the form 0/0. Now, the IEEE standard says that for real numbers, <em>a/0</em> with <em>a</em> not equal to 0 should yield +∞ or -∞, where the sign depends on whether <em>a &gt; 0</em> or <em>a &lt; 0</em>, and that 0/0 should yield NaN (Not a Number). So a direct application of the formula for complex division, without &#8220;special casing&#8221; the zero denominator, will yield a result of (NaN, NaN).</p>
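<p>A direct transcription of formula (1), with no special casing, shows exactly where the (NaN, NaN) comes from (a hypothetical sketch for illustration, not NMath&#8217;s actual source):</p>
<pre lang="csharp">// Naive complex division per formula (1); no check for a zero divisor.
static void Divide( double a, double b, double c, double d,
                    out double re, out double im )
{
  double denom = c * c + d * d;    // 0 when c = d = 0
  re = ( a * c + b * d ) / denom;  // 0/0 yields NaN
  im = ( b * c - a * d ) / denom;
}
// Divide( 5.2, -9.1, 0.0, 0.0, out double re, out double im )
// leaves both re and im equal to NaN.
</pre>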
<p>One might be tempted to say &#8220;OK, that&#8217;s the end of that. (NaN, NaN) is obviously the correct behavior&#8221;. However, this does not sit very well with intuition. If I have a fixed complex number <em>w</em> which is divided by another, variable complex number <em>z</em>, it seems that as <em>z</em> gets smaller and smaller, the quotient should get bigger and bigger. There is a slight technicality here&#8211;there really is no notion of bigger and smaller for complex numbers. That is, there is no ordering of the complex numbers that is consistent with their algebraic structure. In particular there is no notion of negative and positive complex numbers, so the concepts of positive and negative infinity are not well defined. Let&#8217;s take a look at some theoretical background in the hopes it will help guide our thinking.</p>
<p>Let&#8217;s start with something more familiar. What does it mean to divide a real number by zero? In a purely algebraic sense, if <em>a</em> and <em>b</em> are two real numbers then the number <em>a ÷ b</em> is the solution to the equation <em>b * x = a</em>. When <em>b = 0</em> this equation has no solution and so the quotient is not defined (unless we extend the real numbers to include infinity and define algebraic operations on infinity, but let&#8217;s not go there now as it isn&#8217;t really germane to our conversation). Where does infinity come in? Well, one can also look at the question from an analysis point of view, which deals in functions and limits. With this perspective, given a real number <em>a</em> one defines a <em>division</em> function <em>f<sub>a</sub>(x) = a ÷ x</em>. This function is well defined for all <em>x</em> not equal to 0. But analysis also deals with <em>limits</em>, and one would naturally ask what happens when <em>x</em> gets <em>close</em> to 0. The answer is that as <em>x → 0, f<sub>a</sub> → ∞</em>. Formally, this means the following: given any real number <em>M &gt; 0</em>, there exists a corresponding real number <em>δ &gt; 0</em> such that |<em>f<sub>a</sub>(x)</em>| <em>&gt; M</em> whenever 0 &lt; |<em>x</em>| &lt; <em>δ</em>. Now, we can apply this definition to complex numbers since we have a notion of absolute value for complex numbers: if <em>z = a + ib</em> is a complex number, then the modulus of <em>z</em>, |<em>z</em>|, is defined as</p>
<pre lang="eq.latex">|z|=\sqrt{a^2 + b^2}</pre>
<p>That is, it is the Euclidean distance from the point <em>z = (a,b)</em> to the origin in the complex plane. From this it is clear that as a complex number tends to zero, so must its real and imaginary parts. So as the denominator in equation (1) above tends to zero, so do its real and imaginary parts <em>c</em> and <em>d</em>. The fact that <em>c</em> and <em>d</em> are squared in the denominator of the real and imaginary parts of the quotient means that they will tend to zero faster than their corresponding numerators and hence, using an argument based on L&#8217;Hôpital&#8217;s Rule, or a straight <em>δ, M</em> argument, we can see that both the real and imaginary parts of the quotient tend to infinity as the denominator tends toward zero. Whether it is positive or negative infinity is another question.</p>
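<p>The limit argument is easy to observe numerically; shrinking the divisor drives the modulus of the quotient upward without bound (illustrative values, using .NET&#8217;s later <code>System.Numerics.Complex</code> type rather than <code>DoubleComplex</code>):</p>
<pre lang="csharp">var w = new System.Numerics.Complex( 1.0, 1.0 );
for ( int k = 1; k <= 4; k++ )
{
  var z = new System.Numerics.Complex( System.Math.Pow( 10.0, -k ), 0.0 );
  // |w / z| = |w| / |z|, so the modulus grows by roughly 10x per step.
  System.Console.WriteLine( ( w / z ).Magnitude );
}
</pre>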
<p>But enough theoretical talk. Let&#8217;s look at something concrete. Unlike C#, the standard C++ library includes a complex number class. What do they do when dividing by zero? Here&#8217;s the result I get with Microsoft&#8217;s C++ complex class:</p>
<p>Microsoft Visual C++ 15.0</p>
<pre>(1,1)/(0,0) = (1.#QNAN,1.#QNAN)
(0,0)/(0,0) = (1.#QNAN,1.#QNAN)
1/0 = 1.#INF
0/0 = -1.#IND</pre>
<p>Hmm. They seem to just apply the formulas and get 0/0 for the real and imaginary parts. How about the GNU C++ compiler?</p>
<p>g++ 4.3.2</p>
<pre>(1,1)/(0,0) = (inf,inf)
(0,0)/(0,0) = (nan,nan)
1/0 = inf
0/0 = nan</pre>
<p>Interesting. They return infinite real and imaginary parts. What&#8217;s more interesting is that the two compilers do not agree. And just for fun, here&#8217;s the result with the complex class included in Microsoft&#8217;s F# compiler:</p>
<p>F#</p>
<pre>(1,1)/(0,0) = NaNr+NaNi
(0,0)/(0,0) = NaNr+NaNi
1/0 = Infinity
0/0 = NaN</pre>
<p>No surprise here. At least Microsoft is consistent.</p>
<p>What does the GNU compiler do about positive and negative infinity in its results?</p>
<pre>(1,1)/(0,0) = (inf,inf)
(-1,-1)/(0,0) = (-inf,-inf)
(1,-1)/(0,0) = (inf,-inf)
(-1,1)/(0,0) = (-inf,inf)</pre>
<p>Apparently, they have adopted the convention that the sign of infinity in the real and imaginary parts of the quotient is the same as the signs of the real and imaginary parts of the numerator, respectively.</p>
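<p>That convention could be implemented by special casing the zero denominator, letting IEEE real division supply the signs (a hypothetical sketch of the g++ convention, not what NMath does):</p>
<pre lang="csharp">// Complex division with the g++-style convention for a zero divisor:
// each infinite part takes the sign of the corresponding numerator part.
static void Divide( double a, double b, double c, double d,
                    out double re, out double im )
{
  if ( c == 0.0 && d == 0.0 )
  {
    re = a / 0.0;  // +Infinity, -Infinity, or NaN when a is 0
    im = b / 0.0;
    return;
  }
  double denom = c * c + d * d;
  re = ( a * c + b * d ) / denom;
  im = ( b * c - a * d ) / denom;
}
// Divide( 1.0, -1.0, 0.0, 0.0, ... ) gives ( Infinity, -Infinity ),
// matching the g++ output above, while (0,0)/(0,0) still gives (NaN, NaN).
</pre>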
<p>So, what should NMath do? On the one hand .NET is a Microsofty kind of thing, so maybe we should be consistent with their C++ implementation. On the other hand g++ is a widely distributed compiler and they seem to take the moral high road here. What would you do?</p>
<p>Steve</p>
<p>The post <a rel="nofollow" href="https://www.centerspace.net/complex-division-by-zero">Complex division by zero</a> appeared first on <a rel="nofollow" href="https://www.centerspace.net">CenterSpace</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.centerspace.net/complex-division-by-zero/feed</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4</post-id>	</item>
	</channel>
</rss>
