
c# - Why does Math.Exp give different results between 32-bit and 64-bit, with same input, same hardware

I am using .NET 2.0 with PlatformTarget x64 and x86. I am giving Math.Exp the same input number, and it returns different results on the two platforms.

MSDN says you can't rely on a literal/parsed Double to represent the same number between platforms, but I think my use of Int64BitsToDouble below avoids this problem and guarantees the same input to Math.Exp on both platforms.

My question is: why are the results different? I would have thought that:

  • the input is stored in the same way (double/64-bit precision)
  • the FPU would do the same calculations regardless of the processor's bitness
  • the output is stored in the same way

I know that in general I should not compare floating-point numbers beyond the 15th to 17th significant digit, but I am confused by the inconsistency here, given what looks like the same operation on the same hardware.

Anyone know what's going on under the hood?

double d = BitConverter.Int64BitsToDouble(-4648784593573222648L); // same as Double.Parse("-0.0068846153846153849") but with no concern about losing digits in conversion
Debug.Assert(d.ToString("G17") == "-0.0068846153846153849"
    && BitConverter.DoubleToInt64Bits(d) == -4648784593573222648L); // true on both 32 & 64 bit

double exp = Math.Exp(d);

Console.WriteLine("{0:G17} = {1}", exp, BitConverter.DoubleToInt64Bits(exp));
// 64-bit: 0.99313902928727449 = 4607120620669726947
// 32-bit: 0.9931390292872746  = 4607120620669726948

The results are consistent on both platforms with JIT turned on or off.
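
To quantify the difference, here is a quick check (using only the two bit patterns printed above) showing that the two results are adjacent doubles, i.e. exactly one ULP apart:

long bits64 = 4607120620669726947L; // Math.Exp result observed on 64-bit
long bits32 = 4607120620669726948L; // Math.Exp result observed on 32-bit

Console.WriteLine(bits32 - bits64); // 1 => adjacent bit patterns, one ULP apart
Console.WriteLine(BitConverter.Int64BitsToDouble(bits32)
                - BitConverter.Int64BitsToDouble(bits64)); // ~1.11E-16 (2^-53, the ULP size just below 1.0)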

[Edit]

I'm not completely satisfied with the answers below so here are some more details from my searching.

http://www.manicai.net/comp/debugging/fpudiff/ says that:

So 32-bit is using the 80-bit FPU registers, 64-bit is using the 128-bit SSE registers.

And the CLI Standard says that doubles can be represented with higher precision if the hardware supports it:

[Rationale: This design allows the CLI to choose a platform-specific high-performance representation for floating-point numbers until they are placed in storage locations. For example, it might be able to leave floating-point variables in hardware registers that provide more precision than a user has requested. At the same time, CIL generators can force operations to respect language-specific rules for representations through the use of conversion instructions. end rationale]

http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-335.pdf (12.1.3 Handling of floating-point data types)

I think this is what is happening here, because the results only diverge after Double's standard 15 significant digits. Per the quote above, it is the 32-bit JIT that uses the x87's 80-bit registers internally, while the 64-bit JIT uses SSE's 64-bit arithmetic, so intermediate rounding differs between the two and the final results land one ULP apart. (The extra digit in the 64-bit output is a formatting artifact: G17 prints up to 17 significant digits and trims trailing zeros, and the 32-bit value happens to end in a zero at the 17th digit.)
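
To illustrate (a minimal sketch with made-up values, not the code from the question): on the 32-bit JIT an intermediate result may be held in an 80-bit x87 register, while an explicit cast should force it back to 64-bit precision via a conv.r8 conversion instruction, as the rationale above describes:

double a = 1e16;
double b = 1.0;
// 1e16 + 1 is exact in an 80-bit x87 register but rounds back to 1e16 in
// 64-bit precision, so this may print 1 on the 32-bit JIT and 0 under SSE.
double raw = a + b - a;
// The explicit cast is intended to compile to conv.r8, rounding the
// intermediate to 64-bit double regardless of which registers were used,
// so this is expected to print 0 on both platforms.
double forced = (double)(a + b) - a;
Console.WriteLine("{0} {1}", raw, forced);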



1 Answer


Yes, rounding errors, and it is effectively NOT the same hardware. The 32-bit version targets a different instruction set and different register sizes.
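
If you need results like these to compare equal across platforms, a common workaround is to compare doubles within a small ULP tolerance instead of exactly. A sketch (AlmostEqualUlps is just an illustrative name, not a framework method):

static bool AlmostEqualUlps(double x, double y, long maxUlps)
{
    long bx = BitConverter.DoubleToInt64Bits(x);
    long by = BitConverter.DoubleToInt64Bits(y);
    // For finite doubles of the same sign, adjacent values differ by exactly
    // 1 in their Int64 bit pattern, so the difference counts ULPs.
    if ((bx < 0) != (by < 0))
        return x == y; // opposite signs: only +0.0 and -0.0 compare equal
    return Math.Abs(bx - by) <= maxUlps;
}

// The two results in the question are one ULP apart, so:
// AlmostEqualUlps(0.99313902928727449, 0.9931390292872746, 1) == true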

