Given the squared edge-lengths of a triangle as a, b, c > 0, how can we accurately compute the interior angles?
Kahan's oft-cited method, which carefully rearranges parentheses, assumes (unsquared) edge lengths.
Another popular method (p. 15) assumes edge vectors.
My best bet so far is to use the Law of Cosines:
float angle = acos( (b+a-c) / (2.*sqrt(b*a)) );
but for some non-degenerate inputs this will return 0.0.
Admittedly, the way I'm generating valid input is by computing squared lengths of edge vectors, but for the purpose of this question I'd like to assume no access to the original edge vectors. Here's a tricky test case:
// Triangle (0,0) (1,1) (1+e,1)
float e = 1e-7;
float a = ((1+e)-1)*((1+e)-1);
float b = (1+e)*(1+e) + 1*1;
float c = 1*1 + 1*1;
The ground-truth angles are approximately:
4.9999997529193436e-08 2.3561944901923448 0.7853981133974508
If I immediately take sqrt of a,b,c and apply Kahan's method I get:
0 3.14159 0
If I apply Law of Cosines, I get:
0 2.35619 0.785398
which is better, but I don't like the exact ==0 where the correct value is >0.
Further, if I scale a,b,c by a non-power of two, then Law of Cosines gives a very wrong result:
0 2.02856 1.11303
Is there a numerically more accurate way to compute angles directly from squared edge lengths?
Or...
Am I simply asking too much of floats here?
Are my squared edge-lengths in float violating some kind of triangle inequality?
My working implementations and the numbers above were produced with clang++ main.cpp -g && ./a.out on macOS.
After experimenting for many hours with re-arranging floating-point expressions and exploiting fused multiply-add (FMA) and compensated sums, I concluded that computing via the Law of Cosines is not going to work robustly, not even when switching to double-float arithmetic (also known as pair arithmetic). As already noted in the question, Kahan's numerically superior arrangement by itself is insufficient, since the need to take square roots already injects numerical error before the angle computation starts. However, I observed that performing the intermediate computation in double arithmetic led to a robust implementation. Since the asker's requirements preclude the use of double computation, this leaves us with double-float computation as a back-up, which of course has a significant performance impact even on platforms with FMA support. A "smoke" test indicates that a straightforward translation of Kahan's algorithm specification then yields an implementation capable of providing all angles of a triangle with a relative error of less than 2^-23.
For the C++11 code below I assumed that the target platform supports FMA, which is thus available to accelerate the double-float functions. My test framework is based on a very old arbitrary-precision library that I have been using for the past 30 years: R. P. Brent's MP from 1978. I configure it for 250-bit precision. The reference function using the MP library computes the angles with the Law of Cosines to provide a robust unit test. This portion of the code would have to be replaced to utilize commonly-used modern libraries of this kind. I built with the Intel C/C++ compiler with full optimization and maximum conformance to IEEE-754 floating-point requirements (/fp:strict).
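As a reader aid, here is a plain-double sketch of the scalar skeleton of Kahan's angle formula (for unsquared side lengths, angle opposite c, requiring a >= b). This transcription and its names are mine; the double-float implementation replaces each arithmetic operation with its pair-arithmetic counterpart.

```cpp
#include <cmath>
#include <cassert>

// Kahan's arrangement for the angle opposite side c, given unsquared side
// lengths with a >= b (callers swap a and b if necessary). The carefully
// placed parentheses avoid catastrophic cancellation for needle-like
// triangles; atan2 of the two square roots is equivalent to Kahan's
// 2*atan(sqrt(num/den)) for nonnegative arguments.
double kahan_angle(double a, double b, double c) {
    assert(a >= b);
    double mu;
    if (b >= c && c >= 0.0) {
        mu = c - (a - b);
    } else if (c > b && b >= 0.0) {
        mu = b - (a - c);
    } else {
        return NAN; // side lengths do not form a valid triangle
    }
    return 2.0 * std::atan2(std::sqrt(((a - b) + c) * mu),
                            std::sqrt((a + (b + c)) * ((a - c) + b)));
}
```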