## Posts Tagged ‘robust’

### How does matlab’s hypot function work?

Saturday, January 7th, 2017

The hypot function in matlab purports to compute the hypotenuse of a (right-)triangle in 2D in a more numerically stable way than the naive formula. The example that help hypot gives is:

a = 3*[1e300 1e-300];
b = 4*[1e300 1e-300];
c1 = sqrt(a.^2 + b.^2)
c2 = hypot(a,b)


where you see as output:

c1 =

Inf     0


and

c2 =

5e+300       5e-300


This is a compiled built-in function, so you can’t just type open hypot to find out what it’s doing. It might just be pre-dividing by the maximum absolute value. There’s probably a better reference for this, but I found the trick in: “Vector length and normalization difficulties” by Mike Day.

Continuing this example:

m = max(abs([a;b]))
c3 = m.*sqrt((a./m).^2 + (b./m).^2)


produces

c3 =

5e+300       5e-300


While matlab’s hypot only accepts two inputs, pre-dividing by the maximum obviously extends to any dimension.
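For example, here’s the pre-divide-by-the-max trick as a Python sketch for an n-vector (stable_norm is my own name; this is only a guess at what matlab does internally):

```python
import math

def stable_norm(v):
    # Guard the sum of squares against overflow/underflow by first
    # dividing every entry by the largest absolute value.
    m = max(abs(x) for x in v)
    if m == 0.0:
        return 0.0
    return m * math.sqrt(sum((x / m) ** 2 for x in v))
```

Each (x/m)**2 term now lies in [0,1], so the intermediate sum can neither overflow nor underflow to zero.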

### Triangle area in nD

Saturday, January 7th, 2017

My typical go-to formula for the area of a triangle with vertices in nD is Heron’s formula. Actually this is a formula for the area of a triangle given its side lengths. Presumably it’s easy to compute side lengths in any dimension: e.g., a = sqrt((B-C).(B-C)).

Kahan showed that the vanilla Heron’s formula is numerically poor if your (valid) triangle side lengths are given as floating point numbers. He improved this formula by sorting the side-lengths and carefully rearranging the terms in the formula to reduce roundoff.
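In Python, Kahan’s version might look like this (my transcription; the parenthesization is deliberate and must not be “simplified” away):

```python
import math

def kahan_triangle_area(a, b, c):
    """Kahan's numerically careful rearrangement of Heron's formula.
    Sorts so that a >= b >= c, then evaluates with his exact
    parenthesization to reduce roundoff."""
    a, b, c = sorted((a, b, c), reverse=True)
    # Do not remove these parentheses: the grouping is the whole point.
    s = (a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c))
    if s < 0:
        # Kahan's assertion: the side lengths violate the triangle
        # inequality, so no (flat) triangle has these lengths.
        return math.nan
    return math.sqrt(s) / 4.0
```

When the inputs fail Kahan’s validity check, this returns NaN rather than silently producing a bad area.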

However, if your vertices don’t actually live in R^n but rather F^n, where F is the fixed precision floating-point grid, then computing side lengths will in general incur some roundoff error. And it may so happen (and it does so happen) that this roundoff error invalidates the triangle side lengths. Here, invalid means that the side lengths don’t obey the triangle inequality. This might happen for nearly degenerate (zero area) triangles: one (floating point) side length ends up longer than the sum of the others. Kahan provides an assertion to check for this, but there’s no proposed remedy that I could find. One could just declare that these edge-lengths must have come from a zero-area triangle. But returning zero might let the user happily work with very nonsensical triangle side lengths in other contexts not coming from an embedded triangle. You could have two versions: one for “known embeddings” (with assertions off and returning 0) and one for “intrinsic/metric only” (with assertions on and returning NaN). Or you could try to fudge in an epsilon and hope you choose it above the roundoff error threshold but below the tolerance for perceived “nonsense”. These are messy solutions.

An open question (open in the personal sense of “I don’t know the answer”) is: what is the most robust way to compute triangle area in nD with floating point vertex positions? A possible answer might be: compute the darn floating point edge lengths, use Kahan’s algorithm and replace NaNs with zeros. That seems unlikely to be the most robust option because (as far as I can tell) it involves 4 sqrt calls, each a major source of floating point error.

Re-reading Shewchuk’s robustness notes, I saw his formula for triangle area for points A,B,C in 3D:

Area3D(A,B,C) = sqrt( Area2D(Axy,Bxy,Cxy)^2 + Area2D(Ayz,Byz,Cyz)^2 + Area2D(Azx,Bzx,Czx)^2 ) / 2

where Area2D(A,B,C) is computed as the determinant of [[A;B;C] 1] (which is twice the signed area, hence the division by 2). This formula reduces the problem of computing 3D triangle area into computing 2D triangle area on the “shadows” (projections) of the triangle on each axis-aligned plane. This led me to think of a natural generalization to nD:

AreaND(A,B,C) = sqrt( ∑_{i<j} Area2D(Aij,Bij,Cij)^2 ) / 2

This formula computes the area of the shadow on all (N choose 2) axis-aligned planes. Since the sqrt receives a sum of squares as its argument, there’s no risk of getting a NaN. The clear drawback is that this formula is O(N^2) versus Heron’s/Kahan’s O(N).
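Here’s a Python sketch of this generalization (my area_2d helper folds the factor of 1/2 into each shadow, so the outer division by 2 disappears):

```python
import math
from itertools import combinations

def area_2d(ax, ay, bx, by, cx, cy):
    # Signed area of a 2D triangle via the cross-product determinant,
    # with the 1/2 factor already included.
    return 0.5 * ((bx - ax) * (cy - ay) - (by - ay) * (cx - ax))

def area_nd(A, B, C):
    # Area of a triangle with vertices in n dimensions: the
    # root-sum-of-squares of the signed areas of its shadows on all
    # (n choose 2) axis-aligned coordinate planes.  The radicand is a
    # sum of squares, so this can never produce a NaN.
    s = 0.0
    for i, j in combinations(range(len(A)), 2):
        s += area_2d(A[i], A[j], B[i], B[j], C[i], C[j]) ** 2
    return math.sqrt(s)
```

For n = 3 this reduces to Shewchuk’s formula above; the pair loop is where the O(N^2) cost shows up.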

### Stable triangle normals

Wednesday, February 11th, 2015

Here’s one way to compute a triangle’s normal using floating point arithmetic that is stable with respect to the order the triangle’s vertices are given in. To demonstrate the issue, consider:

% Vertex positions
A = [1/8 1 1/6];
B = [1/3 1/5 1/7];
C = [1/4 1/9 1/2];
% normals via cross product of different edge vectors
NB = cross(A-B,C-B);
NC = cross(B-C,A-C);
NA = cross(C-A,B-A);
NA-NB
NB-NC


producing

ans =
0            0  -2.7756e-17
ans =
5.5511e-17   1.3878e-17            0


Computing the normals three different ways, I get three different floating point errors. The right thing to do, I guess, is to analyze the vertex positions and design a normal computation that minimizes floating point error (the solution hopefully being stable via uniqueness). I took a lazier route and instead take a careful average of the three solutions. This might also have the effect of cutting down floating point error a bit. But at least it’s stable.

Unfortunately it’s not enough to naively sum NA, NB, and NC entry-wise and divide by 3. Floating point addition is not associative, so the summation could incur roundoff error depending on the order of the three arguments. So first we must sort the arguments. If we’re going to the trouble of sorting, we might as well sort in a way that reduces roundoff error: descending order of absolute value. The matlab syntax for this is a little clunky:

% Stack the three normals as columns
NABC = [NA(:) NB(:) NC(:)];
% Sort each row in descending order of absolute value
[~,I] = sort(abs(NABC),2,'descend');
sNABC = reshape( ...
  NABC(sub2ind(size(NABC),repmat(1:size(NABC,1),size(NABC,2),1)',I)),size(NABC));
% Sum in that order, average, and restore the original shape
N = 1/3*reshape(sum(sNABC,2),size(NA));
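For reference, here’s the whole idea as a self-contained Python sketch (stable_normal is my name for it; it mirrors the matlab above without the sub2ind gymnastics):

```python
def stable_normal(A, B, C):
    """Order-insensitive triangle normal: compute the cross product
    rooted at each of the three vertices, then average entry-wise,
    summing each entry's three contributions in descending order of
    absolute value so the result does not depend on which vertex is
    listed first."""
    def sub(u, v):
        return [u[k] - v[k] for k in range(3)]
    def cross(u, v):
        return [u[1] * v[2] - u[2] * v[1],
                u[2] * v[0] - u[0] * v[2],
                u[0] * v[1] - u[1] * v[0]]
    NB = cross(sub(A, B), sub(C, B))
    NC = cross(sub(B, C), sub(A, C))
    NA = cross(sub(C, A), sub(B, A))
    N = []
    for k in range(3):
        # Sorting makes the summation order independent of input order.
        terms = sorted((NA[k], NB[k], NC[k]), key=abs, reverse=True)
        N.append((terms[0] + terms[1] + terms[2]) / 3.0)
    return N
```

Cyclically permuting the vertices now gives bitwise-identical output, which the naive average does not guarantee.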