## Archive for July, 2014

### Small typo in “Shape Decomposition Using Modal Analysis”

Wednesday, July 30th, 2014

Upon implementing the modal analysis part of “Shape Decomposition Using Modal Analysis” [Huang et al. 2009], I noticed a small typo in the derivation.

The issue regards the construction of the matrix C_i in Equation 6. C_i is responsible for the quadratic terms involving c_i. Namely,

∑_{j∈N(i)} w_ij || c_i × (p_i − p_j) ||^2


The squared norm of a cross product can be written as the product of the individual squared norms minus the squared dot product: ||a × b||^2 = ||a||^2 ||b||^2 − (a·b)^2. Or in our case:

∑_{j∈N(i)} w_ij || c_i ||^2 || p_i − p_j ||^2 − w_ij ( c_i · (p_i − p_j) )^2


Now, in the 2009 paper only the latter term is included in C_i, as it is equivalent to a covariance matrix (a sum of outer products):

c_i' C_i c_i = c_i' ( ∑_{j∈N(i)} w_ij (p_i − p_j) (p_i − p_j)' ) c_i


where a’ is the transpose of a.

C_i should also contain a diagonal term accounting for ∑_{j∈N(i)} w_ij || c_i ||^2 || p_i − p_j ||^2. This is of course easy to add to C_i.

Update: Huang replied to me with another way to write the construction of C_i correctly, using notation more familiar from the original paper:

∑_j w_ij (p_i − p_j)_× (p_i − p_j)_×',


where a_× is the skew-symmetric matrix representing the cross product with a, and it follows that a_× a_×' = ||a||^2 I − a a'.
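As a quick numerical sanity check on that identity, here's a NumPy sketch (the function name and test values are mine, not the paper's):

```python
import numpy as np

def cross_matrix(a):
    # Skew-symmetric matrix a_x such that cross_matrix(a) @ b == np.cross(a, b)
    return np.array([[   0., -a[2],  a[1]],
                     [ a[2],    0., -a[0]],
                     [-a[1],  a[0],    0.]])

a = np.array([1.0, -2.0, 3.0])
ax = cross_matrix(a)
# identity from the update: a_x a_x' = ||a||^2 I - a a'
assert np.allclose(ax @ ax.T, a @ a * np.eye(3) - np.outer(a, a))

# and the quadratic form c' (d_x d_x') c reproduces ||c x d||^2
c, d = np.array([0.5, 1.0, -1.5]), np.array([2.0, 0.0, 1.0])
dx = cross_matrix(d)
assert np.isclose(c @ dx @ dx.T @ c, np.linalg.norm(np.cross(c, d))**2)
```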

And then it’s working. Here’re some modes of a cow:

### Linksys router + Motorola Surfboard + Time Warner needs reset after power outage

Saturday, July 26th, 2014

Our vacuum keeps blowing the fuse in our apartment, and each time the internet goes down. I haven't tracked down why this happens, but here's my solution after it's happened a couple of times.

1. Open http://192.168.1.1/WanMAC.asp (the router's configuration page) and note down the MAC address currently being cloned
2. Unplug power to the router and modem and wait 15 seconds
3. Plug the router and modem back in
4. Unplug the ethernet cable from the Linksys router
5. Connect the original computer (the one with the MAC address above) directly to the modem using ethernet and check that some page loads
6. Restore the Linksys to factory settings using the little button on the back
7. Connect to the restored wireless router (its network name should be “linksys”)
8. Open http://192.168.1.1/WanMAC.asp and enable MAC address cloning with the previously noted address
9. Open http://192.168.1.1/Wireless_Basic.asp and restore the wireless network name exactly
10. Reconnect to the wireless router using the new name
11. Open http://192.168.1.1/WL_WPATable.asp and set up the password again exactly: choose WEP, then type the password as Key 1
12. Once again reconnect to the wireless router using the new name; if the name and password match the old ones, your machine should treat it as the network it already knew

### Inversion of matrix made of diagonal blocks

Sunday, July 20th, 2014

Recently I needed to invert (a.k.a. solve against) a large matrix with a special form. It was a block matrix where each block was diagonal:

M = [M11 M12 ... M1n
     M21 ...
     ...
     Mn1 Mn2 ... Mnn];


where each Mij was a diagonal matrix. In my case Mij=Mji, though I won’t rely on this for the following.

Note that this is not the same as a “block diagonal matrix”, though it can be rearranged into such a matrix by permuting rows and columns.

The inverse of this kind of matrix has the same sparsity pattern. I can find it using a recursive application of the blockwise matrix inversion formula.
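One level of that recursion, written out in NumPy for concreteness (the block size and values here are made up for the sketch; diagonal dominance just guarantees everything inverts):

```python
import numpy as np

# One step of blockwise inversion on a 2x2 grid of diagonal blocks.
rng = np.random.default_rng(0)
k = 4
A = np.diag(rng.uniform(3, 4, k))
B = np.diag(rng.uniform(0, 1, k))
C = np.diag(rng.uniform(0, 1, k))
D = np.diag(rng.uniform(3, 4, k))
M = np.block([[A, B], [C, D]])

Ainv = np.diag(1.0 / np.diag(A))   # inverting a diagonal block is entrywise
S = D - C @ Ainv @ B               # Schur complement: again diagonal
Sinv = np.diag(1.0 / np.diag(S))
N = np.block([[Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
              [-Sinv @ C @ Ainv,                   Sinv]])
assert np.allclose(N, np.linalg.inv(M))
```

Every intermediate product of diagonal matrices stays diagonal, which is why the inverse keeps the sparsity pattern.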

Here’s a little function to do exactly that:

function N = diag_blk_inv(M,k)
  % DIAG_BLK_INV Invert a block matrix made out of square diagonal blocks. M
  % should be of the form:
  %
  %   M = [M11 M12 ... M1k
  %        M21 ...
  %        ...
  %        Mk1 Mk2 ... Mkk];
  %
  % where each Mij is an n-by-n diagonal matrix.
  %
  % Inputs:
  %   M  n*k by n*k matrix made out of diagonal blocks
  %   k  number of blocks along each dimension
  % Outputs:
  %   N  n*k by n*k matrix inverse of M
  %

  % Base case: a single diagonal block inverts entrywise
  switch k
  case 1
    assert(isdiag(M));
    N = diag(sparse(1./diag(M)));
    return;
  end

  assert(size(M,1)==size(M,2),'Must be square');
  assert(rem(size(M,1),k)==0,'Must be divisible by k');
  n = size(M,1)/k;
  % Extract 4 blocks: leading (k-1)-by-(k-1) grid A, last block column B,
  % last block row C, and bottom-right diagonal block D
  A = M(0*n+(1:(k-1)*n),0*n+(1:(k-1)*n));
  B = M(0*n+(1:(k-1)*n),(k-1)*n+(1:n));
  C = M((k-1)*n+(1:n),0*n+(1:(k-1)*n));
  D = M((k-1)*n+(1:n),(k-1)*n+(1:n));
  assert(isdiag(D));
  % https://en.wikipedia.org/wiki/Invertible_matrix#Blockwise_inversion
  Ainv = diag_blk_inv(A,k-1);
  % Schur complement (diagonal, since products of diagonals are diagonal)
  S = (D-C*Ainv*B);
  assert(isdiag(S));
  Sinv = diag_blk_inv(S,1);
  N = [Ainv + Ainv*B*Sinv*C*Ainv -Ainv*B*Sinv; ...
       -Sinv*C*Ainv               Sinv];
end


Update: No doubt someone will see that I'm computing the inverse explicitly and call me a heretic. Actually, in the case of diagonal blocks there's some room for argument about why one might want to do this. In terms of efficiency, this is orders of magnitude faster than calling inv(M)*B or M\B. However, MATLAB's \ doesn't seem to figure out that it could use a Cholesky decomposition. If I invoke one explicitly, L = chol(M); L\(L'\B) is roughly the same speed as diag_blk_inv(M,k)*B: both are fast.

Update: To be my own policeman: it probably is safer to use chol in this case. Otherwise, in general, I would need to check the scaling of M. It's unfortunate, though, because a priori I don't have any reason to believe chol will see the special structure of this matrix.

### Interleave rows of single matrix

Friday, July 18th, 2014

Dealing with xyz coordinates in linear algebra for computer graphics, there are two predominant styles of concatenating a list of many positions/vectors/matrices. The first groups common coordinates together. For example, a list of mesh vertex coordinates may look like:

V = [x1
     x2
     ...
     xk
     y1
     y2
     ...
     yk
     z1
     z2
     ...
     zk];


The other style groups common objects together, listing all coordinates in a stream. For our vertex positions this would look like:

V = [x1
     y1
     z1
     x2
     y2
     z2
     ...
     xk
     yk
     zk];


In OpenGL and array-language parlance, the first option has a data-access stride of 1 when pulling x-coordinates, but the coordinates of a single position are separated by a stride of k. The second option is the reverse: consecutive x-coordinates are separated by a stride of 3 (the number of coordinates), while each position vector is contiguous (stride 1). Hence, the two styles each have appropriate use cases depending on the data access pattern.
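The strides are easy to inspect in an array language. Here's a NumPy sketch (variable names are mine) with k = 4 points:

```python
import numpy as np

k = 4
# style 1: all x's, then all y's, then all z's
planar = np.arange(3 * k, dtype=np.float64).reshape(3, k)
# style 2: x1 y1 z1 x2 y2 z2 ...
interleaved = planar.T.copy()

# strides between consecutive x-coordinates, in elements (NumPy reports bytes)
x_stride_planar = planar.strides[1] // planar.itemsize                 # 1
x_stride_interleaved = interleaved.strides[0] // interleaved.itemsize  # 3
assert (x_stride_planar, x_stride_interleaved) == (1, 3)
```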

This gets a bit more complicated if we're not just storing a scalar in each row, but an n-vector.

In MATLAB, it's very easy to convert between one representation and the other. Suppose you have a k*m-by-n matrix A stored in the first format above. We can rearrange it to group all common objects together (forming a stack of k m-by-n matrices, here with m = 2) with:

A = [ 1  2  3; ...
      4  5  6; ...
      7  8  9; ...
     10 11 12; ...
     13 14 15; ...
     16 17 18; ...
     19 20 21; ...
     22 23 24];
n = 3;
B = reshape(permute(reshape(A,[],2,n),[3 2 1]),n,[])';


and the output is

B =

1     2     3
13    14    15
4     5     6
16    17    18
7     8     9
19    20    21
10    11    12
22    23    24


To convert back:

C = reshape(permute(reshape(B',n,2,[]),[3 2 1]),[],n);


and we get again:

C =

1     2     3
4     5     6
7     8     9
10    11    12
13    14    15
16    17    18
19    20    21
22    23    24


### libigl Version 1.0 beta release

Thursday, July 17th, 2014

Daniele and I have completed a working tutorial for libigl. The course at SGP 2014 also encouraged us to advertise a proper Version 1.0 Beta release of libigl.

This change marks our confidence that libigl is ready for wide use in geometry processing research. We still qualify the release with “Beta” to indicate that the library is a work in progress: documentation, for example, may not be as thorough as in commercial or better-established libraries. We're continuing to develop libigl and encourage you to git pull our repo and stay up to date.

Here's the table of contents of our tutorial. Each section comes with a working mini-example main.cpp which demonstrates a specific function or routine available in libigl:

### Small typo in “An Algorithm for the Construction of Intrinsic…”

Friday, July 11th, 2014

Upon implementing the intrinsic Delaunay Laplacian according to the helpful notes in “An Algorithm for the Construction of Intrinsic Delaunay Triangulations with Applications to Digital Geometry Processing” [Fisher et al. 2006], I noticed a small typo. Specifically, I'm following this version of the document. The issue concerns the computation of the length of the “flipped edge” f. Currently the last step in section 2.2 uses the Law of Cosines, writing that:

f = sqrt(b^2 + c^2 - 2 cos(alpha + delta))


This should be:

f = sqrt(b^2 + c^2 - 2 b c cos(alpha + delta))
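In code the fix is just the missing b*c factor; a small Python sketch (the function name and the Pythagorean sanity check are mine):

```python
import math

def flipped_edge_length(b, c, alpha, delta):
    # corrected law of cosines: f^2 = b^2 + c^2 - 2*b*c*cos(alpha + delta)
    return math.sqrt(b*b + c*c - 2.0*b*c*math.cos(alpha + delta))

# sanity check: when alpha + delta = pi/2 this reduces to Pythagoras
assert math.isclose(flipped_edge_length(3.0, 4.0, math.pi/4, math.pi/4), 5.0)
```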


### Pitfalls when implementing conformalized mean curvature flow

Tuesday, July 8th, 2014

Here’re some notes to avoid (common? obvious?) pitfalls when implementing conformalized mean curvature flow (“Can Mean-Curvature Flow Be Made Non-Singular?” [Kazhdan et al. 2012]).

Naively building a smoothing operation out of the identity matrix (data term) and a discrete pointwise Laplacian (smoothness term) would lead to iterations looking something like this:

L = cotmatrix(V,F);
I = speye(size(L));
U = V;
while true
  M = massmatrix(U,F,'barycentric');
  % note the parentheses: delta*M\L would parse as (delta*M)\L
  U = (I-delta*(M\L))\U;
  tsurf(F,U)
  drawnow;
end


This is very badly scaled, as we need to first invert the mass matrix (introducing numerical problems if our mesh is irregular) and then solve the system.

Algebraically we can fix these issues by multiplying both sides of the system by our mass matrix:

L = cotmatrix(V,F);
U = V;
while true
  M = massmatrix(U,F,'barycentric');
  U = (M-delta*L)\(M*U);
  tsurf(F,U)
  drawnow;
end


This solution might converge for very small delta values, but eventually the surface positions U will collapse toward a single point and our floating point representation will run out of precision. In our context, the problem is that entries in our diagonal mass matrix degenerate because triangles in our mesh are degenerating.

So for numerical reasons we must normalize the positions at every step. There are various ways we could do this; here I show normalizing the surface to have unit surface area:

L = cotmatrix(V,F);
U = V;
while true
  M = massmatrix(U,F,'barycentric');
  U = (M-delta*L)\(M*U);
  U = U/sqrt(sum(doublearea(U,F))*0.5);
  tsurf(F,U)
  drawnow;
end


This is better, but it might still only converge for small delta values and even then eventually diverge. The problem this time is a bit more subtle. Our mesh has unit area, but it is “floating” in space; importantly, it's floating away from the origin. Thus, to store the coordinates, we waste precision on each vertex by also storing this global translation, and we eventually run into the same problem as before. So we should recenter the mesh around the origin at each step. Again, there are many ways to do this; here I center on the centroid:
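The precision argument is easy to see numerically; here's a NumPy sketch (single precision, to exaggerate the effect, and the example magnitudes are mine):

```python
import numpy as np

# Gap between adjacent representable floats near the origin vs far from it:
near_origin = np.spacing(np.float32(0.001))   # coordinate of a centered mesh
far_away    = np.spacing(np.float32(1000.0))  # same coordinate, translated far away
# the farther the surface floats from the origin, the coarser the grid of
# representable coordinates becomes
assert far_away > 1e4 * near_origin
```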

L = cotmatrix(V,F);
U = V;
while true
  M = massmatrix(U,F,'barycentric');
  U = (M-delta*L)\(M*U);
  U = U/sqrt(sum(doublearea(U,F))*0.5);
  U = bsxfun(@minus,U,centroid(U,F));
  tsurf(F,U)
  drawnow;
end


Depending on the quality of the original mesh we should now be able to take quite huge time steps and still see convergence (e.g. delta = 1e12!).

Important: Normalizing the surface area and recentering at the origin are necessary for numerics, not just visualization.

Note: Convergence means that our mesh flows to the sphere. However, the mesh may still slide around on the sphere: the flow permits any Möbius transformation. Eventually vertices may group up at a pole of the sphere and our mass matrix will degenerate.