Archive for January, 2015

Abbreviate long strings with dots using sed

Saturday, January 24th, 2015

My bash prompt lists the current directory name, and I was bothered that a very long name would take up the whole line. Here’s a short sed command which will abbreviate any line of 5+3+5=13 or more characters, replacing the middle part with three dots:

echo \
"a very long directory name which will take up a whole line
short name
perfect name
just big name" | \
sed -e "s/\(.\{5\}\).\{3,\}\(.\{5,\}\)/\1...\2/"


a ver... line
short name
perfect name
just ... name
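For what it’s worth, the same abbreviation is easy to mimic outside of sed; here’s a Python sketch of the same pattern (the function name and keyword parameters are my own):

```python
import re

def abbreviate(s, head=5, mid=3, tail=5):
    # Keep the first `head` characters and (because the middle is greedy)
    # the last `tail` characters, replacing a middle of at least `mid`
    # characters with three dots; shorter strings pass through unchanged.
    pattern = rf"(.{{{head}}}).{{{mid},}}(.{{{tail},}})"
    return re.sub(pattern, r"\1...\2", s)
```

As with the sed version, a string needs at least head+mid+tail characters before anything is replaced.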

What happens when you force quit WindowServer

Sunday, January 18th, 2015

It triggers a hard logout.

Comma initialize a const Eigen matrix

Wednesday, January 14th, 2015

I’ve forgotten this a couple of times. Here’s how to initialize an Eigen matrix’s entries while keeping it const and avoiding auxiliary variables:

const Eigen::MatrixXd A = (Eigen::MatrixXd(2,2) << 1,2,3,4).finished();

Drop-in replacement for MATLAB’s graphconncomp from the bioinformatics toolbox

Sunday, January 11th, 2015

One of my colleagues could not run my MATLAB script because it called graphconncomp. This function computes the connected components of a graph represented by an adjacency matrix. Strangely, it is not a built-in function of the standard MATLAB installation. Rather, it’s included in the bioinformatics toolbox, which my colleague’s university did not purchase for him. This was frustrating, so I wrote my own drop-in replacement. Find it in my gptoolbox:

function [S,C] = conncomp(G)
  % CONNCOMP Drop-in replacement for graphconncomp.m from the bioinformatics
  % toolbox. Given an n by n adjacency matrix G, identify the number S of
  % connected components and a labeling C of the vertices. This is also an
  % order of magnitude faster.
  %
  % [S,C] = conncomp(G)
  %
  % Inputs:
  %   G  n by n adjacency matrix
  % Outputs:
  %   S  scalar number of connected components
  %   C  1 by n vector of connected component ids
  [p,q,r] = dmperm(G+speye(size(G)));
  S = numel(r)-1;
  C = cumsum(full(sparse(1,r(1:end-1),1,1,size(G,1))));
  C(p) = C;
end

As a pleasant surprise, my implementation is also an order of magnitude faster than graphconncomp. On a 4,000,000 by 4,000,000 sparse adjacency matrix, graphconncomp takes 12 seconds and my conncomp using dmperm takes 2 seconds.
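Outside of MATLAB, SciPy ships an equivalent built on the same sparse machinery; here’s a minimal sketch (the toy adjacency matrix is mine, and the labels in C are 0-based rather than MATLAB’s 1-based):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Toy undirected graph with two components: {0,1,2} and {3,4}.
rows = [0, 1, 1, 2, 3, 4]
cols = [1, 0, 2, 1, 4, 3]
G = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(5, 5))

# S is the number of components; C labels each vertex with its component.
S, C = connected_components(G, directed=False)
```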

MATLAB cellfun lengths of all cells gotcha

Sunday, January 11th, 2015

This will be a hard one to remember because it doesn’t make much sense at a high level. Suppose you have a long cell array of vectors and you’d like to collect the lengths of each vector. For the sake of a running example, you could use:

M = rand(10000000,1); % a long column vector to chop into pieces
R = rand(1000000,1);
R = R./sum(R);
lengths = floor(R*numel(M));
lengths = [lengths;numel(M)-sum(lengths)];
C = mat2cell(M,lengths,1);

Then the following all equivalently compute the lengths of each vector in the cell array:

lengths = cellfun('length',C);
lengths = cellfun(@length,C);
lengths = cellfun(@(x)length(x),C);

The times I see for these are strikingly different:

Elapsed time is 0.022611 seconds.
Elapsed time is 0.817478 seconds.
Elapsed time is 10.252154 seconds.

The first uses a hard-coded function inside the cellfun mex file. The second uses a function handle to a built-in function, and the last uses an anonymous function wrapping the built-in function.
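This gotcha has a loose analogue in other languages, too; for instance, in Python a builtin passed directly to map avoids the extra Python-level call frame that an anonymous wrapper adds, while producing identical results (my example, not from the post):

```python
# Lengths of a list of lists, once via the builtin and once via a
# lambda wrapper around it; the answers match, but the wrapper pays
# an extra function call per element.
data = [[0] * n for n in range(1000)]

direct = list(map(len, data))                # like cellfun(@length,C)
wrapped = list(map(lambda x: len(x), data))  # like cellfun(@(x)length(x),C)
```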

As rigid as possible gradient descent and Newton’s method

Monday, January 5th, 2015

For a long while now I’ve been curious to implement Newton’s method for minimizing as-rigid-as-possible (ARAP) energies. I finally had an excuse to put the pieces together. Find the arap_gradient.m and arap_hessian.m functions in my gptoolbox for matlab.

First, to see what gradient descent looks like:

arap knight gradient descent

The red mesh is the original for reference, and the blue begins distorted and is restored via gradient descent. The iterations are simply:

while true
  [G,E] = arap_gradient(V,F,U);
  U = U - delta_t * G;
end

Convergence is quite slow (I’m displaying every 100 iterations!). I’ve even increased delta_t as much as I can without hitting instabilities.

In comparison, here’s Newton’s method:

arap knight Newton's method

Not surprisingly Newton’s method is much faster (done in 100 iterations). Here’s what the iterations look like:

while true
  [G,E,R] = arap_gradient(V,F,U);
  H = arap_hessian(V,F,'EvaluationPoint',U,'Rotations',R);
  U = U - delta_t * reshape((H+mu*I) \ G(:),size(U));
end

where mu ≈ 1e-6 is a small number and I an appropriately sized identity matrix, accounting for the instability of the Hessian. I guess this makes it Levenberg-Marquardt, though I’m not updating mu.

For those interested, the paper “A Simple Geometric Model for Elastic Deformations” [Chao et al. 2010] has derivations for both the gradient and Hessian, though I find their mathematical conventions difficult to parse.

The gradient is straightforward if you’ve already implemented the popular “local-global” solver approach. Just conduct your local step (find the best-fit rotations R) and build (but don’t solve) your Poisson equation A X = B. This equation comes from minimizing the ARAP energy with R held fixed: E(X) = 1/2 X’ A X – X’ B. So the gradient at X with the best-fit R is A X – B.
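That last identity — the gradient of E(X) = 1/2 X’ A X – X’ B with symmetric A is A X – B — is easy to sanity-check numerically; here’s a NumPy sketch with a random symmetric A standing in for the ARAP system matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
A = A + A.T                        # symmetric, like the Poisson system matrix
B = rng.standard_normal(n)
X = rng.standard_normal(n)

E = lambda X: 0.5 * X @ A @ X - X @ B
G = A @ X - B                      # claimed gradient

# Central finite difference of the first coordinate should agree with G[0].
h = 1e-6
e0 = np.zeros(n)
e0[0] = 1.0
fd = (E(X + h * e0) - E(X - h * e0)) / (2 * h)
```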

The Hessian takes a bit more work. I prefer the derivation in “Shape Decomposition Using Modal Analysis” [Huang et al. 2009]. Besides being mindful of a small typo, one must adapt their derivation a bit. They derive the Hessian assuming evaluation at the rest state (where all best-fit rotations are identities). To account for this, build the covariance matrices using the evaluation-point vertex positions and then left-multiply them by the same best-fit rotations used when computing the gradient.

A friend also showed me a derivation of the Hessian of the (more or less equivalent) co-rotation elasticity energy in “Material Point Method for Snow Simulation” [Stomakhin et al. 2013]. The procedure looks very well explained though I haven’t given it enough time to parse the dense tensor/index notation yet.

“Blockwise”-Matrix multiply each 2d slice in 3d array

Monday, January 5th, 2015

I’m sure I’ve done this before and pretty sure I’ve also posted it here before, but I couldn’t find it.

Suppose you have two 3d arrays:

A an M by S by K array formed for example by A = cat(3,A1,A2,A3, ... ,AK)
B an S by N by K array formed for example by B = cat(3,B1,B2,B3, ... ,BK)

and you’d like to compute a new 3d array C such that

C an M by N by K array as if formed by C = cat(3, A1 * B1, A2 * B2, ..., AK * BK)

where X*Y is the usual 2d matrix multiply. In my case, K >> M,S,N.

In MATLAB, a first attempt might be a single for-loop over the last dimension of size K:

C = zeros(M,N,K);
for k = 1:K
  C(:,:,k) = A(:,:,k) * B(:,:,k);
end

MATLAB’s notorious for-loops have gotten better in the last couple of years, but this is still slow.

Better is to unroll the 2D loop and take advantage of vectorized, elementwise vector-vector multiplication:

C = zeros(M,N,K);
for i = 1:M
  for j = 1:S
    for k = 1:N
      C(i,k,:) = C(i,k,:) + A(i,j,:).*B(j,k,:);
    end
  end
end

Slightly better still is to use bsxfun to compute vectorized outer products in a single for-loop:

C = zeros(M,N,K);
for j = 1:size(A,2)
  C = C + bsxfun(@times,A(:,j,:),B(j,:,:));
end

This solution works especially well if S is small compared to the other dimensions.

For (M=S=N=3, K=10000000), these take 60 secs, 13 secs, and 9 secs respectively.
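For comparison, NumPy can express the same blockwise product in a single einsum call (shapes follow the post: A is M by S by K, B is S by N by K; the sizes here are my own):

```python
import numpy as np

M, S, N, K = 3, 3, 3, 1000
rng = np.random.default_rng(0)
A = rng.standard_normal((M, S, K))
B = rng.standard_normal((S, N, K))

# Contract over the shared dimension S for every slice k at once.
C = np.einsum('isk,snk->ink', A, B)

# Equivalent to the single for-loop over the last dimension.
C_loop = np.stack([A[:, :, k] @ B[:, :, k] for k in range(K)], axis=2)
```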