## Memory gotcha in Eigen

August 26th, 2015

I recently ran into a frustrating memory “gotcha” involving Eigen. The following code compiles:

template <typename Derived>
Eigen::PlainObjectBase<Derived> leaky(
  const Eigen::PlainObjectBase<Derived> & A)
{
  Derived B = A;
  return B;
}

int main(int argc, char * argv[])
{
  Eigen::MatrixXd A(10000,1);
  Eigen::MatrixXd B = leaky(A);
}


but crashes at runtime with a memory error (without any warnings/asserts from Eigen):

./main(33789,0x7fff7bb2d300) malloc: *** error for object 0x7f9ed188da00: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug


The problem seems to be the cast to the base class happening in the return. Changing the return type of leaky to Derived:

template <typename Derived>
Derived leaky(
  const Eigen::PlainObjectBase<Derived> & A)


fixes the issue. Or it could be fixed by changing the type of B inside leaky to Eigen::PlainObjectBase<Derived>.

  Eigen::PlainObjectBase<Derived> B = A;


## Bounded biharmonic weights demo binary

August 13th, 2015

I’ve finally gotten around to creating a stand-alone Mac OS X app for the prototype I use to give demos of our bounded biharmonic weights. The only catch is that you’ll have to grab a mosek (free academic trial) license and put it in the right place.

Unfortunately, due to licensing reasons I cannot release the source code for this app, but the code for bounded biharmonic weights is in my matlab gptoolbox and in our C++ libigl.

## Slice, rearrange and extract output arguments of matlab function

August 6th, 2015

It is a bit annoying how matlab handles anonymous functions and output arguments. Suppose there’s a function called solver that expects to be passed function handles fval and grad that compute the value and the gradient of some function at a point X respectively. Then you’d call solver like this:

Z = solver(@fval,@grad);


Well, suppose I already have a function grad_fval that computes the gradient and function value at some point X, as in:

[G,f] = grad_fval(X);


It’s easy to create an anonymous function to compute the gradient:

grad = @(X) grad_fval(X);


But since anonymous functions can only contain one statement, whose first output becomes the anonymous function’s output, it’s impossible to extract the second output argument from grad_fval. I guess the official way to do this is to create a file on my hard drive in the current path, say called fval.m, with the contents:

function f = fval(X)
  [~,f] = grad_fval(X);
end


This is pretty silly.

Instead, I’ve created a one-off utility function slice_argout.m which is similar to a lot of get_the_nth_output functions found online. Only here I use the matlab-style slicing to enable full manipulation of the output arguments. In the case above I’d use:

fval = @(X) slice_argout(@grad_fval,[2],X);


If I wanted to reverse the output arguments I could use:

fval_grad = @(X) slice_argout(@grad_fval,[2 1],X);


Here’re the contents of slice_argout.m:

function varargout = slice_argout(func,order,varargin)
  % SLICE_ARGOUT Slice (permute, reorder, extract blocks from, etc.) output
  % arguments of a given function.
  %
  % [O1,O2,O3,...] = slice_argout(func,order,I1,I2,I3, ...)
  %
  % Calling this is effectively the same as calling:
  %
  %     [T1,T2,T3,...] = func(I1,I2,I3,...)
  %
  % and then assigning O1 = T{order(1)}, O2 = T{order(2)}, etc.
  %
  if islogical(order)
    n = numel(order);
  else
    n = max(order(:));
  end
  varargout = cell(n,1);
  [varargout{:}] = func(varargin{:});
  varargout = varargout(order);
end
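
For comparison, the same utility is short in Python, too. Here’s a hypothetical analogue (my own sketch, not part of the original post): Python functions return tuples, so matlab’s varargout machinery isn’t needed, but the MATLAB-style 1-indexed order list carries over directly.

```python
def slice_argout(func, order, *args):
    """Call func(*args) and return its outputs reordered by the
    1-indexed (MATLAB-style) index list `order`."""
    outs = func(*args)
    picked = tuple(outs[i - 1] for i in order)
    return picked[0] if len(picked) == 1 else picked

# e.g. a gradient-and-value function for f(x) = x^2
def grad_fval(x):
    return 2.0 * x, x * x   # (gradient, value)

fval = lambda x: slice_argout(grad_fval, [2], x)
# fval(3.0) returns 9.0
fval_grad = lambda x: slice_argout(grad_fval, [2, 1], x)
# fval_grad(3.0) returns (9.0, 6.0)
```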


## Re-typeset MathJax while editing a wordpress post

August 5th, 2015

Here’s what I added to the wp-admin/admin-header.php file of my wordpress site to re-typeset the preview of the latex-math as I type into the post editor:

<script>
window.addEventListener("load", function()
{
  // http://stackoverflow.com/a/4029518/148668
  var idleTime = 0;
  // Increment every 100 milliseconds
  setInterval(timerIncrement, 100);
  function timerIncrement()
  {
    idleTime += 100;
    if(idleTime > 1000)
    {
      idleTime = 0;
      if(MathJax.Hub.queue.running == 0 && MathJax.Hub.queue.pending == 0)
      {
        MathJax.Hub.Typeset();
      }
    }
  }

  document.getElementById("content").onkeypress = function(e)
  {
    idleTime = 0;
  };
}, false);
</script>


## How does Galerkin multigrid scale for irregular grids?

August 4th, 2015

There are different flavors of the multigrid method. Let’s consider the geometric subclass and within that the “Galerkin” or projection-style multigrid method. This method is favorable because it only requires discretization at the finest level and prolongation and restriction between levels. This means, given a problem A1 x1 = B1 on the finest mesh, we don’t need to know how to create an analogous problem A2 x2 = B2 on the second finest mesh and so on. Instead, we define A2 = R A1 P where R is the #coarse vertices by #fine vertices restriction operator taking fine mesh values to coarse mesh values and P is the #fine vertices by #coarse vertices prolongation operator. A Galerkin multigrid V-cycle looks something like this:

• relax current guess x1 according to A1 x1 = B1
• compute residual r1 = B1 - A1 x1
• restrict residual to coarse mesh r2 = R r1
• restrict system matrix A2 = R A1 P
• solve (recursively) for update A2 u2 = r2
• prolong update and add to guess u1 = P u2, x1 += u1
• relax current guess x1 according to A1 x1 = B1

Often (for good reasons), we take the restriction operator to be the transpose of the prolongation operator R = P'. When we’re working with nested meshes, it’s natural to use the linear interpolation operator as P.
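
As a concrete (if simplistic) sketch of the cycle above, here is a two-grid version in numpy for a 1D Poisson problem on nested grids, with damped Jacobi as the relaxation. The sizes, iteration counts, and damping factor are illustrative assumptions, not prescriptions:

```python
import numpy as np

def poisson_1d(n):
    # Standard 3-point Laplacian stencil on n interior nodes (Dirichlet)
    return 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def linear_prolongation(n_coarse):
    # Linear interpolation from n_coarse to 2*n_coarse + 1 interior nodes
    n_fine = 2*n_coarse + 1
    P = np.zeros((n_fine, n_coarse))
    for j in range(n_coarse):
        i = 2*j + 1           # coarse node j sits atop fine node 2j+1
        P[i, j] = 1.0
        P[i-1, j] += 0.5      # fine nodes on coarse "edges" average their
        P[i+1, j] += 0.5      # two bracketing coarse nodes
    return P

def relax(A1, x1, b1, iters=3, omega=2.0/3.0):
    # Damped Jacobi relaxation
    D = np.diag(A1)
    for _ in range(iters):
        x1 = x1 + omega * (b1 - A1 @ x1) / D
    return x1

def two_grid_vcycle(A1, x1, b1, P):
    R = P.T                       # restriction = prolongation transposed
    x1 = relax(A1, x1, b1)        # relax current guess
    r2 = R @ (b1 - A1 @ x1)       # compute and restrict residual
    A2 = R @ A1 @ P               # restrict system matrix (Galerkin)
    u2 = np.linalg.solve(A2, r2)  # coarse solve (exact at the bottom level)
    x1 = x1 + P @ u2              # prolong update and add to guess
    x1 = relax(A1, x1, b1)        # relax again
    return x1

# e.g. repeated cycles on a 63-node problem drive the residual down quickly
A1, P = poisson_1d(63), linear_prolongation(31)
b1 = np.ones(63)
x1 = np.zeros(63)
for _ in range(10):
    x1 = two_grid_vcycle(A1, x1, b1, P)
```

A full V-cycle recurses on the coarse solve instead of solving `A2 u2 = r2` exactly, but the two-grid version already shows each step of the list above.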

This V-cycle has a variational interpretation if A1 x1 = B1 comes from a quadratic energy minimization problem:

min  ½ x1' A1 x1 - x1' B1
x1


Suppose we freeze our current guess x1 and look for the best update u1, we’d change the problem to

min  ½ (x1 + u1)' A1 (x1 + u1) - (x1+u1)' B1
u1


Rearranging terms according to u1 this is

min  ½ u1' A1 u1 - u1' (B1 - A1 x1) + constants
u1


Now we recognize our residual r1 = B1 - A1 x1. Let’s substitute that in:

min  ½ u1' A1 u1 - u1' r1
u1


If we were to optimize this for u1 we’d get a system of equations:

A1 u1 = r1


Instead, let’s make use of our multires hierarchy. The coarse mesh spans a smaller space of functions than the finer mesh. We can take any function living on the coarse mesh u2 and interpolate it to give us a function on the fine mesh u1 = P u2, thus we could consider restricting the optimization search space of u1 functions to those we can create via P u2:

min  ½ (P u2)' A1 (P u2) - (P u2)' r1
u2


rearranging terms and substituting R = P' we get:

min  ½ u2' R A1 P u2 - u2' R r1
u2


finally optimizing this for u2 we get a system of equations:

R A1 P u2 = R r1


or

A2 u2 = r2


This Galerkin-style multigrid is great because we don’t need to worry about re-discretizing the system matrix (e.g. using FEM on the coarse mesh) or worry about redefining boundary conditions on the coarse mesh. The projection takes care of everything.
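
The derivation above is easy to sanity-check numerically. Below is a small numpy sketch with a random SPD matrix standing in for A1 and a random matrix standing in for P (both assumptions purely for illustration): the coarse solve of R A1 P u2 = R r1 really does minimize the restricted energy over the range of P.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fine, n_coarse = 12, 5

M = rng.standard_normal((n_fine, n_fine))
A1 = M @ M.T + n_fine * np.eye(n_fine)       # random SPD "fine" matrix
P = rng.standard_normal((n_fine, n_coarse))  # random stand-in prolongation
r1 = rng.standard_normal(n_fine)

R = P.T
u2 = np.linalg.solve(R @ A1 @ P, R @ r1)     # Galerkin coarse solve

def energy(u1):
    # ½ u1' A1 u1 - u1' r1, the objective of the restricted minimization
    return 0.5 * u1 @ A1 @ u1 - u1 @ r1

# P u2 beats (or ties) every other candidate update in the range of P
for _ in range(100):
    v2 = u2 + 0.1 * rng.standard_normal(n_coarse)
    assert energy(P @ u2) <= energy(P @ v2) + 1e-9
```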

But there’s a catch.

For regular grids with carefully chosen decimation patterns (#vertices on a side is a power of 2 plus 1), the projection matrix will be well behaved. Each row, corresponding to a fine mesh vertex, will involve just 1 coarse mesh vertex if the fine vertex lies perfectly atop a coarse mesh vertex, and will involve the nearest 2 coarse mesh vertices if it lies on a coarse mesh edge. Then examining a row of A2 = P' A1 P we can read off the change in the stencil. Let’s say A1 is a one-ring Laplacian on the fine mesh. Hitting it on the left with P' forms the matrix P' A1, a row of which corresponds to a coarse mesh vertex and the columns to the fine mesh vertices it “touches”. Since coarse vertices lie exactly at fine mesh vertices, this simply connects each coarse mesh vertex to its “half-radius one-ring”, i.e. the fine mesh vertices lying on the coarse mesh edges around it. Finally, hitting this matrix with P on the right creates A2 = P' A1 P, connecting coarse vertices to coarse vertices. Since each coarse vertex was connected to its “half-radius one-ring” via the fine mesh, in P' A1 P each coarse mesh vertex is now connected to its true one-ring neighbors: just like if we built the Laplacian directly on the coarse mesh. The sparsity pattern is maintained.

For irregular grids this is unfortunately not the case. Each row in P will contain, in general, d+1 entries. When we examine P' A1, we see that we smear the stencil around the coarse mesh to include the “half-radius-one-rings” from d+1 nearest fine-mesh vertices. This effect is repeated when finally creating P' A1 P. We see the sparsity worsening with each level.
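
A crude 1D numpy sketch (with hypothetical random point sets, not the 2D/3D experiments from this post) already shows the fill-in phenomenon: projecting a tridiagonal Laplacian through non-nested linear prolongation operators grows the average non-zeros per row past the 3 a rediscretized 1D Laplacian would have.

```python
import numpy as np

def interp_matrix(x_fine, x_coarse):
    # Linear interpolation weights: each fine node gets (at most) 2 entries,
    # from the coarse nodes bracketing it -- the d+1 entries for d = 1
    P = np.zeros((len(x_fine), len(x_coarse)))
    for i, x in enumerate(x_fine):
        j = np.searchsorted(x_coarse, x)
        j = min(max(j, 1), len(x_coarse) - 1)
        t = (x - x_coarse[j-1]) / (x_coarse[j] - x_coarse[j-1])
        P[i, j-1] = 1 - t
        P[i, j] = t
    return P

def laplacian(n):
    # Tridiagonal 1D Laplacian: 3 non-zeros per (interior) row
    return 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

rng = np.random.default_rng(1)
n = 257
x = np.linspace(0.0, 1.0, n)
A = laplacian(n)
nnz_per_row = [np.count_nonzero(np.abs(A) > 1e-12) / n]
while n > 8:
    n_c = n // 2
    # irregular (non-nested) coarse nodes: endpoints plus random interior
    x_c = np.concatenate(([0.0], np.sort(rng.uniform(0, 1, n_c - 2)), [1.0]))
    P = interp_matrix(x, x_c)
    A = P.T @ A @ P          # Galerkin projection; stencil smears outward
    x, n = x_c, n_c
    nnz_per_row.append(np.count_nonzero(np.abs(A) > 1e-12) / n)
print(nnz_per_row)           # average non-zeros per row at each level
```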

Here’s a plot of the number non-zeros per row of the projected system matrix for a multires hierarchy with 7 levels.

The orange lines show the average number of non-zeros per row for the cotangent Laplacian projected down different sequences of random irregular meshes of the unit square. You can see a rough 2× increase in the number of non-zeros in the beginning and then things taper off as the meshes get very coarse: meshes range from roughly 256² to 4² vertices. The blue lines show the average number of non-zeros per row for the cotangent Laplacian rediscretized (aka rebuilt from scratch) on each mesh: this is and should be roughly 7 for any triangle mesh (the slight decrease is due to the increased effect of the boundary on the average).

This doesn’t look so bad. Even at the third level (56² vertices) the number of non-zeros per row is 25: still very sparse.

Things get worse, much worse in 3D.

Already the average number of non-zeros per row of the 3D tet-mesh Laplacian is not neatly bounded by 7, though it tends to be around 16 for a “reasonable” mesh.

In this plot we see the number of non-zeros in the projected system matrix increasing by around a factor of 3 each level, peaking around 112 on the 17³ mesh. Notice that on the 9³ = 729 vertex grid the average number of non-zeros is 100. That’s a sparsity ratio of 13% or 72,900 non-zeros: not very sparse for such a small mesh. Especially considering that the rediscretized system matrix would have 12 non-zeros per row, a sparsity ratio of 1.6%, or only about 8,700 total non-zeros.

I didn’t have the patience to let this run on a larger initial fine-mesh, but you can imagine that the situation gets drastically worse. Galerkin multigrid fails to improve performance and gets choked by memory consumption on irregular 3D tetrahedral meshes using piecewise linear prolongation operators.


## Retrieve matlab plot colors

August 4th, 2015

I couldn’t find any documentation about where the plot line colors are stored or how to access them. Here’s a hacky way to retrieve them. Currently plot cycles through 7 colors:

p = plot(repmat(1:7,2,1));
plot_colors = reshape([p.Color],3,7)';


## Split a long mp3 audio file into 3 min files

July 29th, 2015

After some frustration trying to get mp3splt to work, I caved in and wrote a script to split apart a large audio file into many 3min chunks. Save this in mp3split.sh:

#!/bin/bash
big="$1"
duration_stamp=$(ffmpeg -i "$big" 2>&1 | grep Duration | sed 's/^.*Duration: *\([^ ,]*\),.*/\1/g')
title=$(ffmpeg -i "$big" 2>&1 | grep "title *:" | sed 's/^.*title *: *\(.*\)$/\1/g')
prefix=$(basename "$big" .mp3)
echo "$duration_stamp"
# get minutes as a raw integer number (rounded up)
mins=$(echo "$duration_stamp" | sed 's/\([0-9]*\):\([0-9]*\):\([0-9]*\)\.\([0-9]*\)/\1*60+\2+\3\/60+\4\/60\/100/g' | bc -l | python -c "import math; print int(math.ceil(float(raw_input())))")
ss="0"
count="1"
total_count=$(echo "$mins/3+1" | bc)
while [ "$ss" -lt "$mins" ]
do
  zcount=$(printf "%05d" $count)
  ss_hours=$(echo "$ss/60" | bc)
  ss_mins=$(echo "$ss%60" | bc)
  ss_stamp=$(printf "%02d:%02d:00" $ss_hours $ss_mins)
  ffmpeg -i "$big" -acodec copy -t 00:03:00 -ss $ss_stamp -metadata track="$count/$total_count" -metadata title="$title $zcount" "$prefix-$zcount.mp3"
  ss=$[$ss+3]
  count=$[$count+1]
done


Then execute mp3split.sh my-long-file.mp3. This will output a sequence of files:

my-long-file-00001.mp3
my-long-file-00002.mp3
my-long-file-00003.mp3
my-long-file-00004.mp3
...


Each will retain the metadata from the original file except that the file number will be appended to the track name and the track number will be set accordingly (i.e. this will work well for splitting enormous audiobook files into file lists that play in the correct sequence on an iPhone).

Note: mp3splt really seems like the right tool for this. It supposedly has fancy features like silence detection and presumably won’t reload the file for each new split.

## Remove annotations from a pdf

July 29th, 2015

I lost track of where I found this online and had to dig it out of my bash history:

perl -pi -e 's:/Annots \[[^]]+\]::g' input-and-output.pdf


## Libigl tutorial entry for physics upsampling with biharmonic coordinates

July 28th, 2015

I’ve added a tutorial entry describing biharmonic coordinates and linking to an example using it to run a “physics” simulation on a coarse mesh and upsample it to a high-resolution mesh:

Read more about it in our upcoming SIGGRAPH paper Linear Subspace Design for Real-Time Shape Deformation.

## Sanity check that constructing a parameter inline circumvents aliasing functions with Eigen

July 27th, 2015

Suppose I have a function with Eigen types as inputs and outputs:

void aliasing_function(
  const Eigen::MatrixXd & A,
  Eigen::MatrixXd & B)
{
  for(int i = 0;i<B.rows();i++)
  {
    B.row(i) = A.row(A.rows()-i-1);
  }
}


This function is dangerous despite the const on the input parameter A because if A and B reference the same place in memory, writing to B will also write onto what this function thinks is A. Usually this is undesirable and it’s called aliasing. For example consider calling with the same input as output:

aliasing_function(C,C);


Supposing that I can’t/won’t/refuse to change the implementation of aliasing_function, here’s proof that you can avoid aliasing problems by using the copy constructor of the Eigen type directly in the parameter (avoiding the explicitly named temporary variable, but still costing a copy probably). The call above is replaced with:

aliasing_function(Eigen::MatrixXd(C),C);


To have a tangible example:

int main()
{
  using namespace std;
  Eigen::MatrixXd C = (Eigen::MatrixXd(3,3)<<1,2,3,4,5,6,7,8,9).finished();
  cout<<"C: "<<endl<<C<<endl;
  aliasing_function(Eigen::MatrixXd(C),C);
  cout<<"C: "<<endl<<C<<endl;
  aliasing_function(C,C);
  cout<<"C: "<<endl<<C<<endl;
}


This outputs:

C:
1 2 3
4 5 6
7 8 9
C:
7 8 9
4 5 6
1 2 3
C:
1 2 3
4 5 6
1 2 3


The last print out shows the effect of aliasing.
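
This hazard isn’t specific to Eigen. As a sanity check of the same pattern, here’s a numpy analogue I’ve sketched for illustration (not code from the post), where an explicit copy plays the role of the inline copy constructor:

```python
import numpy as np

def aliasing_function(A, B):
    # Writes the rows of A into B in reverse order -- dangerous if A and B
    # reference the same memory
    for i in range(B.shape[0]):
        B[i, :] = A[A.shape[0] - i - 1, :]

C = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

aliasing_function(C.copy(), C)  # explicit copy circumvents aliasing
# C is now [[7,8,9],[4,5,6],[1,2,3]]: correctly reversed

aliasing_function(C, C)         # aliased: row 0 is overwritten before read
# C is now [[1,2,3],[4,5,6],[1,2,3]]: the aliasing bug
```

The final state of C reproduces exactly the last print-out above.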