## GLSL shaders using normal as vertex positions

November 14th, 2015

Every few months I need to relearn how to properly set up a GLSL shader with a vertex array object. This time I was running into a strange bug where my normals were being used as vertex positions. I had a simple pass-through vertex shader looking something like:

#version 330 core
in vec3 position;
in vec3 normal;
out vec3 frag_normal;
void main()
{
  // gl_Position is a vec4, so promote the vec3 position
  gl_Position = vec4(position, 1.0);
  frag_normal = normal;
}


And on the CPU side I was setting up my vertex attributes with:

glBindBuffer(GL_ARRAY_BUFFER,position_buffer_object);
glVertexAttribPointer(0,3, ... );
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER,normal_buffer_object);
glVertexAttribPointer(1,3, ... );
glEnableVertexAttribArray(1);


My folly was assuming that because I’d declared position and normal in that order in the vertex shader, they’d be bound to attribute ids 0 and 1 respectively. Not so! After some head-slamming I thought to query these ids using:

glGetAttribLocation(program_id,"position");
glGetAttribLocation(program_id,"normal");


And found that, for whatever reason, position was bound to 1 and normal was bound to 0. I tried hacks to coerce these back into the order I expected, and I could have simply hard-coded the swapped ids, but there appear to be two properly correct options for fixing this problem:

The obvious one is to use glGetAttribLocation when creating the vertex array object:

GLint position_loc = glGetAttribLocation(program_id,"position");
glBindBuffer(GL_ARRAY_BUFFER,position_buffer_object);
glVertexAttribPointer(position_loc,3, ... );
glEnableVertexAttribArray(position_loc);
...


I was a little bothered that this solution requires that I know which shader is going to be in use at the time of creating the vertex array object.

The opposite solution is to declare an explicit layout when writing the shader, so that the CPU side can safely assume it:

layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;


Now I can be sure that using ids 0 and 1 will correctly bind to position and normal respectively.

## Determine how much space is used by .git/.svn/.hg in a directory

November 12th, 2015

Here’s a nasty little bash one-liner to determine how much space is being “wasted” by .svn/, .git/, or .hg/ repos in your current directory:

du -k | sed -nE 's/^([0-9]*).*\.(svn|git|hg)$/\1/p' | awk '{s+=$1*1024} END {print s}' | awk '{ sum=$1 ; hum[1024**3]="Gb";hum[1024**2]="Mb";hum[1024]="Kb"; for (x=1024**3; x>=1024; x/=1024){ if (sum>=x) { printf "%.2f %s\n",sum/x,hum[x];break } }}'

## Remove prince annotation from pdf

November 10th, 2015

Here’s a little perl one-liner to remove the prince watermark note from a pdf:

perl -p -e '!$x && s:/Annots \[[0-9]+ 0 R [0-9]+ 0 R ?([^\]]+)\]:/Annots [\1]: && ($x=1)' input.pdf > output.pdf
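The same byte-level surgery can be sketched in Python (a hypothetical helper of my own, mirroring the perl substitution: drop the first two object references, presumably the watermark note, from the first /Annots array while keeping any remaining annotations):

```python
import re

def strip_prince_note(data: bytes) -> bytes:
    # Remove the first two "N 0 R" object references from the first /Annots
    # array in the raw PDF bytes, keeping the rest of the array intact.
    # Caveat (as noted below): editing raw bytes leaves the xref table
    # stale, which is likely why some viewers report the file as damaged.
    return re.sub(
        rb"/Annots \[[0-9]+ 0 R [0-9]+ 0 R ?([^\]]+)\]",
        rb"/Annots [\1]",
        data,
        count=1)

# Usage sketch:
# with open("input.pdf", "rb") as f:
#     data = f.read()
# with open("output.pdf", "wb") as f:
#     f.write(strip_prince_note(data))
```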


Update:

So, unfortunately the simple perl hack will “damage” the pdf. It seems that most viewers will ignore this, but I was alerted that a popular iPad reader, “GoodReader”, produces an ominous “This file is damaged” warning (though it then renders OK).

I couldn’t quite reverse engineer why, but here’s a temporary albeit heavy-handed fix. After running the perl script, also repair the pdf with ghostscript:

gs -o output.pdf  -sDEVICE=pdfwrite -dPDFSETTINGS=/prepress input.pdf


Note that output.pdf cannot be the same as input.pdf or it quietly creates an empty pdf instead.

## Conservative voxelization in gptoolbox

November 3rd, 2015

A while ago I implemented a function to voxelize a given triangle mesh in matlab as part of gptoolbox. Rather than use 3D-Bresenham-style rasterization, I needed a conservative voxelization: where every point (not just vertex) on the mesh is contained inside the voxelization. In other words, output all voxels that intersect the triangle mesh.
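As a toy illustration of what “conservative” means here (deliberately crude, and not gptoolbox’s actual implementation), here’s a Python sketch that over-approximates by marking every voxel overlapped by each triangle’s bounding box; a tighter version would swap in an exact triangle/box intersection test:

```python
import numpy as np

def conservative_voxels(V, F, origin, h, dims):
    # Crude conservative voxelization: for each triangle, mark all voxels
    # overlapped by its axis-aligned bounding box. This is a superset of
    # the voxels that actually intersect the triangle, so every point on
    # the mesh is guaranteed to be covered (conservative, just not tight).
    occ = np.zeros(dims, dtype=bool)
    dims = np.asarray(dims)
    for tri in F:
        P = V[tri]  # 3x3 matrix of this triangle's corner positions
        lo = np.floor((P.min(axis=0) - origin) / h).astype(int)
        hi = np.floor((P.max(axis=0) - origin) / h).astype(int)
        lo = np.clip(lo, 0, dims - 1)
        hi = np.clip(hi, 0, dims - 1)
        occ[lo[0]:hi[0] + 1, lo[1]:hi[1] + 1, lo[2]:hi[2] + 1] = True
    return occ
```

A triangle strictly inside one voxel marks exactly that voxel, while a triangle spanning the grid diagonal (over-)marks its whole bounding block, which is where an exact triangle/box test earns its keep.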

Here’s a little snippet to produce the following meshes:

for k = [10 20 40 80]
  % DV,Q: vertices and faces of the conservative voxelization at resolution k
  [DV,I] = remove_unreferenced(DV,Q);
  Q = I(Q);
  d = render_in_cage(V,F,DV,Q,'ColorIntersections',false);
  d.tc.LineWidth = 2;
end


## Venn diagram of multigrid methods

October 24th, 2015

Here’s a little Venn diagram I made to explain (some of) the variety of multigrid methods:

## Mesh decimation (aka simplification) in matlab

October 15th, 2015

I’d forgotten that I’d discovered that matlab has a built-in function for triangle mesh simplification: reducepatch. Here’s a little comparison between this method and the ones I’ve wrapped up in gptoolbox.

### Input mesh

Given an input mesh with vertices V and faces F (this elephant has 14732 faces):

### Built-in matlab reduce patch

We can decimate the mesh to 1000 faces using pure built-in matlab:

[mF,mV] = reducepatch(F,V,1000);


Notice that it’s a rather adaptive remeshing. There aren’t (m)any options for controlling reducepatch, just the 'fast' flag, but in this case it will produce an identical output.

### Libigl’s decimate

Libigl has a framework for edge-collapse decimation on manifold meshes and a default setting for regular meshing:

[iV,iF] = decimate_libigl(V,F,1000);


### CGAL’s decimation

I have a wrapper for CGAL’s decimation algorithms. These can result in a regular remeshing with 1000 faces:

[c1V,c1F] = decimate_cgal(V,F,1000/size(F,1));


[c2V,c2F] = decimate_cgal(V,F,1000/size(F,1),'Adaptive',true);


CGAL has a fairly general interface for an edge-collapser, but these are the only metrics provided by default from CGAL.

### Timing

Suppose my original elephant had 3.7 million faces instead of just 14K, how do these methods measure up when reducing to 1000 faces:

| Method | Time |
| --- | --- |
| reducepatch | 64.4s |
| libigl | 74.56s |
| cgal (regular) | 117.7s |
| qslim | 108.0s + 507.5s + 59.4s |

My qslim wrapper is currently very silly. It calls qslim as an executable, reads in the entire collapse log, and then conducts the collapse within matlab (the second time above is for reading the log, the third for conducting the collapse).

None of these methods guarantee that the result is self-intersection free. I’ve noticed that qslim and matlab will create non-manifold meshes from manifold inputs. The current libigl implementation should not create non-manifold output, but assumes the input is a closed surface. For casual decimation, it seems like matlab’s reducepatch and libigl’s decimate_libigl are the best bets for speedy adaptive and regular meshing respectively. Conveniently they also require the least in terms of dependencies.

## Dissertation Impact article in Computer Graphics and Applications

October 6th, 2015

I was asked to write an article in the Dissertation Impact series of IEEE Computer Graphics and Applications. In the first half of “Breathing Life into Shapes” I’ve tried to summarize my thesis for the general computer graphics audience. The second half is more exciting and contains my view of the future of real-time shape deformation research and the open problems I see ahead.

## Nested Cages project page

October 2nd, 2015

We’ve posted a project page for our upcoming SIGGRAPH Asia paper Nested Cages, a collaboration between Leonardo Sacht, Etienne Vouga and myself.

Abstract: Many tasks in geometry processing and physical simulation benefit from multiresolution hierarchies. One important characteristic across a variety of applications is that coarser layers strictly encage finer layers, nesting one another. Existing techniques such as surface mesh decimation, voxelization, or contouring distance level sets do not provide sufficient control over the quality of the output surfaces while maintaining strict nesting. We propose a solution that enables use of application-specific decimation and quality metrics. The method constructs each next-coarsest level of the hierarchy, using a sequence of decimation, flow, and contact-aware optimization steps. From coarse to fine, each layer then fully encages the next while retaining a snug fit. The method is applicable to a wide variety of shapes of complex geometry and topology. We demonstrate the effectiveness of our nested cages not only for multigrid solvers, but also for conservative collision detection, domain discretization for elastic simulation, and cage-based geometric modeling.

## Accompanying video for “Nested Cages”, SIGGRAPH Asia 2015

September 22nd, 2015

Here’s the accompanying video for the upcoming SIGGRAPH Asia 2015 paper “Nested Cages” that I’ve been working on with Leonardo Sacht and Etienne Vouga:

Abstract:
Many tasks in geometry processing and physical simulation benefit from multiresolution hierarchies. One important characteristic across a variety of applications is that coarser layers strictly encage finer layers, nesting one another. Existing techniques such as surface mesh decimation, voxelization, or contouring distance level sets do not provide sufficient control over the quality of the output surfaces while maintaining strict nesting. We propose a solution that enables use of application-specific decimation and quality metrics. The method constructs each next-coarsest level of the hierarchy, using a sequence of decimation, flow, and contact-aware optimization steps. From coarse to fine, each layer then fully encages the next while retaining a snug fit. The method is applicable to a wide variety of shapes of complex geometry and topology. We demonstrate the effectiveness of our nested cages not only for multigrid solvers, but also for conservative collision detection, domain discretization for elastic simulation, and cage-based geometric modeling.

You can find the paper on my site.

## CSG Tree operations in libigl

September 22nd, 2015

I’ve added support for constructive solid geometry tree operations in libigl. Check out igl::boolean::CSGTree and the tutorial entry.

The class constructors take advantage of C++ initializer lists to make tree encoding simple using a reverse polish encoding:

// Compute result of (A ∩ B) \ ((C ∪ D) ∪ E)
igl::boolean::CSGTree<MatrixXi> CSGTree =
{{{VA,FA},{VB,FB},"i"},{{{VC,FC},{VD,FD},"u"},{VE,FE},"u"},"m"};