## Archive for the ‘code’ Category

### Using git and github for course assignments

Tuesday, January 10th, 2017

Here’s my tested plan for using git and github to manage student assignments for my upcoming course.

## Initial setup as the instructor:

We’re going to create a student repo and an instructor fork for each assignment. The motivation for separating assignments is to ensure that the diffs that students are submitting for their solutions are tidy. In my case, there’s not much shared code between assignments.

Create a public* repo that students will see: test-gp-hw0-student.

If you try to fork this on Github, it will tell you that you already have a fork. So instead, trick github into forking it via the github importer: import test-gp-hw0-student as a private repo test-gp-hw0-instructor. Now you have two repos. We’ll set up a branch on test-gp-hw0-instructor that tracks test-gp-hw0-student for easy merging:

INSTRUCTOR_REPO=test-gp-hw0-instructor
STUDENT_REPO=test-gp-hw0-student
git clone https://github.com/alecjacobson/$INSTRUCTOR_REPO.git
cd $INSTRUCTOR_REPO
git remote add student https://github.com/alecjacobson/$STUDENT_REPO.git
git fetch student
git checkout -b student-tracker --track student/master
git checkout master
cd ..


To test this out, let’s create a file visible to everyone (students and instructors) in the student repo:

git clone https://github.com/alecjacobson/$STUDENT_REPO.git
cd $STUDENT_REPO
echo "visible to everyone" >> hello-everyone.md
git add hello-everyone.md
git commit -m "add file visible to everyone" hello-everyone.md
git push
cd ..


Now, let’s merge this change into the instructor repo:

cd $INSTRUCTOR_REPO
git checkout master
git pull student master
cat hello-everyone.md
cd ..


Just to be thorough, let’s try the reverse: creating a file just visible to instructors:

cd stewartdent
git remote add instructor https://stewartdent@github.com/alecjacobson/$INSTRUCTOR_REPO.git
git fetch instructor


You should see an error:

remote: Repository not found.
fatal: repository 'https://stewartdent@github.com/alecjacobson/test-gp-hw0-instructor.git/' not found


Now, let’s act as the student to submit the homework:

# cd stewartdent
echo "stewartdent was here" >> assignment-01.md
git add assignment-01.md
git commit -m "Stewart Dent's Assignment 01 submission" assignment-01.md
git push
cd ..


The student has pushed their submission onto their own repo. They can continue to work on it using git in all its glory. When they’re done, they “hand in” their homework via a pull request onto the public test-gp-hw0-student repo. The student does this by visiting the Github page for their own fork of test-gp-hw0-student and clicking “New Pull Request”.

## Grading as the instructor

The instructor can now grade this. I’m imagining that the test-gp-hw0-instructor repo has a solution that can be compared (in some way or another) to the student’s work. To pull the student’s work into the instructor copy (but never merge):

cd $INSTRUCTOR_REPO
git checkout -b stewartdent-master master
git pull https://github.com/stewartdent/test-gp-hw0-student.git master
cat empty-form.md
# Record grade.
git checkout master
git branch -D stewartdent-master


Then the instructor can close the pull request on test-gp-hw0-student.

## Other perks

Questions or discussions related to the homework can be conducted on the “issues” page of test-gp-hw0-student.

* The fact that this repo is public should immediately indicate that students’ work submitted via pull request will also be public and, importantly, visible to other students. Unlike the most common form of cheating, this means that Student A can copy the work of Student B without the explicit consent of Student B. Student B’s only possible defense against this would be to submit their work just before the deadline. As far as I know, I can’t hide pull requests from the public (that would be a nice solution), and I prefer not to use any method that relies on students having to create a private fork, since private repos cost money. (Then again, other classes require very over-priced textbooks, so a 6-month github subscription is not so crazy; but it does go against the open-source spirit, and there’s also the Big Tobacco style “get ’em while they’re young”: https://education.github.com/pack.)

### How does matlab’s hypot function work?

Saturday, January 7th, 2017

The hypot function in matlab purports to be more numerically stable at computing the hypotenuse of a (right-)triangle in 2D. The example that help hypot gives is:

a = 3*[1e300 1e-300];
b = 4*[1e300 1e-300];
c1 = sqrt(a.^2 + b.^2)
c2 = hypot(a,b)


where you see as output:

c1 =

Inf     0


and

c2 =

5e+300       5e-300


This is a compiled built-in function, so you can’t just type open hypot to find out what it’s doing. It might just be pre-dividing by the maximum absolute value. There’s probably a better reference for this, but I found it in: “Vector length and normalization difficulties” by Mike Day.

Continuing this example:

m = max(abs([a;b]))
c3 = m.*sqrt((a./m).^2 + (b./m).^2)


produces

c3 =

5e+300       5e-300


While matlab’s hypot only accepts two inputs, pre-dividing by the maximum obviously extends to any dimension.
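If hypot isn’t available, the same max-scaling trick is easy to write yourself. Here’s a python sketch (stable_norm is my hypothetical name, and the claim that hypot works this way internally is a guess):

```python
import math

def stable_norm(v):
    # Pre-divide by the max absolute entry to avoid overflow/underflow,
    # my guess at what hypot does internally.
    m = max(abs(x) for x in v)
    if m == 0.0:
        return 0.0
    return m * math.sqrt(sum((x / m) ** 2 for x in v))

print(stable_norm([3e300, 4e300]))    # approximately 5e+300 (naive a^2+b^2 overflows to inf)
print(stable_norm([3e-300, 4e-300]))  # approximately 5e-300 (naive formula underflows to 0)
```

Note that unlike matlab’s two-input hypot, this takes a whole vector, so it also works in any dimension.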

### Triangle area in nD

Saturday, January 7th, 2017

My typical go-to formula for the area of a triangle with vertices in nD is Heron’s formula. Actually, this is a formula for the area of a triangle given its side lengths. But it’s easy to compute side lengths in any dimension: e.g., a = sqrt((B-C).(B-C)).

Kahan showed that the vanilla Heron’s formula is numerically poor if your (valid) triangle side lengths are given as floating point numbers. He improved this formula by sorting the side-lengths and carefully rearranging the terms in the formula to reduce roundoff.
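For reference, here’s my transcription of Kahan’s rearranged formula as a python sketch (the sorting and parenthesization carry the magic; treat this as a sketch, not a verified port):

```python
import math

def kahan_area(a, b, c):
    # Kahan's rearrangement of Heron's formula; sort so that a >= b >= c.
    a, b, c = sorted((a, b, c), reverse=True)
    # The parenthesization below is essential and must not be "simplified" away.
    s = (a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c))
    if s < 0.0:
        # Kahan's assertion case: the lengths violate the triangle inequality.
        return float('nan')
    return 0.25 * math.sqrt(s)

print(kahan_area(3.0, 4.0, 5.0))  # 6.0
```

For invalid side lengths like (1, 1, 3), this returns NaN, which is exactly the situation discussed below.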

However, if your vertices don’t actually live in R^n but rather in F^n, where F is the fixed-precision floating-point grid, then computing side lengths will in general incur some roundoff error. And it may so happen (and it does so happen) that this roundoff error invalidates the triangle side lengths. Here, invalid means that the side lengths don’t obey the triangle inequality. This might happen for nearly degenerate (zero area) triangles: one (floating point) side length ends up longer than the sum of the others. Kahan provides an assertion to check for this, but there’s no proposed remedy that I could find. One could just declare that these edge lengths must have come from a zero-area triangle. But returning zero might let the user happily work with very nonsensical triangle side lengths in other contexts not coming from an embedded triangle. You could have two versions: one for “known embeddings” (with assertions off, returning 0) and one for “intrinsic/metric only” (with assertions on, returning NaN). Or you could try to fudge in an epsilon and hope you choose it above the roundoff error threshold but below the tolerance for perceived “nonsense”. These are messy solutions.

An open question (open in the personal sense of “I don’t know the answer”) is what is the most robust way to compute triangle area in nD with floating point vertex positions. A possible answer might be: compute the darn floating point edge lengths, use Kahan’s algorithm and replace NaNs with zeros. That seems unlikely to me because (as far as I can tell) there are 4 sqrt calls, major sources of floating point error.

Re-reading Shewchuk’s robustness notes, I saw his formula for triangle area for points A,B,C in 3D:

Area3D(A,B,C) = sqrt( Area2D(Axy,Bxy,Cxy)^2 + Area2D(Ayz,Byz,Cyz)^2 + Area2D(Azx,Bzx,Czx)^2 ) / 2

where Area2D(A,B,C) is computed as the determinant of [[A;B;C] 1]. This formula reduces the problem of computing 3D triangle area to computing 2D triangle areas on the “shadows” (projections) of the triangle on each axis-aligned plane. This led me to think of a natural generalization to nD:

AreaND(A,B,C) = sqrt( ∑_{i<j} Area2D(Aij,Bij,Cij)^2 ) / 2

This formula computes the area of the shadow on all (N choose 2) axis-aligned planes. Since the sqrt receives a sum of squares as its argument, there’s no risk of getting a NaN. There’s a clear drawback: this formula is O(N^2) vs. Heron’s/Kahan’s O(N).
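As a quick sanity check of this generalization, here’s a small python sketch (my own, with area_2d returning the determinant, i.e. twice the signed area, to match the division by 2 outside the sqrt):

```python
import math
from itertools import combinations

def area_2d(ax, ay, bx, by, cx, cy):
    # The determinant from the post: twice the signed area of a 2D triangle.
    return (bx - ax) * (cy - ay) - (cx - ax) * (by - ay)

def area_nd(A, B, C):
    # Sum squared "shadow" areas over all (n choose 2) axis-aligned planes.
    s = sum(area_2d(A[i], A[j], B[i], B[j], C[i], C[j]) ** 2
            for i, j in combinations(range(len(A)), 2))
    return 0.5 * math.sqrt(s)

print(area_nd([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # 0.5
```

In 3D this reduces to the cross-product formula (the three shadow determinants are exactly the cross product’s components), which is why it agrees with Shewchuk’s version.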

Friday, January 6th, 2017

Like clockwork, my matlab updates have gotten out of sync with Xcode updates. It seems like fixing this SDK error always requires a different hack each year. This year I got the error:

No supported compiler or SDK was found. For options, visit
http://www.mathworks.com/support/compilers/current_release/.


To fix it, I replaced all occurrences of 10.9 with 10.11 in /Applications/MATLAB_R2017a.app/bin/maci64/mexopts/clang{++,}_maci64.xml

I’m still getting linker warnings:

ld: warning: object file was built for newer OSX version (10.11) than being linked (10.9)


For now, I’m assuming that I can ignore them. We’ll see how far that gets me.

### Plot a bunch of edges in matlab with per-edge color data

Monday, December 19th, 2016

Suppose you have a list of vertices V (#V by dim) and a list of edges E (#E by 2) indexing V, then one hacky way to draw all of these edges with per-vertex color data DV (#V by 1) is to use trisurf:

trisurf(E(:,[1 1 2]),V(:,1),V(:,2),V(:,3),'CData',DV,'EdgeColor','interp');


However, if you try to use this trick to draw per-edge color data DE (#E by 1), you’ll find that 'EdgeColor','flat' does not work. Instead you can explode your compact “mesh” of edges into individual edges and repeat your edge color data for each edge endpoint:

trisurf(reshape([1:size(E,1),1:size(E,1)*2],[],3),V(E(:),1),V(E(:),2),V(E(:),3),'CData',repmat(DE,2,1),'EdgeColor','interp');
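For comparison outside matlab: python/matplotlib supports per-edge colors directly via LineCollection, no exploding required. A minimal sketch with made-up V, E, DE:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
from matplotlib.collections import LineCollection
import matplotlib.pyplot as plt

# Made-up example data: 4 vertices, 3 edges, one scalar per edge.
V = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
E = np.array([[0, 1], [1, 2], [2, 3]])
DE = np.array([0.0, 0.5, 1.0])

segs = V[E]  # (#E, 2, dim) array of segment endpoints
lc = LineCollection(segs, array=DE, cmap="viridis")  # per-edge scalar colors
fig, ax = plt.subplots()
ax.add_collection(lc)
ax.autoscale()
fig.savefig("edges.png")
```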


### Command line program to view 3d meshes from files and piped input

Thursday, December 15th, 2016

Here’s a little C++ program to directly render meshes in a 3D viewer from the command line.

This lets you write little test programs without worrying about linking to a 3D viewer. You just need to output a mesh in a standard format. For example, here’s a tiny program that outputs a cube in the .off format:

#include <igl/read_triangle_mesh.h>
#include <Eigen/Core>
#include <iostream>

int main(int argc, char * argv[])
{
using namespace Eigen;
MatrixXd V(8,3);
MatrixXi Q(6,4);
V<<
0,0,1,
0,1,1,
1,1,1,
1,0,1,
0,0,0,
0,1,0,
1,1,0,
1,0,0;
Q<<
3,2,1,0,
0,1,5,4,
6,5,1,2,
3,7,6,2,
4,7,3,0,
4,5,6,7;
std::cout<<
"OFF\n"<<V.rows()<<" "<<Q.rows()<<" 0\n"<<
V.format(IOFormat(FullPrecision,DontAlignCols," ","\n","","","","\n"))<<
(Q.array()).format(IOFormat(FullPrecision,DontAlignCols," ","\n","4 ","","","\n"));
return EXIT_SUCCESS;
}


Compile this into cube_off then issue:

./cube_off | viewmesh


**Update:** And here’s a funny little one-liner you can call from matlab to display a mesh via the .obj format:

system(sprintf('echo \"%s%s\" | /usr/local/bin/viewmesh',sprintf('v %0.17f %0.17f %0.17f\n',V'),sprintf('f %d %d %d\n',F')))


### Background computation threads with igl::viewer::Viewer

Sunday, December 4th, 2016

Here’s a minimal example showing how to launch background computation threads for each mesh in a list of meshes. Meanwhile the main thread runs a mesh viewer with all meshes concatenated into one huge multi-component mesh. Whenever a computation thread signals that an update to the mesh needs to be made, the main thread will re-concatenate the meshes and update the viewer. In this example, the “computation” is determining how much to move a clock “hand” (random mesh).

Here’s the program running on the cow, cheburashka and knight models:

// Tiny example to demonstrate spawning a background computation thread for
// each mesh in a list and update the viewer when computation results in a
// change to (one of) the meshes.
//
// In this example, three meshes are read in and interpreted as "hands" of a
// clock. The "computation" is just a busy-wait until the hand should move
// (after one second, one minute, one hour). This is, of course, a terrible way
// to implement a clock.
//
// ./test libigl/tutorial/shared/{cow.off,cheburashka.off,decimated-knight.off}
#include <igl/read_triangle_mesh.h>
#include <igl/combine.h>
#include <igl/viewer/Viewer.h>
#include <igl/PI.h>
#include <Eigen/Geometry>
#include <thread>
#include <mutex>
#include <functional>
int main(int argc, char * argv[])
{
using namespace std;
using namespace Eigen;
// Read in k meshes (<=3 will be used)
std::vector<Eigen::MatrixXd> VV(argc-1);
std::vector<Eigen::MatrixXi> FF(argc-1);
for(int i = 1;i<argc;i++)
{
igl::read_triangle_mesh(argv[i],VV[i-1],FF[i-1]);
VV[i-1].col(0).array() -= VV[i-1].col(0).mean();
VV[i-1].col(1).array() -= VV[i-1].col(1).minCoeff();
}
bool continue_computing = true;
// Flag to redraw and mutex to guard it
bool redraw = false;
std::mutex m;
// After waiting tic seconds, rotate V by deg degrees
const auto rot = [&continue_computing,&redraw,&m](
const double tic, const double deg, Eigen::MatrixXd& V)
{
while(continue_computing)
{
// Let's watch this at 500x: Real clocks are slow.
std::this_thread::sleep_for(std::chrono::milliseconds((int)tic*5));
V *= Eigen::AngleAxisd(deg/180.*igl::PI,
Eigen::Vector3d(0,0,1)).toRotationMatrix();
{
std::lock_guard<std::mutex> lock(m);
redraw = true;
}
}
};
// Launch background "computation" threads for each "hand" of the clock
// std::ref is important, otherwise std::bind passes by copy
std::vector<std::thread> threads;
switch(VV.size())
{
case 3:
threads.emplace_back(std::bind(rot,60.*60.,30,std::ref(VV[2])));
case 2:
threads.emplace_back(std::bind(rot,60.,6,std::ref(VV[1])));
case 1:
threads.emplace_back(std::bind(rot,1.,6,std::ref(VV[0])));
default: break;
}
igl::viewer::Viewer v;
v.data.clear();
// Helper function to view k meshes
const auto & set_meshes = [](
const std::vector<Eigen::MatrixXd> & VV,
const std::vector<Eigen::MatrixXi> & FF,
igl::viewer::Viewer & v)
{
Eigen::MatrixXd V;
Eigen::MatrixXi F;
igl::combine(VV,FF,V,F);
v.data.set_mesh(V,F);
};
set_meshes(VV,FF,v);
// Continuous draw loop. TODO: trigger draws using glfwPostEmptyEvent
v.core.is_animating = true;
// Before every draw check if the meshes have changed.
v.callback_pre_draw =
[&redraw,&m,&VV,&FF,&set_meshes](igl::viewer::Viewer & v)->bool
{
if(redraw)
{
set_meshes(VV,FF,v);
{
std::lock_guard<std::mutex> lock(m);
redraw = false;
}
}
return false;
};
v.launch();
// Tell computation threads to stop
continue_computing = false;
// Join with computation threads: return to serial main thread.
for(auto & t : threads) if(t.joinable()) t.join();
}


This is pretty hacky, but it will allow me to use the standard libigl viewer for making comparisons of mesh-editing methods whose performances are very different.

### Convert two-page color scan of book into monochrome single pdf

Friday, November 25th, 2016

Two days in a row now I’ve had to visit the physical library to retrieve an old paper. Makes me feel very authentic as an academic. Our library has free scanning facilities, but the resulting PDF will have a couple of problems. If I’m scanning a book, then each page of the pdf actually contains 2 pages of the book. Depending on the scanner settings, I might also accidentally have my 2 pages running vertically instead of horizontally. Finally, if I forget to set the color settings on the scanner, then I get a low-contrast color image instead of a high-contrast monochrome scan.

Here’s a preview of pdf of an article from a book I scanned that has all these problems:

If this pdf is in input.pdf then I call the following commands to create output.pdf:

pdfimages input.pdf .scan
mogrify -format png -monochrome -rotate 90 -crop 50%x100% .scan*
convert +repage .scan*png output.pdf
rm .scan*


I’m pretty happy with the output. There are some speckles, but the simple -monochrome flag does a fairly good job.

I use Adobe Acrobat Pro to run OCR so that the text is selectable (haven’t found a good command line solution for that, yet).

Note: I think the -rotate 90 is needed because the images are stored rotated by -90 degrees but the input.pdf is compositing them after rotation. This hints that this script won’t generalize to complicated pdfs. But we’re safe here because a scanner will probably apply the same transformation to each page.

### Understanding mass matrix lumping in terms of functions spaces

Monday, November 14th, 2016


Mass matrix lumping is the process of using a diagonal mass matrix instead of the “full” mass matrix resulting from the Galerkin Finite Element Method. Lumping is often recommended because a diagonal mass matrix is easy to invert (just invert the entries along the diagonal). As the name implies, diagonalizing the mass matrix is often justified by arguing rather hand-wavily that we’re just “lumping all of the mass or integration” at the vertices of the mesh rather than integrating over the elements. This sounds intuitive, but it leaves me wanting a more formal justification.

Here’s an attempt to justify mass lumping in terms of a mixed Finite Element approach to discretizing problems of the form: minimize,

$$E(u) = \frac{1}{2} \int_\Omega (u-f)^2 + \nabla u \cdot \nabla u\ dA$$

If we discretize this irreducible form directly using, for example, piecewise-linear basis functions over a mesh (e.g., $$u \approx \sum_i \phi_i u_i$$, and we use $$\u$$ to also mean the vector of coefficients $$u_i$$), then we get:

$$E(\u) = \frac{1}{2} (\u-\f)^TM(\u-\f) + \frac{1}{2} \u^TK\u$$

where we call $$M$$ the mass matrix so that $$M_{ij} = \int_\Omega \phi_i \phi_j\ dA$$. Then when we “mass lump” we are replacing $$M$$ with a diagonal matrix $$D$$ such that $$D_{ii} = \sum_j M_{ij}$$ or $$D_{ii} =$$ the area associated with vertex $$i$$. This “associated area” might be the Voronoi area.
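To make the lumping rule concrete, here’s a tiny numpy sketch (my own example on a uniform 1D mesh of piecewise-linear hat functions, not from any particular library): the rows of the full mass matrix sum to the length associated with each vertex.

```python
import numpy as np

# Full Galerkin mass matrix for piecewise-linear hats on a 1D mesh with
# n equal elements of length h, vs. its lumped diagonal D (row sums).
n, h = 4, 1.0
M = np.zeros((n + 1, n + 1))
for e in range(n):
    # Element mass matrix for linear basis on one segment: h/6 * [[2,1],[1,2]]
    Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    M[e:e + 2, e:e + 2] += Me
D = np.diag(M.sum(axis=1))  # lumping: D_ii = sum_j M_ij
print(np.diag(D))  # interior entries are h, boundary entries h/2
```

The diagonal entries are exactly the “associated lengths” of each vertex: half of each incident element.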

In physical dynamics, if $$\u=\u^t$$ are displacements of some dynamic system and $$\f=\u^{t-\Delta t}$$ are the mesh displacements of the previous time step then this gives us a physical formula for kinetic and potential energy:

$$E(\u^t) = \frac{1}{2 \Delta t^2} (\u^t-\u^{t-\Delta t})^T M (\u^t-\u^{t-\Delta t}) + \frac{1}{2} {\u^t}^TK\u^t$$

where the stiffness matrix $$K$$ now might measure, for example, linear elasticity (I’m mostly going to ignore what’s going on with $$K$$).

When we write this physical energy in the continuous form we’ll introduce a variable to keep track of velocities: $$\dot{u} = \partial u / \partial t$$,

$$E(u,\dot{u}) = \frac{1}{2} \int_\Omega \dot{u}^2 + (\nabla u+\nabla u^T) \cdot (\nabla u + \nabla u^T)\ dA$$

$$\dot{u} = \partial u / \partial t$$

Now we have the kinetic energy in terms of velocity $$\dot{u}$$ and the potential energy in terms of displacements $$u$$. This is a very natural mixed form. If we discretize both $$u$$ and $$\dot{u}$$ using piecewise-linear basis functions over the same mesh, then we’ll have done a lot of re-shuffling to get the identical discretization above. In particular, we’ll just get the same non-diagonal mass matrix $$M$$.

Instead, we’ll discretize only displacements $$u$$ using piecewise-linear basis functions, and for velocities $$\dot{u}$$ we’ll use piecewise-constant basis functions defined on the dual mesh. A standard choice for the dual mesh is the Voronoi dual mesh, where each zero-dimensional vertex becomes a zero-codimensional “cell” whose area/volume is the Voronoi area/volume of that vertex as a seed amongst all other vertices as fellow seeds. One can construct this dual mesh by connecting circumcenters of all of the triangles in the primary mesh. An alternative dual mesh, which we’ll call the barycentric dual, connects all barycenters of the triangles in the primary mesh. All this to say that we approximate the velocities as a sum over piecewise-constant basis functions located at primary mesh vertices (aka dual mesh cells): $$\dot{u}^t \approx \sum_i \psi_i \dot{u}^t_i$$. The basis functions $$\psi_i$$ and $$\phi_i$$ are in correspondence so that $$\dot{\u}^t$$ collects elements $$\dot{u}^t_i$$ in the same number and order as $$\u$$.

Now we can discretize the energy as:

$$E(\dot{\u}^t,\u^t) = \frac{1}{2} {\dot{\u}^t}^T D \dot{\u}^t + \frac{1}{2} {\u^t}^T K \u^t,$$

and voilà the diagonal mass matrix $$D$$ appears, where

$$D_{ij} = \int_\Omega \psi_i \psi_j\ dA = \begin{cases} \text{area}_i & i=j \\ 0 & i\neq j \end{cases}$$

As a side note, if we use the barycentric dual mesh then $$\text{area}_i = \sum_j M_{ij}$$; if we use the Voronoi dual mesh then $$\text{area}_i$$ is the Voronoi area, which may become negative for non-Delaunay meshes (and we might want to use the “fix” described in “Discrete Differential-Geometry Operators for Triangulated 2-Manifolds” [Mark Meyer et al. 2002]).

Our work is not quite done since we haven’t coupled our discretizations of velocity and displacements together. We haven’t discretized the constraint: $$\dot{u} = \partial u / \partial t$$. We could just quietly say that they “need to agree” at mesh vertices so that $$\dot{u}^t_i = (u^t_i - u_i^{t - \Delta t})/\Delta t$$. If we do, then we can immediately substitute out all of the velocity variables and arrive at a “flattened” form of the energy only in terms of displacements $$\u$$:

$$E(\u^t) = \frac{1}{2 \Delta t^2} (\u^t-\u^{t-\Delta t})^T D (\u^t-\u^{t-\Delta t}) + \frac{1}{2} {\u^t}^T K \u^t.$$

Deeper down the rabbit hole: This is a bit unsettling because it seems like we just pushed the “lumping magic” deeper into the constraints between velocities and displacements. Indeed, if we try to enforce this constraint by the Lagrange multiplier method:

$$L(u,\dot{u},\lambda) = E(u,\dot{u}) + \frac{1}{2} \int_\Omega \lambda \left(\dot{u} – \frac{\partial u}{\partial t}\right) \ dA$$

(which admittedly reads a bit like nonsense because before, the only difference between $$\dot{u}$$ and $$\partial u / \partial t$$ was notation, but think of $$\dot{u}$$ as a first-class variable),

then we have to pick a discretization for the Lagrange multiplier function $$\lambda$$. Surprisingly, if we use either piecewise-linear basis functions on the primary mesh or piecewise-constant basis functions on the dual mesh, then we’ll get a non-diagonal matrix coupling $$\dot{\u}$$ and $$\u$$, which means that we can’t simply substitute out $$\dot{\u}$$.

To tie things up, I found one (potentially dubious) discretization of $$\lambda$$ that makes things work: Dirac basis functions

$$\delta_i(x) = \delta(x-x_i)$$

Where $$\delta(x)$$ is Dirac’s delta function, so that $$\delta(x) = \begin{cases} \infty & x=0 \\ 0 & x\neq 0 \end{cases}$$ and $$\int \delta(x) \ dA = 1$$.

Assuming I have permission from the Discretization Gods, I let $$\lambda \approx \sum_i \delta_i \lambda_i$$ where (by abuse of notation/LaTeX’s inability to typeset bold Greek letters) $$\lambda$$ is a vector of coefficients $$\lambda_i$$ in corresponding number and order with $$\u$$ and $$\dot{\u}$$.

Since (crossing fingers) $$\int_\Omega \delta_i \psi_j \ dA = \begin{cases} 1 & i=j \\ 0 & i\neq j\end{cases}$$ and $$\int_\Omega \delta_i \phi_j \ dA = \begin{cases} 1 & i=j \\ 0 & i\neq j\end{cases}$$, our discretized Lagrangian is:

$$L(\u^t,\dot{\u}^t,\lambda) = \frac{1}{2} {\dot{\u}^t}^T D \dot{\u}^t + \frac{1}{2} {\u^t}^T K \u^t + \frac{1}{2} \lambda^T I \left(\dot{\u}^t - \frac{\u^t - \u^{t-\Delta t}}{\Delta t} \right),$$

where $$I$$ truly madly deeply is the identity matrix. This implies that a direct substitution of $$\dot{\u}^t$$ with $$(\u^t – \u^{t-\Delta t})/\Delta t$$ is valid.

Update: Some papers making similar claims:

“A note on mass lumping and related processes” [Hinton et al. 1976]
“Analysis Of Structural Vibration And Response” [Clough 1971] (cited in [Hinton et al. 1976] but I haven’t found a copy yet)

### Implementing QSlim mesh simplification using libigl

Friday, November 4th, 2016

One of my research projects is looking at mesh simplification, and the perennial comparison is always to Garland and Heckbert’s “Surface Simplification Using Quadric Error Metrics” (aka QSlim). And relevant to us, the lesser-known followup, “Simplifying Surfaces with Color and Texture using Quadric Error Metrics” (aka QSlim in nD).

Currently, libigl has built in edge-collapse based mesh simplification with an exposed function handle: cost_and_placement which determines the cost of collapsing an edge and the placement. This function is used for two purposes: to initialize the heap of edges to collapse and to update edges in that heap after a collapse.

Previously, I had only implemented a very simple metric: cost is the length of the edge and collapsed vertices are placed at edge midpoints.

So a bit out of research necessity and a bit just for a challenge, I timed how long it would take for me to implement QSlim in nD.

So, 1 hour and 20 minutes later, here is my QSlim in nD demo based off of the decimation entry in the libigl tutorial:

#include <igl/collapse_edge.h>
#include <igl/edge_flaps.h>
#include <igl/read_triangle_mesh.h>
#include <igl/viewer/Viewer.h>
#include <Eigen/Core>
#include <iostream>
#include <set>

int main(int argc, char * argv[])
{
using namespace std;
using namespace Eigen;
using namespace igl;
cout<<"Usage: ./703_Decimation_bin [filename.(off|obj|ply)]"<<endl;
cout<<"  [space]  toggle animation."<<endl;
cout<<"  'r'  reset."<<endl;
// Load a closed manifold mesh
string filename;
if(argc>=2)
{
filename = argv[1];
}else
{
cerr<<"Error: expected mesh filename as argument."<<endl;
return EXIT_FAILURE;
}
MatrixXd V,OV;
MatrixXi F,OF;
read_triangle_mesh(filename,OV,OF);

igl::viewer::Viewer viewer;

// Prepare array-based edge data structures and priority queue
VectorXi EMAP;
MatrixXi E,EF,EI;
typedef std::set<std::pair<double,int> > PriorityQueue;
PriorityQueue Q;
std::vector<PriorityQueue::iterator > Qit;
// If an edge were collapsed, we'd collapse it to these points:
MatrixXd C;
int num_collapsed;

// Function for computing cost of collapsing edge (length) and placement
// (midpoint)
const auto & shortest_edge_and_midpoint = [](
const int e,
const Eigen::MatrixXd & V,
const Eigen::MatrixXi & /*F*/,
const Eigen::MatrixXi & E,
const Eigen::VectorXi & /*EMAP*/,
const Eigen::MatrixXi & /*EF*/,
const Eigen::MatrixXi & /*EI*/,
double & cost,
RowVectorXd & p)
{
cost = (V.row(E(e,0))-V.row(E(e,1))).norm();
p = 0.5*(V.row(E(e,0))+V.row(E(e,1)));
};

// Quadrics per vertex
typedef std::tuple<Eigen::MatrixXd,Eigen::RowVectorXd,double> Quadric;
std::vector<Quadric> quadrics;

// c **is** allowed to be a or b.
const auto & plus = [](const Quadric & a, const Quadric & b, Quadric & c)
{
std::get<0>(c) = (std::get<0>(a) + std::get<0>(b)).eval();
std::get<1>(c) = (std::get<1>(a) + std::get<1>(b)).eval();
std::get<2>(c) = (std::get<2>(a) + std::get<2>(b));
};
// State variables keeping track of whether we've just collapsed edge (v1,v2)
int v1 = -1;
int v2 = -1;
const auto & qslim_optimal = [&quadrics,&plus,&v1,&v2](
const int e,
const Eigen::MatrixXd & V,
const Eigen::MatrixXi & /*F*/,
const Eigen::MatrixXi & E,
const Eigen::VectorXi & /*EMAP*/,
const Eigen::MatrixXi & /*EF*/,
const Eigen::MatrixXi & /*EI*/,
double & cost,
RowVectorXd & p)
{
// Then we just collapsed (v1,v2)
if(v1>=0 && v2>=0)
{
plus(quadrics[v1],quadrics[v2],quadrics[v1<v2?v1:v2]);
v1 = -1;
v2 = -1;
}
// Combined quadric
Quadric quadric_p;
plus(quadrics[E(e,0)],quadrics[E(e,1)],quadric_p);
// Quadric: p'Ap + 2b'p + c
// optimal point: Ap = -b, or rather because we have row vectors: pA=-b
const auto & A = std::get<0>(quadric_p);
const auto & b = std::get<1>(quadric_p);
const auto & c = std::get<2>(quadric_p);
p = -b*A.inverse();
cost = p.dot(p*A) + 2*p.dot(b) + c;
};

// Function to reset original mesh and data structures
const auto & reset = [&]()
{
F = OF;
V = OV;
edge_flaps(F,E,EMAP,EF,EI);
Qit.resize(E.rows());
Q.clear();

const int dim = V.cols();
// Quadrics per face
std::vector<Quadric> face_quadrics(F.rows());
// Initialize each vertex quadric to zeros
quadrics.resize(
V.rows(),{Eigen::MatrixXd::Zero(dim,dim),Eigen::RowVectorXd::Zero(dim),0});
Eigen::MatrixXd I = Eigen::MatrixXd::Identity(dim,dim);
// Rather than initializing with zeros, initialize with a small amount of
// energy pulling toward the original vertex position
const double w = 1e-10;
for(int v = 0;v<V.rows();v++)
{
std::get<0>(quadrics[v]) = w*I;
Eigen::RowVectorXd Vv = V.row(v);
std::get<1>(quadrics[v]) = w*-Vv;
std::get<2>(quadrics[v]) = w*Vv.dot(Vv);
}
// Generic nD qslim from "Simplifying Surfaces with Color and Texture
// using Quadric Error Metric" (follow up to original QSlim)
for(int f = 0;f<F.rows();f++)
{
Eigen::RowVectorXd p = V.row(F(f,0));
Eigen::RowVectorXd q = V.row(F(f,1));
Eigen::RowVectorXd r = V.row(F(f,2));
Eigen::RowVectorXd pq = q-p;
Eigen::RowVectorXd pr = r-p;
// sqrt of Gram determinant = area of parallelogram
double area = sqrt(pq.squaredNorm()*pr.squaredNorm() - pow(pr.dot(pq),2));
Eigen::RowVectorXd e1 = pq.normalized();
Eigen::RowVectorXd e2 = (pr-e1.dot(pr)*e1).normalized();
// e1 and e2 should be perpendicular
assert(std::abs(e1.dot(e2)) < 1e-10);
// Weight face's quadric (v'*A*v + 2*b'*v + c) by area
const Eigen::MatrixXd A = I-e1.transpose()*e1-e2.transpose()*e2;
const Eigen::RowVectorXd b = p.dot(e1)*e1 + p.dot(e2)*e2 - p;
const double c = (p.dot(p) - pow(p.dot(e1),2) - pow(p.dot(e2),2));
face_quadrics[f] = { area*A, area*b, area*c };
// Throw at each corner
for(int c = 0;c<3;c++)
{
plus(
quadrics[F(f,c)],
face_quadrics[f],
quadrics[F(f,c)]);
}
}

C.resize(E.rows(),V.cols());
VectorXd costs(E.rows());
v1 = -1;
v2 = -1;
for(int e = 0;e<E.rows();e++)
{
double cost = e;
RowVectorXd p(1,3);
qslim_optimal(e,V,F,E,EMAP,EF,EI,cost,p);
C.row(e) = p;
Qit[e] = Q.insert(std::pair<double,int>(cost,e)).first;
}
num_collapsed = 0;
viewer.data.clear();
viewer.data.set_mesh(V,F);
viewer.data.set_face_based(true);
};

const auto &pre_draw = [&](igl::viewer::Viewer & viewer)->bool
{
// If animating then collapse 10% of edges
if(viewer.core.is_animating && !Q.empty())
{
bool something_collapsed = false;
// collapse edge
const int max_iter = std::ceil(0.01*Q.size());
for(int j = 0;j<max_iter;j++)
{
std::pair<double,int> p = *(Q.begin());
bool shouldnt_collapse = false;
if((! Q.empty()) &&
(p.first != std::numeric_limits<double>::infinity()))
{
v1 = E(p.second,0);
v2 = E(p.second,1);
}else
{
v1 = -1;
v2 = -1;
shouldnt_collapse = true;
}
if(!collapse_edge(qslim_optimal,V,F,E,EMAP,EF,EI,Q,Qit,C))
{
break;
}else
{
// we just collapsed.
assert(!shouldnt_collapse);
}
something_collapsed = true;
num_collapsed++;
}

if(something_collapsed)
{
viewer.data.clear();
viewer.data.set_mesh(V,F);
viewer.data.set_face_based(true);
}
}
return false;
};

const auto &key_down =
[&](igl::viewer::Viewer &viewer,unsigned char key,int mod)->bool
{
switch(key)
{
case ' ':
viewer.core.is_animating ^= 1;
break;
case 'R':
case 'r':
reset();
break;
default:
return false;
}
return true;
};

reset();
viewer.core.is_animating = true;
viewer.callback_key_down = key_down;
viewer.callback_pre_draw = pre_draw;
return viewer.launch();
}


I’m pretty happy that this didn’t take me all day, but admittedly it should have been faster. One confusing part was that the current API in libigl for edge collapsing doesn’t have an explicit function handle that’s called when an edge is successfully collapsed. Detecting this is necessary to propagate the quadric to the collapsed vertex (but isn’t necessary for simple costs like edge length). I faked this for now using v1 and v2 as state variables. Probably I’ll change the libigl API to accommodate this. The other time sink was worrying about degenerate quadrics (when the system matrix is singular). Rather than try to detect this categorically, I opted to sprinkle in a little energy pulling vertices toward their (ancestors’) original positions. This seems to work fairly well.
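For reference, the quadric algebra above can be summarized in a standalone numpy sketch (hypothetical helper names; it mirrors the w-regularized initialization and the pA = -b solve from the listing, but is not the libigl implementation):

```python
import numpy as np

def point_quadric(v, w=1e-10):
    # Regularization quadric pulling toward original position v: mirrors the
    # (w*I, -w*v, w*v.v) initialization in the listing above (an assumption).
    v = np.asarray(v, dtype=float)
    return (w * np.eye(len(v)), -w * v, w * v.dot(v))

def plane_quadric(p, e1, e2, area):
    # Area-weighted face quadric: squared distance to the plane through p
    # spanned by the orthonormal (row) vectors e1, e2.
    A = np.eye(len(p)) - np.outer(e1, e1) - np.outer(e2, e2)
    b = p.dot(e1) * e1 + p.dot(e2) * e2 - p
    c = p.dot(p) - p.dot(e1) ** 2 - p.dot(e2) ** 2
    return (area * A, area * b, area * c)

def optimal_point(q):
    # Quadric cost is p A p' + 2 b.p + c; minimizer solves pA = -b.
    A, b, c = q
    p = -np.linalg.solve(A.T, b)
    cost = p.dot(p @ A) + 2.0 * p.dot(b) + c
    return p, cost

# Collapsing two pure "point" quadrics places the vertex at their midpoint:
q = tuple(x + y for x, y in zip(point_quadric([0., 0., 0.], w=1.0),
                                point_quadric([2., 0., 0.], w=1.0)))
p, cost = optimal_point(q)
print(p)  # midpoint (1, 0, 0)
```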