I’ve put up the code for my old prototype program corresponding to the paper “Fast Automatic Skinning Transformations” on GitHub. Enjoy.
Archive for February, 2015
Since publishing our paper in 2012, I’ve learned how to bundle up Mac OS X applications into standalone .app’s which include all dependencies (dynamic libraries, etc.). This means no installation is necessary.
We’ve had the code up for this project for a while now and it’s now integrated into libigl. You can read about it in the tutorial. But hopefully this little app is fun to try now! It also has some features not in the paper, like dynamics.
Eventually I’ll try to host the code for this demo on GitHub. But until then, download FAST.app. Inside the zip is
FAST.app and three example projects
Launch the program by double clicking on FAST.app
Open a “Project” directory using APPLE + O
Press ‘t’ to toggle displaying the filled triangles of the mesh.
Press ‘l’ to toggle displaying the wireframe of the mesh.
The “state” of this program is partly determined by which shader is being used. To toggle between shaders, use the ‘<‘ and ‘>’ keys.
Switch to the LBS shader.
Press ‘s’ to toggle displaying the skeleton
Click on individual bones or endpoints to select them.
Press ‘r’ to reset all bones to their rest positions.
Press ‘R’ to reset selected bones.
Right-click and drag on an endpoint to rotate the bone about its parent
Right-click and drag on a bone to twist about its axis
Left-click and drag on a bone or endpoint to translate (not so useful, yet)
The “Fast Automatic Skinning Transformations” method may be activated by setting appropriate parameters in the “AutoDOF” group of the GUI and toggling “Auto DOF” by pressing ‘D’.
Press ‘D’ to turn on “Auto DOF” and wait for precomputation to complete.
Left-click and drag on Yellow endpoints to set position constraints.
Select an endpoint and hit ‘f’ to toggle its type:
| Color  | Constraint type                                |
|--------|------------------------------------------------|
| Red    | fully constrained                              |
| Pink   | linearly constrained (position is free)        |
| Yellow | positionally constrained (linear part is free) |
| Green  | completely free                                |
Right-click and drag red endpoints to apply full rigid transformations
Help for the GUI is available by clicking the little question mark in the bottom left corner.
I recently (re)discovered that if you set the ‘CData’ value of a face to nan the face will be hidden. This can sometimes be useful if you want to use the wireframe of a model to give context but focus on just a few faces. Here’s an example:
% selected faces
I = sparse(125,1,1,size(F,1),1);
A = facet_adjacency_matrix(F);
while true
  % Set all but selected to nan
  tsurf(F,V,'CData',sparse(find(~I),1,nan,size(F,1),1));
  view(-48,4);
  set(gca,'Visible','off');
  apply_ambient_occlusion();
  drawnow;
  if nnz(I)==size(F,1)
    break;
  end
  I = A*I;
end
Here’s one way to compute a triangle’s normal using floating point arithmetic that is stable with respect to the order the triangle’s vertices are given in. To demonstrate the issue, consider:
% Vertex positions
A = [1/8 1 1/6];
B = [1/3 1/5 1/7];
C = [1/4 1/9 1/2];
% normals via cross product of different edge vectors
NB = cross(A-B,C-B);
NC = cross(B-C,A-C);
NA = cross(C-A,B-A);
NA-NB
NB-NC
ans =
            0            0  -2.7756e-17
ans =
   5.5511e-17   1.3878e-17            0
Computing the normals three different ways, I get three different floating point errors. The right thing to do, I guess, is to analyze the vertex positions and design a normal computation that minimizes floating point error (the solution hopefully being stable via uniqueness). I took a lazier route and instead take a careful average of the three results. This might also have the effect of cutting down floating point error a bit. But at least it’s stable.
Unfortunately it’s not enough to naively sum NA, NB, and NC entry-wise and divide by 3. The summation could incur roundoff error depending on the order of the three arguments. So first we must sort the arguments. If we’re going to the trouble of sorting, we might as well sort in a way that reduces roundoff error: descending order of absolute value. The MATLAB syntax for this is a little clunky:
NABC = [NA(:) NB(:) NC(:)];
[~,I] = sort(abs(NABC),2,'descend');
sNABC = reshape( ...
  NABC(sub2ind(size(NABC),repmat(1:size(NABC,1),size(NABC,2),1)',I)), ...
  size(NABC));
N = 1/3*reshape(sum(sNABC,2),size(NA));
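For what it’s worth, here is the same trick as a Python/NumPy sketch (the NumPy translation and the function name stable_normal are my additions here, not part of gptoolbox):

```python
import numpy as np

def stable_normal(A, B, C):
    # Three algebraically equivalent normals; each accumulates
    # different floating point rounding.
    NA = np.cross(C - A, B - A)
    NB = np.cross(A - B, C - B)
    NC = np.cross(B - C, A - C)
    # Stack the candidates per component: row i holds the three
    # candidate values for component i of the normal.
    N3 = np.stack([NA, NB, NC], axis=1)
    # Sum each row in descending order of absolute value so the
    # rounding no longer depends on which vertex came first.
    order = np.argsort(-np.abs(N3), axis=1, kind='stable')
    return np.take_along_axis(N3, order, axis=1).sum(axis=1) / 3.0
```

Because the three cross products merely permute under a cyclic relabeling of the vertices, the sorted sum (and hence the returned normal) comes out identical for (A,B,C), (B,C,A), and (C,A,B).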
I made a nice realization today that the boolean code I’ve been working on can also be used to compute “solid sweeps” of meshes along line segments: the Minkowski sum of a polyhedron and a line segment.
The implementation is linear_sweep.m in gptoolbox. It works in 2D (using Triangle and the winding number to classify the boundary of the swept region):
and in 3D:
The 3D version is currently only robust if the sweep vector leaves the bounding box of the input shape. In those cases, I’m using a technique analogous to my robust booleans and it works quite well. But if the sweep vector is small, I resort to using TetGen and computing winding numbers, and TetGen will often fail. Hopefully this limitation won’t last long, since I have a clear idea how to handle small vectors. I just need to code it up.
Update: The 3D version is now robust, too.
There’s a long chain of work on symmetry detection algorithms for 3D shapes. The mathematics behind a lot of this work is quite interesting and well founded. See “Symmetry in 3D Geometry: Extraction and Applications” [Mitra et al. 2012] for a survey. But for the sake of implementing something quick and dirty with tools I have lying around, here’s a solution that largely ignores all this great work.
The workhorse of the solution I arrived at is rigid registration using iterative closest point (aka ICP or Insane Clown Posse). This algorithm is quite simple. To find the best rotation and translation that aligns point set B to point set A:
while not converged
  find the closest point a* in A to each point b in B
  find the closest point b* in B to each point a in A
  update B with the rigid transform (R,t) best aligning all pairs a* to b and b* to a
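As a rough illustration, the loop above might look like this in Python/NumPy (a minimal point-to-point sketch; the helper names best_rigid and icp are hypothetical, and this is not the implementation used for the results in this post):

```python
import numpy as np

def best_rigid(P, Q):
    # Kabsch/Procrustes: rotation R and translation t minimizing
    # sum ||R p + t - q||^2 over paired rows of P and Q.
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    S = np.eye(P.shape[1])
    S[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # rule out reflections
    R = Vt.T @ S @ U.T
    return R, cQ - R @ cP

def icp(A, B, iters=20):
    # Rigidly align point set B to point set A using symmetric
    # closest-point pairs, as in the pseudocode above.
    B = B.copy()
    for _ in range(iters):
        # all pairwise squared distances (fine for small point sets)
        D = ((B[:, None, :] - A[None, :, :]) ** 2).sum(axis=2)
        a_star = A[np.argmin(D, axis=1)]  # closest a* to each b
        b_star = B[np.argmin(D, axis=0)]  # closest b* to each a
        R, t = best_rigid(np.vstack([B, b_star]), np.vstack([a_star, A]))
        B = B @ R.T + t
    return B
```

Real implementations use a spatial data structure (k-d tree) instead of the dense distance matrix, but the structure of the iteration is the same.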
Rigid ICP itself has a wide literature of improvement strategies and optimization alternatives. Besides performance, the main concern is getting stuck in a local minimum. One solution is to consider a variety of initial rigidly transformed seeds for B.
Now, if I have a shape A and I take B = A, obviously I’ll get back the identity transformation after ICP. If A is rotationally symmetric and I seed B with rotated versions of A, I expect that the local minima I find with very low energy will be aligned with symmetry rotations.
But how can I use this to find planes of reflectional symmetry? Well, if I seed B with many rotations of a reflection of A, then low energy local minima of rigid ICP will reveal reflectional symmetry planes.
A vanilla brute force algorithm might try to sample all reflection planes directly. But this will waste time considering spans of planes far from optimal. Using rigid ICP can be thought of as a way to quickly converge on decent places to look.
My current implementation is a sort of particle swarm optimization. I generate k random seeds for the reflections of A, run rigid ICP on each for j iterations, pick the best i candidates, generate k*f seeds around these candidates, and so on.
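To make the reflect-then-register idea concrete, here is a minimal Python/NumPy sketch of a single seed (not the particle swarm; the helper names best_rigid and icp_transform and the synthetic shape are my own). Assuming a shape symmetric about a plane x = c, reflecting across x = 0 and rigidly registering the reflection back onto the shape should recover a translation by 2c, i.e. the plane offset c:

```python
import numpy as np

def best_rigid(P, Q):
    # Kabsch: rigid (R,t) minimizing sum ||R p + t - q||^2 over pairs.
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    S = np.eye(P.shape[1])
    S[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ S @ U.T
    return R, cQ - R @ cP

def icp_transform(A, B, iters=10):
    # Closest-point rigid registration of B onto A, returning the
    # accumulated transform (R_tot, t_tot): B -> B @ R_tot.T + t_tot.
    R_tot, t_tot = np.eye(A.shape[1]), np.zeros(A.shape[1])
    Bc = B.copy()
    for _ in range(iters):
        D = ((Bc[:, None, :] - A[None, :, :]) ** 2).sum(axis=2)
        R, t = best_rigid(Bc, A[np.argmin(D, axis=1)])
        Bc = Bc @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot

# Hypothetical symmetric shape: random points plus their mirror about x = c
rng = np.random.default_rng(1)
c = 0.01
P = rng.random((25, 3))
Pm = P * np.array([-1.0, 1.0, 1.0]) + np.array([2 * c, 0.0, 0.0])
A = np.vstack([P, Pm])
# Seed B with a reflection of A (here across x = 0, no extra rotation)
B = A * np.array([-1.0, 1.0, 1.0])
R, t = icp_transform(A, B)
# The composite map x -> R(reflect(x)) + t is (nearly) the symmetry
# reflection; with R ~ identity, the recovered plane offset is t[0] / 2.
```

In the full algorithm, many rotated reflection seeds are registered this way and the low-energy results are kept as candidate symmetry planes.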
Here’re a couple visualizations of the algorithm in progress:
Implementing this with straight MATLAB, you just need procrustes (or gptoolbox’s fit_rigid.m; see its use in my icp.m implementation). I guess I’ll call it Iterative Closest Upside-down Point (ICUP).
Given a URL to a somevid.com video like http://somevid.com/YvEkX82q2zH9Mbo6y8VX, here’s how you can rearrange the URL to download the video as an mp4:
wget $(echo "http://somevid.com/YvEkX82q2zH9Mbo6y8VX" | sed -e "s/com\/\(.*\)/com\/transcodes\/\1?format=1\&chunk=0/") -O output.mp4
If it’s a long video, you’ll probably need to increment to chunk=1 and so on, and then stitch the video together.
Sometimes I’ll use Eigen types with bool as the Scalar type. This invites “gotchas” when I assume the matrices will work like logicals do in MATLAB. Most immediately, sum of logicals in MATLAB will return the integer number of trues (actually it returns it as a double type). In Eigen, sum (not surprisingly) just calls operator+ and casts the result back to the Scalar type, bool.

So rather than:
Matrix<bool,4,1> B;
B << true,false,true,true;
B.sum(); // will just return true
if you want to compute the number of true entries use one of:
B.count();
std::count(B.data(),B.data()+B.size(),true);
Things get trickier if you’re trying to use more advanced accessors. Consider trying to find the number of true entries per row: a row-wise sum hits the same cast-back-to-bool problem. You could write B.cast<int>().rowwise().sum(), but this casts the entire matrix first to int. Or just roll up the loop over the rows.
MATLAB has only rudimentary support for visualizing data defined over a tetrahedral mesh. I usually use external calls to medit. Today I needed to implement slicing through a tet mesh for a different application and noticed that it’s also useful for visualization. I added slice_tets.m to gptoolbox. The slicing works in an obvious, yet vectorized way. The way a plane passes through a tet (if at all) can be described by the number of vertices on either side of the plane: 0-4, 1-3, 2-2, 3-1, or 4-0. The 0-4 and 4-0 cases don’t add anything to the slice and are quickly ignored. The 1-3 and 3-1 cases add a single triangle to the slice, and if we categorize the vertices by their signed distance to the plane, then we only need to implement the 1-3 case and reverse the signs to handle the 3-1 case. Finally, the 2-2 case adds a quad (i.e. two triangles) to the slice, and we should be careful to flip the orientation depending on which vertices end up on either side.
My vectorized MATLAB identifies the case each tet belongs to and computes triangles or split quads for all tets of each case in unison.
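The classification step is easy to vectorize. Here’s a hedged Python/NumPy sketch of just that step (the function name classify_tets is my own, not from slice_tets.m):

```python
import numpy as np

def classify_tets(V, T, p, n):
    # V: #V x 3 vertex positions; T: #T x 4 tet indices.
    # Signed distance from every vertex to the plane through p with normal n.
    d = (V - p) @ n
    # Count, per tet, how many of its four vertices lie strictly above
    # the plane: 0 or 4 -> skipped, 1 or 3 -> one triangle, 2 -> a quad.
    return (d[T] > 0.0).sum(axis=1)
```

The actual slicing then gathers all tets of each case and emits their triangles (or split quads) in one vectorized batch, interpolating intersection points along the crossing edges.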
The slicing is actually pretty fast. For a 600,000 tetrahedra mesh, slicing through the middle takes ~0.11 secs (~9fps). With a little more time spent, I may also keep track of the linear interpolation coordinates so that I can visualize a function defined over the tet-mesh onto the slicing plane.
Here I’m showing a slice through a Poisson equation inside this reclining knight: