Posts Tagged ‘siggraph’

Convincing LaTeXiT and Illustrator to use the new SIGGRAPH fonts

Saturday, May 20th, 2017

The SIGGRAPH LaTeX style changed to the Libertine font. Here are the steps to convince LaTeXiT to use the new style file and then to convince Illustrator to use the Libertine font for drag-and-drop math.

mkdir ~/Library/texmf/tex/latex/local/acmart.cls/
cp ~/Dropbox/boundary/Paper/acmart.cls ~/Library/texmf/tex/latex/local/acmart.cls

In LaTeXiT, open Preferences and add a new SIGGRAPH “Template” containing:

\documentclass[sigconf, review]{acmart}
\pagenumbering{gobble}

If you try to drag and drop these into Illustrator you’ll see that Illustrator has replaced the nice math font with Myriad or something silly.

Download the TTF Libertine font pack

Drag this into Font Book.app

cp /usr/local/texlive/2015/texmf-dist/fonts/type1/public/libertine/*.pfb ~/Library/Application\ Support/Adobe/Fonts/

Update: I also had to issue:

cp /usr/local/texlive/2015/texmf-dist/fonts/type1/public/txfonts/*.pfb ~/Library/Application\ Support/Adobe/Fonts/
cp /usr/local/texlive/2015/texmf-dist/fonts/type1/public/newtx/*.pfb ~/Library/Application\ Support/Adobe/Fonts/

If you see boxes with X’s replacing symbols after dragging and dropping from LaTeXiT, drag into the Finder instead (to create a .pdf file), then open that file directly; Illustrator will give a warning telling you which font it’s (still) missing.

Restart Adobe Illustrator

Rig Animation with a Tangible and Modular Input Device preprint + video

Thursday, May 5th, 2016

We’ve put up a project page and a preprint of our new SIGGRAPH 2016 paper “Rig Animation with a Tangible and Modular Input Device”, joint work with Oliver Glauser, Wan-Chun Ma, Daniele Panozzo, Otmar Hilliges, and Olga Sorkine-Hornung.

This is not just version 2.0 of our tangible and modular input device from 2014 (although the new hardware is totally awesome). In this paper we also present a new optimization for mapping joints and splitters to any industry-grade character rig. The optimization outputs instructions for assembling a device out of parts and then maps the device’s degrees of freedom to all parameters of the rig.

Abstract
We propose a novel approach to digital character animation, combining the benefits of tangible input devices and sophisticated rig animation algorithms. A symbiotic software and hardware approach facilitates the animation process for novice and expert users alike. We overcome limitations inherent to all previous tangible devices by allowing users to directly control complex rigs using only a small set (5–10) of physical controls. This avoids oversimplification of the pose space and excessively bulky device configurations. Our algorithm derives a small device configuration from complex character rigs, often containing hundreds of degrees of freedom, and a set of sparse sample poses. Importantly, only the most influential degrees of freedom are controlled directly, yet detailed motion is preserved based on a pose interpolation technique. We designed a modular collection of joints and splitters, which can be assembled to represent a wide variety of skeletons. Each joint piece combines a universal joint and two twisting elements, allowing its configuration to be sensed accurately. The mechanical design provides a smooth inverse-kinematics-like user experience and is not prone to gimbal lock. We integrate our method with the professional 3D software Autodesk Maya® and discuss a variety of results created with characters available online. Comparative user experiments show significant improvements over the closest state of the art in terms of accuracy and time in a keyframe posing task.

Mesh Arrangements for Solid Geometry preprint

Thursday, April 21st, 2016

mesh booleans

I’m very excited to publish my work with Qingnan Zhou, Eitan Grinspun, and Denis Zorin on mesh booleans at this year’s SIGGRAPH. The paper is titled “Mesh Arrangements for Solid Geometry”. The code is (and has been) in libigl, gptoolbox, and pymesh.

skinning course at SGP 2016

Wednesday, March 30th, 2016

I’ll be teaching our Skinning course again this year, this time at the IGS 2016 summer school (host of the Symposium on Geometry Processing). The rest of the summer school (aka workshop) line-up is an all-star cast. I look forward to seeing you there.

Code and presentation slides for Nested Cages

Monday, February 1st, 2016

nested cages teaser

We’ve released the source code and presentation slides for our paper Nested Cages, presented last November at SIGGRAPH Asia 2015.

Nested Cages project page

Friday, October 2nd, 2015

nested cages bunny

We’ve posted a project page for our upcoming SIGGRAPH Asia paper Nested Cages, a collaboration between Leonardo Sacht, Etienne Vouga and myself.

Abstract: Many tasks in geometry processing and physical simulation benefit from multiresolution hierarchies. One important characteristic across a variety of applications is that coarser layers strictly encage finer layers, nesting one another. Existing techniques such as surface mesh decimation, voxelization, or contouring distance level sets do not provide sufficient control over the quality of the output surfaces while maintaining strict nesting. We propose a solution that enables use of application-specific decimation and quality metrics. The method constructs each next-coarsest level of the hierarchy, using a sequence of decimation, flow, and contact-aware optimization steps. From coarse to fine, each layer then fully encages the next while retaining a snug fit. The method is applicable to a wide variety of shapes of complex geometry and topology. We demonstrate the effectiveness of our nested cages not only for multigrid solvers, but also for conservative collision detection, domain discretization for elastic simulation, and cage-based geometric modeling.

Accompanying video for “Nested Cages”, SIGGRAPH Asia 2015

Tuesday, September 22nd, 2015

Here’s the accompanying video for the upcoming SIGGRAPH Asia 2015 paper “Nested Cages” that I’ve been working on with Leonardo Sacht and Etienne Vouga:

Abstract:
Many tasks in geometry processing and physical simulation benefit from multiresolution hierarchies. One important characteristic across a variety of applications is that coarser layers strictly encage finer layers, nesting one another. Existing techniques such as surface mesh decimation, voxelization, or contouring distance level sets do not provide sufficient control over the quality of the output surfaces while maintaining strict nesting. We propose a solution that enables use of application-specific decimation and quality metrics. The method constructs each next-coarsest level of the hierarchy, using a sequence of decimation, flow, and contact-aware optimization steps. From coarse to fine, each layer then fully encages the next while retaining a snug fit. The method is applicable to a wide variety of shapes of complex geometry and topology. We demonstrate the effectiveness of our nested cages not only for multigrid solvers, but also for conservative collision detection, domain discretization for elastic simulation, and cage-based geometric modeling.

You can find the paper on my site.

Libigl tutorial entry for physics upsampling with biharmonic coordinates

Tuesday, July 28th, 2015

I’ve added a tutorial entry describing biharmonic coordinates and linking to an example using it to run a “physics” simulation on a coarse mesh and upsample it to a high-resolution mesh:

octopus physics biharmonic coordinates

Read more about it in our upcoming SIGGRAPH paper Linear Subspace Design for Real-Time Shape Deformation.

Physical simulation “skinning” with biharmonic coordinates

Thursday, June 25th, 2015

This is a cute little example we didn’t have time to put into our upcoming SIGGRAPH paper Linear Subspace Design for Real-Time Shape Deformation.

octopus upsampling

In this example I first decimate the high-resolution (blue) octopus to create the low-res orange octopus. I’m careful to use a decimation that keeps vertices at (or at least near) their original positions. Then I compute “biharmonic coordinates” for vertices of the blue mesh with respect to the vertices of the coarse orange mesh. Notice that the orange mesh does not need to be an enclosing cage. Rather, its points just need to lie in or on the blue octopus (there are even ways to deal with points outside the blue octopus, but I’m skipping that for now). These weights are computed in gptoolbox with:

% index into blue_tet_V of the fine vertex nearest each coarse (orange) vertex
b = knnsearch(blue_tet_V,orange_V);
W = biharmonic_coordinates(blue_tet_V,blue_tet_T,b);
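MATLAB’s knnsearch has a direct analogue in scipy’s cKDTree, if you want to prototype the closest-point lookup in Python. A toy sketch (all arrays below are made up for illustration):

```python
import numpy as np
from scipy.spatial import cKDTree

# Made-up stand-ins for the coarse (orange) and fine (blue) vertex arrays.
orange_V = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
blue_tet_V = np.array([[0.01, 0.02, 0.0],
                       [0.98, 0.01, 0.0],
                       [0.02, 0.97, 0.0],
                       [0.30, 0.30, 0.0]])

# For each coarse vertex, the index of its nearest fine vertex;
# these indices play the role of the control handles b.
_, b = cKDTree(blue_tet_V).query(orange_V)
assert b.tolist() == [0, 1, 2]
```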

Then I run a sort of co-rotational linear-elasticity simulation on the orange mesh with collisions activated for the floor. The blue mesh tracks the orange mesh by simply interpolating the orange vertex positions via the coordinates:

blue_V = W * orange_V;

Simple as that. Smooth physics “skinning” with a single matrix-matrix multiplication. This works because of the affine precision of our coordinates and their minimization of the squared-Laplacian energy. The “standard” approach to physics “skinning” would be to create the orange tet mesh so that it envelops the blue mesh and then use piecewise-linear (i.e. non-smooth) interpolation to move the blue mesh vertices.
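The affine-precision claim is easy to check numerically: because each row of the coordinates sums to one, transforming the coarse vertices by an affine map transforms the interpolated fine vertices by exactly the same map. A toy numpy sketch (the weight matrix below is made up, not actual biharmonic coordinates):

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up weight matrix whose rows sum to one, as biharmonic
# coordinates' rows do (partition of unity).
W = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6],
              [1/3, 1/3, 1/3]])
orange_V = rng.random((3, 3))     # coarse (orange) vertex positions
blue_V = W @ orange_V             # fine (blue) vertices via the coordinates

# Apply an arbitrary affine map x -> A x + t to the coarse vertices.
A = rng.random((3, 3))
t = rng.random(3)
blue_from_transformed = W @ (orange_V @ A.T + t)

# Affine precision: transforming the coarse mesh transforms the
# interpolated fine mesh by exactly the same map.
assert np.allclose(blue_from_transformed, blue_V @ A.T + t)
```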

Update: I should make it super clear that this is not a subspace (aka model reduction) simulation. The simulation of the orange mesh is not aware of the blue mesh at all. Sometimes subspace physics would be more appropriate, though other times pushing the subspace through all parts of the simulation pipeline is impractical/impossible. Hence this prototypical result.

For posterity here’s the entire example code:

% load hi-res mesh
[uV,uF] = load_mesh('~/Dropbox/models/octopus-med.ply');
% create low-res mesh by decimating the hi-res mesh
[~,CV,CF] = qslim(uV,uF,900,'QSlimFlags','-O 0');
[CV,CF] = meshfix(CV,CF);
% Find closest points
b = knnsearch(uV,CV);
CV = uV(b,:);

% Tet-mesh fine resolution mesh and compute weights
[dV,dT,dF] = tetgen(uV,uF,'Flags','-q2');
W = biharmonic_coordinates(dV,dT,b);
W = W(1:size(uV,1),:);

[VV,VT,VF] = tetgen(CV,CF,'Flags','-q100 -Y');
off = mean(VV);
VV=bsxfun(@minus,VV,off);
UU = bsxfun(@plus,VV,[0 0 1]);
vel = zeros(size(UU));
data = [];
M = massmatrix(VV,VT);
% This normalization is very important
M = M/max(M(:));
dt = 1e-2;

t = tsurf(VF,UU, ...
  'FaceColor',[0.8 0.5 0.2], ...
  'SpecularStrength',0,'DiffuseStrength',0.3,'AmbientStrength',0.7);
%t.Visible = 'off';
l = light('Style','local','Position',[5 0 20]);
l2 = light('Style','infinite','Position',[-1 -1 2]);
hold on;
up = @(UU) bsxfun(@plus,[1 0 0],W*UU(1:numel(b),:));
u = tsurf(uF,up(UU),'EdgeColor','none', ...
  'FaceVertexCData',repmat([0.3 0.4 1.0],size(uV,1),1), ...
  fphong,'SpecularStrength',0,'DiffuseStrength',0.3,'AmbientStrength',0.7);
shadows = false;
if shadows
  e = -1e-4;
  QV = [-0.8,-0.5 e; -0.8  0.5 e; 1.5  0.5 e;1.5,-0.5 e];
  QQ = [1 2 3 4];
  trisurf(QQ,QV(:,1),QV(:,2),QV(:,3),'EdgeColor','none','FaceColor',[0.7 0.7 0.7]);
  st = add_shadow(t,l,'Ground',[0 0 -1 0]);
  su = add_shadow(u,l,'Ground',[0 0 -1 0]);
  st.FaceColor = [0.5 0.5 0.5];
  su.FaceColor = [0.5 0.5 0.5];
  set(gca,'Visible','off');
  set(gcf,'Color','w');
end
hold off;
axis equal;
axis([-0.8 1.8 -0.5 0.5 0 1.5]);
axis manual;
camproj('persp');
T = get(gca,'tightinset');
set(gca,'position',[T(1) T(2) 1-T(1)-T(3) 1-T(2)-T(4)]);

while true
  dur = 0;
  UU0 = UU;
  [UU,data] = arap(VV,VT,[],[], ...
    'Energy','elements', ...
    'V0',UU0, ...
    'Data',data, ...
    'Dynamic',M*repmat([0 0 -9.8],size(VV,1),1), ...
    'MaxIter',100, ...
    'TimeStep',dt, ...
    'Vm1',UU-vel);
  vel = UU-UU0;
  C = UU(:,3)<0;
  UU(C,3) = -UU(C,3);
  vel(C,3) = -vel(C,3);
  dur = dur+dt;
  t.Vertices = UU;
  u.Vertices = up(UU);

  if shadows
    light_pos = [l.Position strcmp(l.Style,'local')];
    ground = [0 0 -1 0];
    d = ground * light_pos';
    shadow_mat = d*eye(4) - light_pos'*ground;
    H = [t.Vertices ones(size(t.Vertices,1),1)]*shadow_mat';
    st.Vertices = bsxfun(@rdivide,H(:,1:3),H(:,4));
    H = [u.Vertices ones(size(u.Vertices,1),1)]*shadow_mat';
    su.Vertices = bsxfun(@rdivide,H(:,1:3),H(:,4));
  end

  drawnow;
end
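The floor handling buried in the loop above (reflect any vertex with negative z and flip the z-component of its velocity) is about the simplest possible collision response. Isolated as a Python/numpy sketch:

```python
import numpy as np

# Two vertices; the second has fallen through the floor (z < 0).
UU = np.array([[0.2, 0.0,  0.5],
               [0.1, 0.3, -0.2]])
vel = np.array([[0.0, 0.0, -1.0],
                [0.0, 0.0, -1.0]])

# Reflect penetrating positions across z = 0 and flip their z-velocity.
C = UU[:, 2] < 0
UU[C, 2] = -UU[C, 2]
vel[C, 2] = -vel[C, 2]
```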

Implementing “Linear Subspace Design for Real-Time Shape Deformation”

Tuesday, May 12th, 2015

Here’s a way to implement the linearly precise smoothness energy using gptoolbox in a manner consistent with the paper.

First construct the usual cotangent Laplacian L:

C = cotangent(V,F);
L = sparse( ...
  F(:,[2 3 1 3 1 2 2 3 1 3 1 2]), ...
  F(:,[3 1 2 2 3 1 2 3 1 3 1 2]), ...
  [C C -C -C], ...
  size(V,1),size(V,1));

This throws the cotangent of the angle across from each half-edge (i,j) to the off-diagonal entries L(i,j) and L(j,i) and minus that value to the diagonal entries L(i,i) and L(j,j).
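If it helps to see the same scatter outside of MATLAB, here’s a toy sketch in Python with scipy.sparse (the cotangents are computed from scratch for a tiny two-triangle mesh, so this is an illustration rather than a port of gptoolbox):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Toy mesh: unit square split into two triangles.
V = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
F = np.array([[0, 1, 2], [0, 2, 3]])

# Half the cotangent of the angle at corner k of each face.
C = np.empty(F.shape)
for k in range(3):
    a = V[F[:, (k + 1) % 3]] - V[F[:, k]]
    b = V[F[:, (k + 2) % 3]] - V[F[:, k]]
    cross = a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]
    C[:, k] = 0.5 * np.einsum('ij,ij->i', a, b) / np.abs(cross)

# Scatter: +C to off-diagonals L(i,j), L(j,i) and -C to diagonals
# L(i,i), L(j,j), where (i,j) is the edge opposite corner k.
rows, cols, vals = [], [], []
for k in range(3):
    i, j = F[:, (k + 1) % 3], F[:, (k + 2) % 3]
    rows += [i, j, i, j]
    cols += [j, i, i, j]
    vals += [C[:, k], C[:, k], -C[:, k], -C[:, k]]
n = len(V)
L = coo_matrix((np.concatenate(vals),
                (np.concatenate(rows), np.concatenate(cols))),
               shape=(n, n)).tocsr()  # duplicates are summed

# Constant functions lie in the null space: every row sums to zero.
assert np.allclose(L @ np.ones(n), 0)
```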

You could also just use:

L = cotmatrix(V,F);

To compute the normal derivative matrix N, first find all of the half-edges on the boundary and the subscript indices into F so that if on_b_f(1) = f and on_b_c(1) = c then we know the half-edge across from vertex F(f,c) is on the boundary.

[~,on_b] = on_boundary(F);
[on_b_f,on_b_c] = find(on_b);
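Under the hood, a half-edge is on the boundary exactly when its undirected edge appears in only one triangle. A small Python sketch of that test (a made-up stand-in for gptoolbox’s on_boundary):

```python
import numpy as np
from collections import Counter

# Toy mesh: two triangles sharing the undirected edge (0, 2).
F = np.array([[0, 1, 2], [0, 2, 3]])

# The edge opposite corner k of face f, as a sorted (undirected) pair.
edges = [tuple(sorted((F[f, (k + 1) % 3], F[f, (k + 2) % 3])))
         for f in range(len(F)) for k in range(3)]
counts = Counter(edges)

# on_b[f, k] is True when the edge across from corner k of face f
# lies on the boundary (i.e. appears in exactly one triangle).
on_b = np.array([counts[e] == 1 for e in edges]).reshape(len(F), 3)
assert on_b.tolist() == [[True, False, True], [True, True, False]]
```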

Now build lists of triplets (p,i,q) so that the half-edge i→p is on the boundary and q is opposite it:

P = F(sub2ind(size(F),on_b_f,mod(on_b_c+1,3)+1));
I = F(sub2ind(size(F),on_b_f,mod(on_b_c,3)+1));
Q = F(sub2ind(size(F),on_b_f,on_b_c));

Collect the cotangents across from vertices p and i and throw them accordingly at entries N(p,q), N(p,p), N(p,i), N(i,q), N(i,i), and N(i,p):

CP = C(sub2ind(size(F),on_b_f,mod(on_b_c+1,3)+1));
CI = C(sub2ind(size(F),on_b_f,mod(on_b_c,3)+1));
N = sparse( ...
  [P P P P I I I I], ...
  [Q P Q I Q I Q P], ...
  [-CI CI -CP CP -CP CP -CI CI], ...
  size(V,1),size(V,1));

Now we can compute the “linearly precise Laplacian” K:

K = L+N;

Build the per-vertex mass matrix, and square our Laplacian to arrive at the quadratic form A of the bilaplacian smoothness energy:

M = massmatrix(V,F);
A = K' * (M\ K);
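As a sanity check, the resulting quadratic form should be symmetric positive semi-definite with constants in its null space. A toy sketch in Python/scipy (the K below is made up: non-symmetric with zero row sums, like L+N, but not computed from any mesh):

```python
import numpy as np
from scipy.sparse import csr_matrix, diags

# Made-up stand-in for K = L + N: non-symmetric, rows summing to zero.
K = csr_matrix(np.array([[ 2.0, -1.0, -1.0],
                         [-2.0,  3.0, -1.0],
                         [-1.0, -1.0,  2.0]]))
# Diagonal (lumped) mass matrix.
M = diags([1.0, 2.0, 1.5])

# A = K' M^{-1} K; with a diagonal M the solve is just reciprocal scaling.
Minv = diags(1.0 / M.diagonal())
A = (K.T @ (Minv @ K)).toarray()

# Symmetric, PSD, and constants are in the null space, as a
# smoothness energy's quadratic form should be.
assert np.allclose(A, A.T)
assert np.allclose(A @ np.ones(3), 0)
assert (np.linalg.eigvalsh(A) > -1e-12).all()
```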