## Posts Tagged ‘geometry’

### Thingi10K: A Dataset of 10,000 3D-Printing Models

Tuesday, May 17th, 2016

Qingnan “James” Zhou and I have released a technical report detailing the contents and methodology behind Thingi10K, our dataset of ten thousand 3D-printing models.

**Abstract**
Empirically validating new 3D-printing related algorithms and implementations requires testing data representative of inputs encountered in the wild. An ideal benchmarking dataset should not only draw from the same distribution of shapes people print in terms of class (e.g., toys, mechanisms, jewelry), representation type (e.g., triangle soup meshes) and complexity (e.g., number of facets), but should also capture problems and artifacts endemic to 3D printing models (e.g., self-intersections, non-manifoldness). We observe that the contextual and geometric characteristics of 3D printing models differ significantly from those used for computer graphics applications, not to mention standard models (e.g., Stanford bunny, Armadillo, Fertility). We present a new dataset of 10,000 models collected from an online 3D printing model-sharing database. Via analysis of both geometric (e.g., triangle aspect ratios, manifoldness) and contextual (e.g., licenses, tags, classes) characteristics, we demonstrate that this dataset represents a more concise summary of real-world models used for 3D printing compared to existing datasets. To facilitate future research endeavors, we also present an online query interface to select subsets of the dataset according to project-specific characteristics. The complete dataset and per-model statistical data are freely available to the public.

Wednesday, March 30th, 2016

I’ll be teaching our Skinning course again this year, this time at the IGS 2016 summer school (host of the Symposium on Geometry Processing). The rest of the summer school (a.k.a. workshop) line-up is an all-star cast. I look forward to seeing you there.

### Boolean operations using generalized winding numbers

Tuesday, February 2nd, 2016

I posted a little tech report about how to use the generalized winding number for constructive solid geometry (CSG) style boolean operations (union, intersection, difference, etc.) on nasty meshes.

Code implementing this has existed for a little while now in the mesh_boolean_winding_number function of gptoolbox.
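For intuition, here's a NumPy sketch of the inside/outside test that powers these booleans: the generalized winding number of a query point, computed by summing signed solid angles (the Van Oosterom–Strackee formula) over the faces. The function name and the tetrahedron test mesh are my own illustration, not gptoolbox's API.

```python
import numpy as np

def winding_number(V, F, p):
    """Generalized winding number of point p w.r.t. triangle soup (V, F):
    sum of signed solid angles over faces, divided by 4*pi."""
    a, b, c = (V[F[:, i]] - p for i in range(3))
    la, lb, lc = (np.linalg.norm(x, axis=1) for x in (a, b, c))
    det = np.einsum('ij,ij->i', a, np.cross(b, c))
    denom = (la * lb * lc
             + np.einsum('ij,ij->i', a, b) * lc
             + np.einsum('ij,ij->i', b, c) * la
             + np.einsum('ij,ij->i', c, a) * lb)
    # Van Oosterom-Strackee: tan(solid_angle/2) = det / denom
    return np.sum(2.0 * np.arctan2(det, denom)) / (4.0 * np.pi)

# Unit tetrahedron with outward-oriented faces
V = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
F = np.array([[0, 2, 1], [0, 3, 2], [0, 1, 3], [1, 2, 3]])
w_in = winding_number(V, F, np.array([0.1, 0.1, 0.1]))   # ~1: inside
w_out = winding_number(V, F, np.array([2., 2., 2.]))     # ~0: outside
```

For a boolean like A ∪ B, a point is kept when its winding number with respect to either mesh exceeds ½; because the winding number degrades gracefully on open or self-intersecting input, the same classification still makes sense on nasty meshes.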

### Mesh untangling in gptoolbox

Thursday, June 11th, 2015

I’ve reimplemented the first half of our paper, “Consistent Volumetric Discretizations Inside Self-Intersecting Surfaces” [Sacht et al. 2013], as the untangle function in my gptoolbox. Our paper expands on an idea from “Interference-Aware Geometric Modeling” [Harmon et al. 2011]: given a self-intersecting mesh, run a mean-curvature flow until all self-intersections disappear, then reverse the flow with collision detection and response activated. My new implementation uses the conformalized mean curvature flow of “Can Mean-Curvature Flow Be Made Non-Singular?” [Kazhdan et al. 2012] and then reverses the flow using el topo, “Robust Topological Operations for Dynamic Explicit Surfaces” [Brochu & Bridson 2009]. For the reverse steps, I simply filter the reversed flow “velocities”; then, once the mesh has returned to its original state, I run some iterations of ARAP gradient descent (with collision detection and response via el topo) to restore the intrinsic shape of the mesh.

Here’s the result on an example with a rather drastic self-intersection.

For simpler models with lots of little self-intersections (like an otherwise nice subdivision surface), this method works very well. Unlike meshfix, this will not delete or remesh any part of the model.
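As a toy illustration of the “flow until the artifact disappears” half of this pipeline, here's a NumPy sketch that smooths a dented icosahedron with a uniform graph Laplacian. This crude explicit flow only stands in for the conformalized cotangent flow of [Kazhdan et al. 2012], and the collision-aware reversal via el topo is omitted entirely.

```python
import numpy as np

# Icosahedron vertices: cyclic permutations of (0, +/-1, +/-phi)
phi = (1 + 5 ** 0.5) / 2
V = np.array([[-1, phi, 0], [1, phi, 0], [-1, -phi, 0], [1, -phi, 0],
              [0, -1, phi], [0, 1, phi], [0, -1, -phi], [0, 1, -phi],
              [phi, 0, -1], [phi, 0, 1], [-phi, 0, -1], [-phi, 0, 1]], float)
# Every vertex pair at the minimal distance 2 is an edge
D = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=2)
A = np.isclose(D, 2.0).astype(float)   # adjacency matrix (5 neighbors each)

V[0] *= 1.3  # "dent" the mesh: push one vertex off the sphere

def spread(V):
    # deviation of vertex radii from their mean (0 for a perfect sphere)
    r = np.linalg.norm(V, axis=1)
    return np.std(r / r.mean())

s0 = spread(V)
for _ in range(20):                # explicit smoothing steps
    avg = A @ V / A.sum(1, keepdims=True)
    V = 0.5 * V + 0.5 * avg        # move halfway toward neighbor average
s1 = spread(V)                     # the dent flows away: s1 < s0
```

In the real untangle, each flow step is recorded so that it can be played back in reverse with collision detection and response turned on.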

### Accompanying video for “Linear Subspace Design for Real-Time Shape Deformation”

Thursday, May 7th, 2015

Here’s the accompanying video for SIGGRAPH 2015 paper “Linear Subspace Design for Real-Time Shape Deformation”.

### Preprint for “Linear Subspace Design for Real-Time Shape Deformation”

Wednesday, May 6th, 2015

Our upcoming SIGGRAPH 2015 paper “Linear Subspace Design for Real-Time Shape Deformation” is online now. This is joint work with Yu Wang, Jernej Barbič and Ladislav Kavan. One way to interpret our contribution is as a method for computing shape-aware (rather than cage-aware) generalized barycentric coordinates without a cage. Another is as a generalization of linear blend skinning that incorporates transformations at regions/bones and translations at point handles. The distinguishing characteristic of our weight computation is its speed. The choice of handles (the choice of linear subspace) is now interactive and becomes part of the design process. It absorbs user creativity just as manipulating the handles does.
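The linear blend skinning being generalized here is v′ = Σᵢ wᵢ (Aᵢ v + tᵢ). A minimal 2D NumPy sketch, with ad hoc linear weights standing in for the paper's optimized shape-aware weights:

```python
import numpy as np

# Rest-pose vertices of a 2D "bar"
V = np.array([[0., 0], [1, 0], [2, 0], [3, 0]])

def affine(R, c):
    """Affine handle that rotates by R about center c: v -> R v + t."""
    return R, c - R @ c

# Two handles: identity, and a 90-degree rotation about (3, 0)
R90 = np.array([[0., -1], [1, 0]])
handles = [affine(np.eye(2), np.zeros(2)), affine(R90, np.array([3., 0]))]

# Ad hoc weights varying linearly along x (rows sum to 1); the paper
# instead solves a fast optimization for smooth shape-aware weights
W = np.stack([1 - V[:, 0] / 3, V[:, 0] / 3], axis=1)

# Linear blend skinning: weighted sum of per-handle affine transforms
V_def = sum(W[:, [i]] * (V @ A.T + t) for i, (A, t) in enumerate(handles))
```

The endpoints behave as expected: the vertex at (0,0) is fully governed by the identity handle and stays put, while the vertex at (3,0) sits at the second handle's rotation center and also stays put; vertices in between blend the two transforms.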

### Closed mesh of piece-wise constant height field surface from an image

Wednesday, May 6th, 2015

I pushed a little function box_height_field.m to gptoolbox. This function creates a height field from an image where each pixel becomes an extruded square (rather than just a point/vertex). There are two modes: a fast, vectorized version which doesn’t add the vertices needed to close the mesh (though the underlying surface is still “water-tight”), and a slower version which really creates a perfectly closed height field. Here’s the result of the second one on the red channel of the Hans Hass image:

im = im2double(imresize(rgb2gray(imread('hans-hass.jpg')),0.1));
[V,F] = box_height_field(im);


Here’s the same result but computed after quantizing the colors:

im = round(im/0.25)*0.25;
[V,F] = box_height_field(im);


You can clearly see the piecewise-constant regions. Using remesh_planar_patches you can reduce the number of vertices without losing any information:

[W,G] = remesh_planar_patches(V,F);
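The per-pixel extrusion above can be sketched in a few lines of NumPy. This hypothetical box_height_field_tops only emits the top quads of each pixel's box; the real box_height_field also stitches vertical walls and a bottom so that the mesh closes up.

```python
import numpy as np

def box_height_field_tops(im):
    """Top faces of a per-pixel box height field: each pixel (i, j) with
    value h becomes its own quad (two triangles) at height h.
    (Walls and bottom, needed for a closed mesh, are omitted here.)"""
    n, m = im.shape
    V, F = [], []
    for i in range(n):
        for j in range(m):
            h = im[i, j]
            k = len(V)  # index of this pixel's first corner
            V += [[i, j, h], [i + 1, j, h], [i + 1, j + 1, h], [i, j + 1, h]]
            F += [[k, k + 1, k + 2], [k, k + 2, k + 3]]
    return np.array(V, float), np.array(F)

# 2x2 "image": 4 vertices and 2 triangles per pixel
V, F = box_height_field_tops(np.array([[0.0, 1.0], [0.5, 0.25]]))
```

Because corners are duplicated per pixel, neighboring quads at different heights stay disconnected, which is exactly where the closed version inserts the vertical wall quads.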


### Stable triangle normals

Wednesday, February 11th, 2015

Here’s one way to compute a triangle’s normal using floating point arithmetic that is stable with respect to the order the triangle’s vertices are given in. To demonstrate the issue, consider:

% Vertex positions
A = [1/8 1 1/6];
B = [1/3 1/5 1/7];
C = [1/4 1/9 1/2];
% normals via cross product of different edge vectors
NB = cross(A-B,C-B);
NC = cross(B-C,A-C);
NA = cross(C-A,B-A);
NA-NB
NB-NC


producing

ans =
0            0  -2.7756e-17
ans =
5.5511e-17   1.3878e-17            0


Computing the normals three different ways, I get three different floating point errors. The right thing to do, I guess, is to analyze the vertex positions and design a normal computation that minimizes floating point error (the solution hopefully being stable via uniqueness). I took a lazier route: take a careful average of the three solutions. This might also have the effect of cutting down the floating point error a bit. But at least it’s stable.

Unfortunately it’s not enough to naively sum NA, NB, and NC entry-wise and divide by 3. The summation could incur roundoff error depending on the order of the three arguments. So first we must sort the arguments. If we’re going to the trouble of sorting, we might as well sort in a way that reduces roundoff error: descending order of absolute value. The MATLAB syntax for this is a little clunky:

NABC = [NA(:) NB(:) NC(:)];
% sort each coordinate's three contributions by descending absolute value
[~,I] = sort(abs(NABC),2,'descend');
sNABC = NABC(sub2ind(size(NABC),repmat((1:size(NABC,1))',1,size(NABC,2)),I));
N = reshape(sum(sNABC,2)/3,size(NA));
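Here's the same trick in NumPy, which also makes the stability claim easy to check: any cyclic relabeling of the triangle's vertices yields a bitwise-identical normal.

```python
import numpy as np

def stable_normal(A, B, C):
    """Average of the three edge-pair cross products, with each
    coordinate's three contributions summed in descending order of
    absolute value so the result is independent of vertex order."""
    N = np.stack([np.cross(A - B, C - B),
                  np.cross(B - C, A - C),
                  np.cross(C - A, B - A)])      # 3 x 3, one normal per row
    order = np.argsort(-np.abs(N), axis=0)      # per-coordinate sort
    Ns = np.take_along_axis(N, order, axis=0)
    return Ns.sum(axis=0) / 3.0

A = np.array([1/8, 1, 1/6])
B = np.array([1/3, 1/5, 1/7])
C = np.array([1/4, 1/9, 1/2])
n1 = stable_normal(A, B, C)
n2 = stable_normal(B, C, A)   # cyclic permutations agree exactly
n3 = stable_normal(C, A, B)
```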


### Convex hull volume ratios on closed shapes in SHREC database

Sunday, October 13th, 2013

I wrote a script to compute the ratio of the volume inside a given model to the volume of its convex hull. This ratio may be used for various things, for example as a measure of weak convexity. Here’re the average results applied to the SHREC target models database.

Ideally I would have a database of solid meshes (closed, self-intersection free). Then I could simply compute the volume inside the shape and divide by the volume of the convex hull. Unfortunately, of the ~720 SHREC models only 91 are closed and of those only 13 are self-intersection free.

So in the absence of a large set of solid meshes, I opted to consider this convex hull volume ratio for closed meshes. Here are the 91 closed meshes in the SHREC dataset:

In the presence of self-intersections, determining the interior volume is tricky. We could create a conforming Delaunay tessellation of the convex hull and then, for each tetrahedron, determine whether it is inside (say, by checking that the winding number at its centroid is close to 1). Unfortunately, no truly robust algorithm for conforming Delaunay tessellation exists.

Instead, I sampled a regular grid scaled to the bounding box of the shape, threw out the samples outside the convex hull, and then determined which of the remaining samples were inside the input shape (again by checking that the winding number is close to 1).

Undoubtedly this is only an approximation, but given that the input shapes are discrete and finite, it should converge as the regular grid resolution increases.
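Here's a NumPy/SciPy sketch of the grid-sampling estimate. Everything here is made up for illustration: an analytic torus with a closed-form inside test stands in for a SHREC mesh and its winding-number test, and convex-hull membership is done with scipy.spatial.Delaunay point location.

```python
import numpy as np
from scipy.spatial import Delaunay

# Torus: centerline radius R about the z-axis, tube radius r
R, r = 2.0, 0.5

def inside_torus(P):
    # closed-form inside test (stand-in for a winding-number query)
    d = np.sqrt(P[:, 0] ** 2 + P[:, 1] ** 2) - R
    return d ** 2 + P[:, 2] ** 2 < r ** 2

# Sample the torus surface and tetrahedralize its convex hull
u, v = np.meshgrid(np.linspace(0, 2 * np.pi, 64),
                   np.linspace(0, 2 * np.pi, 32))
S = np.stack([(R + r * np.cos(v)) * np.cos(u),
              (R + r * np.cos(v)) * np.sin(u),
              r * np.sin(v)], axis=-1).reshape(-1, 3)
hull = Delaunay(S)  # Delaunay tets fill the convex hull of S

# Regular grid over the bounding box; count samples in shape vs. hull
g = np.linspace(-(R + r), R + r, 64)
X, Y, Z = np.meshgrid(g, g, g)
P = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
in_hull = hull.find_simplex(P) >= 0
ratio = inside_torus(P).sum() / in_hull.sum()
```

For this torus the exact ratio is around 0.55 (torus volume 2π²Rr² over the hull's solid-of-revolution volume), and the grid estimate converges there as the resolution grows.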

Here’s a plot of the average ratio of inner volume to convex hull volume as a function of the number of grid points in a semi-log plot:

So, for this set of shapes (which spans quite a variety) the ratio is roughly 30%.

Here’s a histogram for the finest resolution I tested (256x256x256 = 16777216 samples):

Update: Here’re the results from the watertight models in the Princeton Surface Correspondence Benchmark. There the average is more like 38%. Here’re the same plots.

### Geometric proof for sum of arctangent identity

Friday, May 24th, 2013

Today a colleague showed me an interesting trigonometric identity:

atan(α) + atan(β) = atan( (α + β) / (1 – αβ) )

We couldn’t come up with a geometric interpretation immediately, but I found a pretty nice one. Begin by recalling what atan(α) measures: the angle θα between the x-axis and some point (xα,yα):

atan(α) = atan(yα/xα) = θα

The distance of the point (xα,yα) to the origin doesn’t matter, since the angle is the same. Thus we can always normalize (xα,yα) to unit length, or to any other convenient length.

When we look at the identity at the top then, we see that the left-hand side is really summing angles:

atan(α) + atan(β) = atan(yα/xα) + atan(yβ/xβ) = θα + θβ = θ? = atan(y?/x?)

And this means that the numerator and denominator inside the atan on the right-hand side are telling us the y and x values, respectively, of some point (x?,y?) along the ray with angle θ? = θα + θβ.

To see how we arrive at these particular values for x? and y?, we can make a couple of assumptions. Without loss of generality, assume that xα = 1: if not, we can always divide both xα and yα by xα without changing θα, atan(yα/xα), or atan(α). Also w.l.o.g., assume that xβ = √(xα² + yα²), that is, the length of (xα,yα) as a vector; for short, let’s write this as xβ = ‖(xα,yα)‖. This illustration shows the vectors (xα,yα) and (xβ,yβ).

Now, notice that we can sum θα and θβ by rotating the vector (xβ,yβ) by θα and reading off the angle θ? = θα + θβ. Because we’ve chosen our lengths carefully, this is very easy.

Because xβ = ‖xαyα‖, we just need to move yβ units in the direction perpendicular to the vector (xα,yα). In other words:

(x?,y?) = (xα,yα) + yβ (xα,yα)⟂ / ‖(xα,yα)‖
        = (xα,yα) + yβ (−yα,xα) / ‖(xα,yα)‖
        = (1,yα) + yβ (−yα,1) / xβ

where we immediately simplify the known quantities.

Recall that:
α = yα / xα = yα
and
β = yβ / xβ,
and now consider each coordinate separately:

y? = yα + yβ/xβ = α + β
x? = 1 − yαyβ/xβ = 1 − αβ

So indeed, atan(y?/x?) = atan( (α + β) / (1 − αβ) ) = θα + θβ = atan(α) + atan(β).
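A quick numeric sanity check of the identity (it holds on atan's principal branch, i.e. when αβ < 1):

```python
import math

a, b = 0.3, 0.4  # alpha, beta with a*b < 1
lhs = math.atan(a) + math.atan(b)
rhs = math.atan((a + b) / (1 - a * b))
# lhs and rhs agree to machine precision
```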