Archive for November, 2016

Convert two-page color scan of book into monochrome single pdf

Friday, November 25th, 2016

Two days in a row now I’ve had to visit the physical library to retrieve an old paper. Makes me feel very authentic as an academic. Our library has free scanning facilities, but the resulting PDF will have a couple of problems. If I’m scanning a book, then each page of the pdf actually contains 2 pages of the book. Depending on the scanner settings, I might also accidentally have my 2 pages running vertically instead of horizontally. Finally, if I forget to set the color settings on the scanner, then I get a low-contrast color image instead of a high-contrast monochrome scan.

Here’s a preview of the pdf of an article from a book I scanned that has all these problems: scanned low contrast color pdf

If this pdf is in input.pdf then I call the following commands to create output.pdf:

# extract the embedded page images from the scanned pdf, prefixed with .scan
pdfimages input.pdf .scan
# binarize to monochrome, rotate upright, and split each two-page scan into left/right halves
mogrify -format png -monochrome -rotate 90 -crop 50%x100% .scan*
# stitch the resulting single-page pngs into one pdf (+repage drops the crop offsets)
convert +repage .scan*png output.pdf
# remove the intermediate images
rm .scan*

output monochrome pdf

I’m pretty happy with the output. There are some speckles, but the simple -monochrome flag does a fairly good job.

I use Adobe Acrobat Pro to run OCR so that the text is selectable (haven’t found a good command line solution for that, yet).

Note: I think the -rotate 90 is needed because the images are stored rotated by -90 degrees and input.pdf composites them after rotation. This hints that this script won’t generalize to complicated pdfs. But we’re safe here because a scanner will probably apply the same transformation to each page.

Understanding mass matrix lumping in terms of function spaces

Monday, November 14th, 2016

$$\newcommand{\u}{\mathbf{u}}$$ $$\newcommand{\f}{\mathbf{f}}$$

Mass matrix lumping is the process of using a diagonal mass matrix instead of the “full” mass matrix resulting from the Galerkin Finite Element Method. Lumping is often recommended because a diagonal mass matrix is easy to invert (just invert the entries along the diagonal). As the name implies, the diagonalization is often justified by arguing rather hand-wavily that we’re just “lumping all of the mass or integration” at the vertices of the mesh rather than integrating over the elements. This sounds intuitive, but it leaves me wanting a more formal justification.

Here’s an attempt to justify mass lumping in terms of a mixed Finite Element approach to discretizing problems of the form: minimize,

$$E(u) = \frac{1}{2} \int_\Omega (u-f)^2 + \nabla u \cdot \nabla u\ dA$$

If we discretize this irreducible form directly using, for example, piecewise-linear basis functions over a mesh (e.g., \(u \approx \sum_i \phi_i u_i\), and we use \(\u\) to also mean the vector of coefficients \(u_i\)), then we get:

$$E(\u) = \frac{1}{2} (\u-\f)^TM(\u-\f) + \frac{1}{2} \u^TK\u$$

where we call \(M\) the mass matrix so that \(M_{ij} = \int_\Omega \phi_i \phi_j\ dA\). Then when we “mass lump” we are replacing \(M\) with a diagonal matrix \(D\) such that \(D_{ii} = \sum_j M_{ij}\) or \(D_{ii} = \) the area associated with vertex \(i\). This “associated area” might be the Voronoi area.
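
For a concrete sense of what gets lumped, consider a single triangle of area \(A\) with piecewise-linear basis functions: a standard computation gives the element mass matrix and its row-sum lumped counterpart as

$$M_e = \frac{A}{12}\begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}, \qquad D_e = \frac{A}{3}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$

so each corner “receives” one third of the triangle’s area (here \(M_e\) and \(D_e\) are just my labels for the per-element matrices).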

In physical dynamics, if \(\u=\u^t\) are displacements of some dynamic system and \(\f=\u^{t-\Delta t}\) are the mesh displacements of the previous time step then this gives us a physical formula for kinetic and potential energy:

$$E(\u^t) = \frac{1}{2 \Delta t^2} (\u^t-\u^{t-\Delta t})^T M (\u^t-\u^{t-\Delta t}) + \frac{1}{2} {\u^t}^TK\u^t$$

where the stiffness matrix \(K\) now might measure, for example, linear elasticity (I’m mostly going to ignore what’s going on with \(K\)).

When we write this physical energy in the continuous form, we’ll introduce a variable \(\dot{u}\) to keep track of velocities:

$$E(u,\dot{u}) = \frac{1}{2} \int_\Omega \dot{u}^2 + (\nabla u+\nabla u^T) \cdot (\nabla u + \nabla u^T)\ dA$$

subject to the constraint

$$\dot{u} = \partial u / \partial t.$$

Now we have the kinetic energy in terms of the velocity \(\dot{u}\) and the potential energy in terms of the displacements \(u\). This is a very natural mixed form. If we discretize both \(u\) and \(\dot{u}\) using piecewise-linear basis functions over the same mesh, then we’ll have done a lot of re-shuffling only to arrive at the identical discretization above. In particular, we’ll just get the same non-diagonal mass matrix \(M\).

Instead, we’ll discretize only the displacements \(u\) using piecewise-linear basis functions, and for the velocities \(\dot{u}\) we’ll use piecewise-constant basis functions defined on the dual mesh. A standard choice for the dual mesh is the Voronoi dual mesh, where each zero-dimensional vertex becomes a zero-codimensional “cell” whose area/volume is the Voronoi area/volume of that vertex as a seed amongst all other vertices as fellow seeds. One can construct this dual mesh by connecting the circumcenters of all of the triangles in the primary mesh. An alternative dual mesh, which we’ll call the barycentric dual, instead connects the barycenters of the triangles in the primary mesh. All this to say that we approximate the velocities as a sum over piecewise-constant basis functions located at primary mesh vertices (aka dual mesh cells): \(\dot{u}^t \approx \sum_i \psi_i \dot{u}^t_i\). The basis functions \(\psi_i\) and \(\phi_i\) are in correspondence so that \(\dot{\u}^t\) collects elements \(\dot{u}^t_i\) in the same number and order as \(\u\).
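
Concretely, if we write \(\Omega_i\) for the dual cell of vertex \(i\) (a notation introduced just for this aside), then \(\psi_i\) is simply its indicator function:

$$\psi_i(x) = \begin{cases} 1 & x \in \Omega_i \\ 0 & \text{otherwise.} \end{cases}$$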

Now we can discretize the energy as:

$$E(\dot{\u}^t,\u^t) = \frac{1}{2} {\dot{\u}^t}^T D \dot{\u}^t + \frac{1}{2} {\u^t}^T K \u^t,$$

and voilà, the diagonal mass matrix \(D\) appears, where

$$D_{ij} = \int_\Omega \psi_i \psi_j\ dA = \begin{cases} \text{area}_i & i=j \\ 0 & i\neq j \end{cases}$$

As a side note, if we use the barycentric dual mesh then \(\text{area}_i = \sum_j M_{ij}\); if we use the Voronoi dual mesh then \(\text{area}_i\) is the Voronoi area, which may become negative for non-Delaunay meshes (and we might want to use the “fix” described in “Discrete Differential-Geometry Operators for Triangulated 2-Manifolds” [Mark Meyer et al. 2002]).
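
The barycentric identity is easy to verify: the piecewise-linear hat functions form a partition of unity, \(\sum_j \phi_j = 1\), and each hat function integrates to a third of the area of its incident triangles, so (writing \(T \ni i\) for the triangles containing vertex \(i\))

$$\sum_j M_{ij} = \int_\Omega \phi_i \sum_j \phi_j\ dA = \int_\Omega \phi_i\ dA = \frac{1}{3} \sum_{T \ni i} \text{area}(T),$$

which is exactly the barycentric dual cell area of vertex \(i\).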

Our work is not quite done since we haven’t coupled our discretizations of velocity and displacements together. We haven’t discretized the constraint: \(\dot{u} = \partial u / \partial t\). We could just quietly say that they “need to agree” at mesh vertices so that \(\dot{u}^t_i = (u^t_i - u_i^{t - \Delta t})/\Delta t\). If we do, then we can immediately substitute out all of the velocity variables and arrive at a “flattened” form of the energy only in terms of displacements \(\u\):

$$E(\u^t) = \frac{1}{2 \Delta t^2} (\u^t-\u^{t-\Delta t})^T D (\u^t-\u^{t-\Delta t}) + \frac{1}{2} {\u^t}^T K \u^t.$$

Deeper down the rabbit hole: This is a bit unsettling because it seems like we just pushed the “lumping magic” deeper into the constraints between velocities and displacements. Indeed, if we try to enforce this constraint by the Lagrange multiplier method:

$$L(u,\dot{u},\lambda) = E(u,\dot{u}) + \frac{1}{2} \int_\Omega \lambda \left(\dot{u} - \frac{\partial u}{\partial t}\right) \ dA$$

(which admittedly reads a bit like nonsense because, until now, the only difference between \(\dot{u}\) and \(\partial u / \partial t\) was notation; think of \(\dot{u}\) as a first-class variable: a function in its own right).

then we have to pick a discretization for the Lagrange multiplier function \(\lambda\). Surprisingly, if we use either piecewise-linear basis functions on the primary mesh or piecewise-constant basis functions on the dual mesh, then we’ll get a non-diagonal matrix coupling \(\dot{\u}\) and \(\u\), which means that we can’t simply substitute out \(\dot{\u}\).

To tie things up, I found one (potentially dubious) discretization of \(\lambda\) that makes things work: Dirac basis functions

$$\delta_i(x) = \delta(x-x_i),$$

where \(\delta(x)\) is Dirac’s delta function, so that \(\delta(x) = \begin{cases} \infty & x=0 \\ 0 & x\neq 0 \end{cases}\) and \(\int \delta(x) \ dA = 1\).

Assuming I have permission from the Discretization Gods, I let \(\lambda \approx \sum_i \delta_i \lambda_i\), where (by abuse of notation/LaTeX’s inability to typeset bold Greek letters) \(\lambda\) is a vector of coefficients \(\lambda_i\) in corresponding number and order with \(\u\) and \(\dot{\u}\).

Since (crossing fingers) \(\int_\Omega \delta_i \psi_j \ dA = \begin{cases} 1 & i=j \\ 0 & i\neq j\end{cases}\) and \(\int_\Omega \delta_i \phi_j \ dA = \begin{cases} 1 & i=j \\ 0 & i\neq j\end{cases}\), our discretized Lagrangian is:

$$L(\u^t,\dot{\u}^t,\lambda) = \frac{1}{2} {\dot{\u}^t}^T D \dot{\u}^t + \frac{1}{2} {\u^t}^T K \u^t + \frac{1}{2} \lambda^T I \left(\dot{\u}^t - \frac{\u^t - \u^{t-\Delta t}}{\Delta t} \right),$$

where \(I\) truly madly deeply is the identity matrix. This implies that a direct substitution of \(\dot{\u}^t\) with \((\u^t - \u^{t-\Delta t})/\Delta t\) is valid.
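
To see why, setting the gradient of \(L\) with respect to \(\lambda\) to zero recovers the per-vertex constraint exactly:

$$\frac{\partial L}{\partial \lambda} = \frac{1}{2} I \left(\dot{\u}^t - \frac{\u^t - \u^{t-\Delta t}}{\Delta t}\right) = 0 \quad \Rightarrow \quad \dot{\u}^t = \frac{\u^t - \u^{t-\Delta t}}{\Delta t}.$$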

Update: Some papers making similar claims:

“A note on mass lumping and related processes” [Hinton et al. 1976]
“Analysis Of Structural Vibration And Response” [Clough 1971] (cited in [Hinton et al. 1976] but I haven’t found a copy yet)

Implementing QSlim mesh simplification using libigl

Friday, November 4th, 2016

One of my research projects is looking at mesh simplification and the perennial comparison is always to Garland and Heckbert’s “Surface simplification using quadric error metrics” (aka QSlim). Also relevant to us is the lesser-known follow-up, “Simplifying Surfaces with Color and Texture using Quadric Error Metrics” (aka QSlim in nD).
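
As a quick refresher (and to match the comments in the code below): each vertex carries a quadric stored as a triple \((A,\mathbf{b},c)\), an edge’s quadric is the sum of its endpoints’ quadrics, the optimal placement minimizes that quadric, and the cost is the quadric evaluated at the optimal placement. In the row-vector convention used below,

$$Q(\mathbf{p}) = \mathbf{p} A \mathbf{p}^T + 2\,\mathbf{p} \cdot \mathbf{b} + c, \qquad \mathbf{p}^* A = -\mathbf{b}, \qquad \text{cost} = Q(\mathbf{p}^*).$$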

Currently, libigl has built-in edge-collapse based mesh simplification with an exposed function handle, cost_and_placement, which determines the cost of collapsing an edge and the placement of the resulting vertex. This function is used for two purposes: to initialize the heap of edges to collapse and to update edges in that heap after a collapse.

Until now, I had only implemented a very simple metric: the cost is the length of the edge and collapsed vertices are placed at edge midpoints.

So a bit out of research necessity and a bit just for a challenge, I timed how long it would take for me to implement QSlim in nD.

cactus decimated using qslim implemented in libigl

So, 1 hour and 20 minutes later, here is my QSlim in nD demo based on the decimation entry in the libigl tutorial:

#include <igl/collapse_edge.h>
#include <igl/edge_flaps.h>
#include <igl/read_triangle_mesh.h>
#include <igl/viewer/Viewer.h>
#include <Eigen/Core>
#include <iostream>
#include <set>

int main(int argc, char * argv[])
{
  using namespace std;
  using namespace Eigen;
  using namespace igl;
  cout<<"Usage: ./703_Decimation_bin [filename.(off|obj|ply)]"<<endl;
  cout<<"  [space]  toggle animation."<<endl;
  cout<<"  'r'  reset."<<endl;
  // Load a closed manifold mesh
  if(argc<2)
  {
    cout<<"Error: input mesh file required."<<endl;
    return 1;
  }
  string filename(argv[1]);
  MatrixXd V,OV;
  MatrixXi F,OF;
  read_triangle_mesh(filename,OV,OF);

  igl::viewer::Viewer viewer;

  // Prepare array-based edge data structures and priority queue
  VectorXi EMAP;
  MatrixXi E,EF,EI;
  typedef std::set<std::pair<double,int> > PriorityQueue;
  PriorityQueue Q;
  std::vector<PriorityQueue::iterator > Qit;
  // If an edge were collapsed, we'd collapse it to these points:
  MatrixXd C;
  int num_collapsed;


  // Function for computing cost of collapsing edge (length) and placement
  // (midpoint)
  const auto & shortest_edge_and_midpoint = [](
    const int e,
    const Eigen::MatrixXd & V,
    const Eigen::MatrixXi & /*F*/,
    const Eigen::MatrixXi & E,
    const Eigen::VectorXi & /*EMAP*/,
    const Eigen::MatrixXi & /*EF*/,
    const Eigen::MatrixXi & /*EI*/,
    double & cost,
    RowVectorXd & p)
  {
    cost = (V.row(E(e,0))-V.row(E(e,1))).norm();
    p = 0.5*(V.row(E(e,0))+V.row(E(e,1)));
  };

  // Quadrics per vertex
  typedef std::tuple<Eigen::MatrixXd,Eigen::RowVectorXd,double> Quadric;
  std::vector<Quadric> quadrics;

  // c **is** allowed to be a or b.
  const auto & plus = [](const Quadric & a, const Quadric & b, Quadric & c)
  {
    std::get<0>(c) = (std::get<0>(a) + std::get<0>(b)).eval();
    std::get<1>(c) = (std::get<1>(a) + std::get<1>(b)).eval();
    std::get<2>(c) = (std::get<2>(a) + std::get<2>(b));
  };
  // State variables keeping track of whether we've just collapsed edge (v1,v2)
  int v1 = -1;
  int v2 = -1;
  const auto & qslim_optimal = [&quadrics,&plus,&v1,&v2](
    const int e,
    const Eigen::MatrixXd & V,
    const Eigen::MatrixXi & /*F*/,
    const Eigen::MatrixXi & E,
    const Eigen::VectorXi & /*EMAP*/,
    const Eigen::MatrixXi & /*EF*/,
    const Eigen::MatrixXi & /*EI*/,
    double & cost,
    RowVectorXd & p)
  {
    // Then we just collapsed (v1,v2): store their summed quadric at the surviving vertex
    if(v1>=0 && v2>=0)
    {
      plus(quadrics[v1],quadrics[v2],quadrics[v1<v2?v1:v2]);
      v1 = -1;
      v2 = -1;
    }
    // Combined quadric
    Quadric quadric_p;
    plus(quadrics[E(e,0)],quadrics[E(e,1)],quadric_p);
    // Quadric: p'Ap + 2b'p + c
    // optimal point: Ap = -b, or rather because we have row vectors: pA=-b
    const auto & A = std::get<0>(quadric_p);
    const auto & b = std::get<1>(quadric_p);
    const auto & c = std::get<2>(quadric_p);
    p = -b*A.inverse();
    cost = p.dot(p*A) + 2*p.dot(b) + c;
  };

  // Function to reset original mesh and data structures
  const auto & reset = [&]()
  {
    F = OF;
    V = OV;
    edge_flaps(F,E,EMAP,EF,EI);
    Qit.resize(E.rows());
    Q.clear();

    const int dim = V.cols();
    // Quadrics per face
    std::vector<Quadric> face_quadrics(F.rows());
    // Initialize each vertex quadric to zeros
    quadrics.resize(
      V.rows(),{Eigen::MatrixXd::Zero(dim,dim),Eigen::RowVectorXd::Zero(dim),0});
    Eigen::MatrixXd I = Eigen::MatrixXd::Identity(dim,dim);
    // Rather than initializing with zeros, initialize with a small amount of energy
    // pulling toward the original vertex position
    const double w = 1e-10;
    for(int v = 0;v<V.rows();v++)
    {
      std::get<0>(quadrics[v]) = w*I;
      Eigen::RowVectorXd Vv = V.row(v);
      std::get<1>(quadrics[v]) = w*-Vv;
      std::get<2>(quadrics[v]) = w*Vv.dot(Vv);
    }
    // Generic nD qslim from "Simplifying Surfaces with Color and Texture
    // using Quadric Error Metrics" (follow-up to the original QSlim)
    for(int f = 0;f<F.rows();f++)
    {
      Eigen::RowVectorXd p = V.row(F(f,0));
      Eigen::RowVectorXd q = V.row(F(f,1));
      Eigen::RowVectorXd r = V.row(F(f,2));
      Eigen::RowVectorXd pq = q-p;
      Eigen::RowVectorXd pr = r-p;
      // sqrt of Gram determinant = area of parallelogram (twice the triangle area)
      double area = sqrt(pq.squaredNorm()*pr.squaredNorm() - pow(pr.dot(pq),2));
      Eigen::RowVectorXd e1 = pq.normalized();
      Eigen::RowVectorXd e2 = (pr-e1.dot(pr)*e1).normalized();
      // e1 and e2 should be perpendicular
      assert(std::abs(e1.dot(e2)) < 1e-10);
      // Weight face's quadric (v'*A*v + 2*b'*v + c) by area
      const Eigen::MatrixXd A = I-e1.transpose()*e1-e2.transpose()*e2;
      const Eigen::RowVectorXd b = p.dot(e1)*e1 + p.dot(e2)*e2 - p;
      const double c = (p.dot(p) - pow(p.dot(e1),2) - pow(p.dot(e2),2));
      face_quadrics[f] = { area*A, area*b, area*c };
      // Throw at each corner
      for(int c = 0;c<3;c++)
      {
        plus(
          quadrics[F(f,c)],
          face_quadrics[f],
          quadrics[F(f,c)]);
      }
    }

    C.resize(E.rows(),V.cols());
    VectorXd costs(E.rows());
    v1 = -1;
    v2 = -1;
    for(int e = 0;e<E.rows();e++)
    {
      double cost = e;
      RowVectorXd p(1,3);
      qslim_optimal(e,V,F,E,EMAP,EF,EI,cost,p);
      C.row(e) = p;
      Qit[e] = Q.insert(std::pair<double,int>(cost,e)).first;
    }
    num_collapsed = 0;
    viewer.data.clear();
    viewer.data.set_mesh(V,F);
    viewer.data.set_face_based(true);
  };

  const auto &pre_draw = [&](igl::viewer::Viewer & viewer)->bool
  {
    // If animating then collapse 1% of edges
    if(viewer.core.is_animating && !Q.empty())
    {
      bool something_collapsed = false;
      // collapse edge
      const int max_iter = std::ceil(0.01*Q.size());
      for(int j = 0;j<max_iter;j++)
      {
        std::pair<double,int> p = *(Q.begin());
        bool shouldnt_collapse = false;
        if((! Q.empty()) && 
            (p.first != std::numeric_limits<double>::infinity()))
        {
          v1 = E(p.second,0);
          v2 = E(p.second,1);
        }else
        {
          v1 = -1;
          v2 = -1;
          shouldnt_collapse = true;
        }
        if(!collapse_edge(qslim_optimal,V,F,E,EMAP,EF,EI,Q,Qit,C))
        {
          break;
        }else
        {
          // we just collapsed. 
          assert(!shouldnt_collapse);
        }
        something_collapsed = true;
        num_collapsed++;
      }

      if(something_collapsed)
      {
        viewer.data.clear();
        viewer.data.set_mesh(V,F);
        viewer.data.set_face_based(true);
      }
    }
    return false;
  };

  const auto &key_down =
    [&](igl::viewer::Viewer &viewer,unsigned char key,int mod)->bool
  {
    switch(key)
    {
      case ' ':
        viewer.core.is_animating ^= 1;
        break;
      case 'R':
      case 'r':
        reset();
        break;
      default:
        return false;
    }
    return true;
  };

  reset();
  viewer.core.is_animating = true;
  viewer.callback_key_down = key_down;
  viewer.callback_pre_draw = pre_draw;
  return viewer.launch();
}

I’m pretty happy that this didn’t take me all day, but admittedly it should have been faster. One confusing part was that the current API in libigl for edge collapsing doesn’t have an explicit function handle that’s called when an edge is successfully collapsed. Detecting this is necessary to propagate the quadric to the collapsed vertex (but isn’t necessary for simple costs like edge length). I faked this for now using v1 and v2 as state variables. Probably, I’ll change the libigl API to accommodate this. The other time sink was worrying about degenerate quadrics (when the system matrix is singular). Rather than try to detect this categorically, I opted to sprinkle in a little energy pulling vertices toward their (ancestors’) original positions. This seems to work fairly well.
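
For reference, the tiny regularizer in reset() above amounts to initializing each vertex quadric with the quadric of the energy \(w\,\|\mathbf{p} - \mathbf{p}_v\|^2\), where \(\mathbf{p}_v\) is the vertex’s original position and \(w = 10^{-10}\) matches the w in the code:

$$A = w I, \qquad \mathbf{b} = -w\,\mathbf{p}_v, \qquad c = w\,\mathbf{p}_v \cdot \mathbf{p}_v,$$

which keeps the summed quadric’s system matrix invertible even when all of a vertex’s incident faces are coplanar.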