## Posts Tagged ‘matrix’

### Repeat each row of matlab matrix

Tuesday, February 16th, 2016

I swear I’ve posted this before, but here’s how to turn:

X = [11 12 13; ...
     21 22 23];


Into:

Y = [
11 12 13; ...
11 12 13; ...
11 12 13; ...
11 12 13; ...
21 22 23; ...
21 22 23; ...
21 22 23; ...
21 22 23];


That is, repeat each row num_reps times (4 in the example above), but maintain the order:

num_reps = 4;
Y = reshape(permute(repmat(X,[1 1 num_reps]),[3 1 2]),[],size(X,2));
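The same row repetition outside MATLAB, as a plain C++ sketch (the function name repeat_rows is mine, not from any library):

```cpp
#include <vector>
#include <cassert>

// Repeat each row of a matrix num_reps times, preserving row order.
std::vector<std::vector<int>> repeat_rows(
  const std::vector<std::vector<int>> & X, int num_reps)
{
  std::vector<std::vector<int>> Y;
  for(const auto & row : X)
    for(int r = 0; r < num_reps; r++)
      Y.push_back(row);
  return Y;
}
```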


### Memory gotcha in eigen

Wednesday, August 26th, 2015

I recently ran into a frustrating memory “gotcha” involving Eigen. The following code compiles

template <typename Derived>
Eigen::PlainObjectBase<Derived> leaky(
const Eigen::PlainObjectBase<Derived> & A)
{
Derived B = A;
return B;
}

int main(int argc, char * argv[])
{
Eigen::MatrixXd A(10000,1);
Eigen::MatrixXd B = leaky(A);
}


but crashes at runtime with a memory error (without any warnings/asserts from Eigen):

main(33789,0x7fff7bb2d300) malloc: *** error for object 0x7f9ed188da00: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug


The problem seems to be the implicit copy to the base class (object slicing) happening in the return. Changing the return type of leaky to Derived:

template <typename Derived>
Derived leaky(
const Eigen::PlainObjectBase<Derived> & A)


fixes the issue. Or it could be fixed by changing the type of B inside leaky to Eigen::PlainObjectBase<Derived>.

  Eigen::PlainObjectBase<Derived> B = A;


Update:

The preferred answer from the Eigen developers is that leaky() should return Derived. This is a little disappointing if I want to have A and the return value be different types and I don’t want to expose all possible types for the return (i.e. I only want to template on top of Eigen matrices).
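Eigen aside, the underlying C++ hazard is object slicing: returning by base-class value copies only the base subobject. A minimal non-Eigen sketch (class names are hypothetical, not from Eigen):

```cpp
#include <string>
#include <cassert>

struct Base { int tag = 1; };
struct Derived : Base { std::string payload = "extra state"; };

// Returning by base-class value copies only the Base subobject:
// Derived's payload is sliced away in the return.
Base slicing_return(const Derived & d)
{
  Base b = d;  // slice: only the Base part of d is copied
  return b;
}
```

In Eigen's case the base class is PlainObjectBase, and the sliced copy ends up mismanaging the heap-allocated coefficient storage, hence the crash.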

### Comma initialize a const Eigen matrix

Wednesday, January 14th, 2015

I’ve forgotten this a couple times. Here’s how to initialize an Eigen matrix’s entries while keeping it const and avoiding auxiliary variables:

const Eigen::MatrixXd A = (Eigen::MatrixXd(2,2) << 1,2,3,4).finished();


### Mysterious gcc compile error near Eigen cast

Saturday, December 14th, 2013

I run into this error often and it always takes me a bit to remember the problem. The line usually looks like:

MatrixXd B = A.cast<float>();


and the error gcc spits out is:

XXX.cpp:719:34: error: expected primary-expression before 'float'
XXX.cpp:719:34: error: expected ',' or ';' before 'float'


The problem happens when the type of A depends on a template parameter. The solution is to add the template keyword in front of cast:

MatrixXd B = A.template cast<float>();


Thanks for nothing gcc error messages.
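The rule at fault is C++'s dependent-name lookup, and it can be reproduced without Eigen. A minimal sketch (class and function names are mine):

```cpp
#include <cassert>

struct Holder {
  template <typename T> T get() const { return T(42); }
};

// Inside a template, h's type is dependent, so `h.get<float>()` would parse
// as comparisons (h.get < float > ...); the `template` keyword tells the
// parser that get is a member template.
template <typename H>
float call_get(const H & h)
{
  return h.template get<float>();
}
```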

### Interleave two vectors in Eigen

Wednesday, December 11th, 2013

Today I needed to interleave two vectors: e.g. given A = [1,2,3,4] and B = [5,6,7,8], I want to make C = [1,5,2,6,3,7,4,8].

In Eigen you can do this using the Map class:

VectorXi A(4);A<<1,2,3,4;
VectorXi B(4);B<<5,6,7,8;
VectorXi C(A.size()+B.size());
Map<VectorXi,0,InnerStride<2> >(C.data(),A.size()) = A;
Map<VectorXi,0,InnerStride<2> >(C.data()+1,B.size()) = B;


Note that this requires A.size() == B.size() or A.size() == B.size()+1, since A fills the even slots and B the odd ones.

Update: Here’s how to interleave rows of Eigen matrices:

MatrixXi A(2,3);A<<1,2,3,4,5,6;
MatrixXi B(2,3);B<<7,8,9,10,11,12;
MatrixXi C(A.rows()+B.rows(),A.cols());
Map<MatrixXi,0,Stride<Dynamic,2> >(C.data(),A.rows(),A.cols(),Stride<Dynamic,2>(2*A.rows(),2)) = A;
Map<MatrixXi,0,Stride<Dynamic,2> >(C.data()+1,B.rows(),B.cols(),Stride<Dynamic,2>(2*B.rows(),2)) = B;


Printing these matrices you’d see:

A = [
1 2 3
4 5 6
];
B = [
7  8  9
10 11 12
];
C = [
1  2  3
7  8  9
4  5  6
10 11 12
];


Note: This almost surely only works if you’re using column major order.

### Setting row of sparse matrix to zero in matlab

Friday, May 24th, 2013

I wanted to set a whole row of a sparse matrix A to zero:


A = sparse(1e8,1e8);
r = 10;
tic;
A(r,:) = 0;
toc


But this was rather slow. About 1.7 seconds.

Turns out all rows and columns are not created equal in matlab. Because of how matlab stores sparse matrices it’s much faster to set a whole column to zero:


c = 10;
tic;
A(:,c) = 0;
toc


So if you can refactor your code you’re better off messing with columns. If you can’t refactor, you might think it would be fast to transpose before and after setting a row to zero:


tic;
A = A';
A(:,r) = 0;
A = A';
toc


But this helps not at all. It seems transposes are just stored as flags, which is usually a good thing. But here it means we gain nothing, since after the transpose, rows are now the cheap route. This also implies that refactoring could be nontrivial.

One might also think you could first find all the non-zero entries and then set them each to zero. But just the rowwise find is expensive:


tic;find(A(10,:));toc


~1 second

In fact, just the row-wise access is expensive:


tic;A(10,:);toc


~1 second

Seems like there’s no happy answer.
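Why the asymmetry? MATLAB stores sparse matrices in compressed sparse column (CSC) format, so a column's entries are contiguous while a row's entries are scattered across every column. A minimal sketch of the layout (struct and field names are mine):

```cpp
#include <vector>
#include <cassert>

// Minimal compressed-sparse-column (CSC) layout, the format MATLAB uses.
struct CSC {
  int rows, cols;
  std::vector<int> col_ptr;   // size cols+1: start of each column in ri/v
  std::vector<int> ri;        // row index of each stored entry
  std::vector<double> v;      // stored values, column by column
};

// Stored entries in column j: two O(1) pointer lookups.
int col_nnz(const CSC & A, int j) { return A.col_ptr[j+1] - A.col_ptr[j]; }

// Stored entries in row r: must scan the row index of every entry.
int row_nnz(const CSC & A, int r)
{
  int count = 0;
  for(int x : A.ri) count += (x == r);
  return count;
}
```

Touching a column is a contiguous slice of v; touching a row means scanning all of it, which is the slowdown measured above.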

### Initializing Eigen matrix to zeros, the wrong way.

Tuesday, April 30th, 2013

I recently found a pretty silly bug in the way I was initializing an Eigen matrix to be all zeros. Say, my matrix was A, then I wrote this (admittedly in haste) to resize to an m by n matrix of zeros using:


A.resize(m,n);
A *= 0;


This worked most of the time. As should be expected if A is initialized to numbers. But unfortunately, the resize doesn’t initialize the values to anything, and some of the random leftovers in memory will be, when interpreted as floats, not-a-numbers (NaNs). Thus when I thought I was zeroing them out I was keeping the NaNs: 0*NaN = NaN. Luckily, this is also why NaN-related bugs are so easy to trace.
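That 0*NaN = NaN behavior is easy to confirm in plain C++:

```cpp
#include <cmath>
#include <limits>
#include <cassert>

// 0 * NaN is still NaN, so multiplying uninitialized memory by zero
// does not reliably produce zeros.
double zero_times_nan()
{
  return 0.0 * std::numeric_limits<double>::quiet_NaN();
}
```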

Of course the correct thing to do is explicitly set each value to zero. Eigen even has a dedicated function:


A.setZero(m,n);


### Fast, sparse kronecker product with identity in MATLAB

Friday, July 6th, 2012

I needed to compute:


K = kron(A,speye(n,n));


but was a little dismayed at the speed. Here’s how I get about a 6x speed up:


% First compute kron(speye(n,n),A) quickly using blkdiag
C = cell(n,1);
[C{:}] = deal(A);
K = blkdiag(C{:});
% Rearrange rows and columns to obtain kron(A,speye(n,n))
I = reshape(reshape(1:(size(A,1)*n),size(A,1),n)',size(A,1)*n,1);
J = reshape(reshape(1:(size(A,2)*n),size(A,2),n)',size(A,2)*n,1);
K = K(I,J);
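The structure being exploited here: kron(A,speye(n,n)) just places each entry A(i,j) on the diagonal of an n-by-n block. A dense C++ sketch of that scatter (illustration only; the speed trick above is about sparse storage):

```cpp
#include <vector>
#include <cassert>

// Dense kron(A, I_n): each entry A[i][j] becomes A[i][j] * I_n,
// i.e. K[i*n+p][j*n+p] = A[i][j] for p = 0..n-1.
std::vector<std::vector<double>> kron_with_identity(
  const std::vector<std::vector<double>> & A, int n)
{
  const int m = (int)A.size();
  const int k = (int)A[0].size();
  std::vector<std::vector<double>> K(m*n, std::vector<double>(k*n, 0.0));
  for(int i = 0; i < m; i++)
    for(int j = 0; j < k; j++)
      for(int p = 0; p < n; p++)
        K[i*n+p][j*n+p] = A[i][j];
  return K;
}
```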


### LU decomposition using two Cholesky decompositions (part two)

Sunday, March 4th, 2012

I previously posted about how you can often compute an LU decomposition of a matrix Q using two Cholesky decompositions. Specifically, when your matrix is of the form:


Q = / A    C \
    \ C^T  * /


In that post I assumed you had a subroutine that computes the Cholesky factorization of a symmetric positive-definite matrix A:


L L^T = A


Where L is a lower-triangular matrix.

Often, routines exist to give you an even better factorization which includes a permutation matrix S:


L L^T = S^T A S


This often results in a more sparse L.

Working through the LU decomposition using this type of Cholesky factorization is a bit tedious and requires some careful bookkeeping.

This time I want to fill in the missing parts of the following system:


/ ? * \ / A    C \ / ? * \ = / ? * \ / ? ? \
\ * ? / \ C^T  * / \ * ? /   \ ? ? / \ * ? /


Since A is symmetric positive definite we use our factorization L L^T = S^T A S, which gives us:


/ S^T * \ / A    C \ / S * \ = / L * \ / L^T ? \
\ *   ? / \ C^T  * / \ * ? /   \ ? ? / \ *   ? /


But this means that now we need S^T C = L M for some unknown block M. Since L is lower triangular, M is easy to compute:


M = L^-1 S^T C
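Rather than forming the inverse of L explicitly, one would compute M by forward substitution against each column of S^T C. A dense plain-C++ sketch for a single right-hand side (illustration only):

```cpp
#include <vector>
#include <cmath>
#include <cassert>

// Forward substitution: solve L x = b for lower-triangular L.
std::vector<double> forward_solve(
  const std::vector<std::vector<double>> & L, std::vector<double> b)
{
  const int n = (int)b.size();
  for(int i = 0; i < n; i++)
  {
    // Subtract contributions of already-solved entries, then divide.
    for(int j = 0; j < i; j++) b[i] -= L[i][j] * b[j];
    b[i] /= L[i][i];
  }
  return b;
}
```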


We can then place M into our LU decomposition:


/ S^T * \ / A    C \ / S * \ = / L    * \ / L^T M \
\ *   ? / \ C^T  * / \ * ? /   \ M^T  ? / \ *   ? /


Now at this point we could use a Cholesky factorization without the permutation matrix to complete the missing parts, but we might as well use our better decomposition again. To do this we must be a bit careful. We want to create the zeros in the lower-right corner of the system. To do this we need a factorization of M^T M. But our new routine gives us:


K K^T = T^T M^T M T


where K is another lower-triangular matrix and T is another permutation matrix.

So we replace M with M T in our system and plug in K. This gives us:


/ S^T * \ / A    C \ / S * \ = / L        * \ / L^T  M T  \
\ *   ? / \ C^T  * / \ * ? /   \ T^T M^T  K / \ *    -K^T /


Finally, compute the missing pieces by evaluating the right-hand side, and we indeed get our original system conjugated with a permutation matrix composed of the permutation matrices from both of the Cholesky factorizations:


/ S^T *   \ / A    C \ / S * \ = / L        * \ / L^T  M T  \
\ *   T^T / \ C^T  * / \ * T /   \ T^T M^T  K / \ *    -K^T /


### 2D rotation matrix plus another 2D rotation matrix is a similarity matrix (scaled 2D rotation)

Wednesday, November 16th, 2011

Suppose we have two rotation matrices:


R1 = / cos(θ1) -sin(θ1) \
     \ sin(θ1)  cos(θ1) /

R2 = / cos(θ2) -sin(θ2) \
     \ sin(θ2)  cos(θ2) /


Then we can show that:


R1 + R2 = s * R3


where s is a scalar and R3 is another 2D rotation matrix.

First we just add the components to get:


R1 + R2 = / cos(θ1)+cos(θ2)   -(sin(θ1)+sin(θ2)) \
          \ sin(θ1)+sin(θ2)     cos(θ1)+cos(θ2)  /


Notice we already have the right pattern:


R1 + R2 = / A -B \
          \ B  A /


we just need to show that we can pull out a common scaling term and angle from A and B.

We can use trigonometric identities to turn the sums of cosines and the sums of sines into products, namely:


cos(θ1)+cos(θ2) = 2*cos((θ1+θ2)/2)*cos((θ1-θ2)/2)
sin(θ1)+sin(θ2) = 2*sin((θ1+θ2)/2)*cos((θ1-θ2)/2)


Now we declare that:


s = 2*cos((θ1-θ2)/2)
θ3 = (θ1+θ2)/2


Then it follows that:


R1 + R2
=
/ s*cos((θ1+θ2)/2)  -s*sin((θ1+θ2)/2) \
\ s*sin((θ1+θ2)/2)   s*cos((θ1+θ2)/2) /
=
s * / cos(θ3) -sin(θ3) \
    \ sin(θ3)  cos(θ3) /
=
s * R3
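A quick numeric check of the identity in plain C++ (angles chosen arbitrarily):

```cpp
#include <cmath>
#include <cassert>

// Check that R1 + R2 == s * R3 entrywise, with
// s = 2*cos((t1-t2)/2) and t3 = (t1+t2)/2.
bool rotation_sum_identity_holds(double t1, double t2)
{
  const double s   = 2.0 * std::cos((t1 - t2) / 2.0);
  const double t3  = (t1 + t2) / 2.0;
  const double eps = 1e-12;
  // A 2D rotation has only two distinct entries, so two checks suffice.
  return
    std::abs((std::cos(t1) + std::cos(t2)) - s * std::cos(t3)) < eps &&
    std::abs((std::sin(t1) + std::sin(t2)) - s * std::sin(t3)) < eps;
}
```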