Archive for November, 2010

Print diagonal matrices in matlab

Tuesday, November 30th, 2010

Here’s a very simple MATLAB function that prints the main diagonal of a matrix, one line per entry:


function printDiagonal(file_name,D)
file_id = fopen(file_name,'wt');
% uncomment this to have the first line be:
% length_of_main_diagonal
%fprintf(file_id,'%d\n', min(size(D)));
fprintf(file_id,'%g\n',full(diag(D)));
fclose(file_id);
end


Update: I added the full(…) call to make sure this works even if the matrix D passed in is sparse.
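For readers working outside MATLAB, the output format is trivial to reproduce. Here’s a hypothetical Python sketch (names are mine, not from the post) that writes the main diagonal of a dense matrix, given as a list of rows, one entry per line just like printDiagonal:

```python
def print_diagonal(file_name, D):
    """Write the main diagonal of a dense matrix (list of rows) to a
    file, one entry per line, mirroring the MATLAB printDiagonal."""
    n = min(len(D), len(D[0]))  # length of the main diagonal
    with open(file_name, "w") as f:
        for i in range(n):
            f.write("%g\n" % D[i][i])
```

For example, a 3×2 matrix [[1,2],[3,4],[5,6]] produces the two lines "1" and "4".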

Print sparse matrix in IJV, COO format in matlab

Tuesday, November 30th, 2010

Here’s a very simple way to print sparse matrices. It uses IJV, also known as coordinate (COO), format. For every non-zero entry S(i,j) in the sparse matrix S we print a line:


i j v


where v is the non-zero value in S at row i, column j.

Here’s the MATLAB function:


function printIJV(file_name,S)
[i,j,v] = find(S);
file_id = fopen(file_name,'wt');
% uncomment this to have the first line be:
% num_rows num_cols
%fprintf(file_id,'%d %d\n', size(S));
fprintf(file_id,'%d %d %g\n',[i-1,j-1,v]');
fclose(file_id);
end


Note: I’m converting from MATLAB’s one-indexed convention to zero-indexing.
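Because the indices are already zero-based, a consumer in another language can use them directly. As a sanity check, here’s a hypothetical Python sketch (function name is mine) that parses a file in this i j v format back into a dictionary keyed by (row, column):

```python
def read_ijv(file_name):
    """Parse a zero-indexed 'i j v' COO file into a {(i, j): v} dict.
    One triplet per line, whitespace-separated, as written by printIJV."""
    entries = {}
    with open(file_name) as f:
        for line in f:
            i, j, v = line.split()
            entries[(int(i), int(j))] = float(v)
    return entries
```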

Digraphs stopped working with vim 7.3

Tuesday, November 30th, 2010

I excitedly upgraded to the new vim 7.3 (persistent undo is the big feature). To my dismay, all the fancy digraphs I’ve been using to display math characters stopped working. And when I issued:


:digraphs


I only saw a handful of characters (maybe just ASCII), certainly no interesting Unicode characters.

Luckily the fix was easy. I just needed to enable the “multi-byte” feature. To do this, I issued the following in the vim source directory:


./configure --enable-multibyte
make
sudo make install


Combining CUDA, Qt, and Xcode

Monday, November 29th, 2010

As a proof of concept and a skeleton for some more intense code, I wanted to be sure that I could get a simple example program working that used CUDA, Qt, and Xcode together. A priori there is no reason this shouldn’t work.

The simplest way to do this is to separate CUDA from the main program entirely. I found a discussion of how to do this on the NVIDIA site. I will base my example on the final snippets in that thread.

Building a CUDA library

The first step will be to bake my GPU code into a static library. The idea is then to call that library from my Qt main program.

This involved five files.

HelloWorld.cu:


#include "HelloWorld.cuh"
#include <stdio.h>

// Kernel functions must be inlined (?)
#include "cuPrintf.cu"
__global__ void HelloFromDevice(void)
{
// greet from the device (cuPrintf is defined in cuPrintf.cu)
cuPrintf("Hello, world from the device!\n");
}

int HelloWorld()
{
// greet from the host
printf("Hello, world from the host!\n");

// initialize cuPrintf
cudaPrintfInit();

// launch a kernel with 10 blocks of 64 threads to greet from the device
HelloFromDevice<<<10,64>>>();

// display the device's greeting
cudaPrintfDisplay();

// clean up after cuPrintf
cudaPrintfEnd();

return 0;
}


HelloWorld.cuh:


int HelloWorld();


cuPrintf.cu (by NVIDIA):


/*

This source code and/or documentation ("Licensed Deliverables") are subject
to NVIDIA intellectual property rights under U.S. and international Copyright
laws.

These Licensed Deliverables contained herein are PROPRIETARY and CONFIDENTIAL
to NVIDIA and are being provided under the terms and conditions of a form of
NVIDIA software license agreement by and between NVIDIA and Licensee ("License
Agreement") or electronically accepted by Licensee.  Notwithstanding any terms
or conditions to the contrary in the License Agreement, reproduction or
disclosure of the Licensed Deliverables to any third party without the express
written consent of NVIDIA is prohibited.

NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE LICENSE AGREEMENT,
NVIDIA MAKES NO REPRESENTATION ABOUT THE SUITABILITY OF THESE LICENSED
DELIVERABLES FOR ANY PURPOSE.  IT IS PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED
WARRANTY OF ANY KIND. NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE
LICENSED DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.   NOTWITHSTANDING ANY
TERMS OR CONDITIONS TO THE CONTRARY IN THE LICENSE AGREEMENT, IN NO EVENT SHALL
NVIDIA BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES,
OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,	WHETHER
IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION,  ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THESE LICENSED DELIVERABLES.

U.S. Government End Users. These Licensed Deliverables are a "commercial item"
as that term is defined at  48 C.F.R. 2.101 (OCT 1995), consisting  of
"commercial computer  software"  and "commercial computer software documentation"
as such terms are  used in 48 C.F.R. 12.212 (SEPT 1995) and is provided to the
U.S. Government only as a commercial end item.  Consistent with 48 C.F.R.12.212
and 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all U.S. Government
End Users acquire the Licensed Deliverables with only those rights set forth
herein.

Any use of the Licensed Deliverables in individual and commercial software must
include, in the user documentation and internal comments to the code, the above
Disclaimer and U.S. Government End Users Notice.
*/

/*
*	cuPrintf.cu
*
*	This is a printf command callable from within a kernel. It is set
*	up so that output is sent to a memory buffer, which is emptied from
*	the host side - but only after a cudaThreadSynchronize() on the host.
*
*	Currently, there is a limitation of around 200 characters of output
*	and no more than 10 arguments to a single cuPrintf() call. Issue
*	multiple calls if longer format strings are required.
*
*	It requires minimal setup, and is *NOT* optimised for performance.
*	For example, writes are not coalesced - this is because there is an
*	assumption that people will not want to printf from every single one
*	of thousands of threads, but only from individual threads at a time.
*
*	Using this is simple - it requires one host-side call to initialise
*	everything, and then kernels can call cuPrintf at will. Sample code
*	is the easiest way to demonstrate:
*
#include "cuPrintf.cu"

__global__ void testKernel(int val)
{
cuPrintf("Value is: %d\n", val);
}

int main()
{
cudaPrintfInit();
testKernel<<< 2, 3 >>>(10);
cudaPrintfDisplay(stdout, true);
cudaPrintfEnd();
return 0;
}
*
*	See the header file, "cuPrintf.cuh", for more details about the optional
*	arguments to cudaPrintfInit() and cudaPrintfDisplay();
*/

#ifndef CUPRINTF_CU
#define CUPRINTF_CU

#include "cuPrintf.cuh"
#if __CUDA_ARCH__ > 100      // Atomics only used with > sm_10 architecture
#include <sm_11_atomic_functions.h>
#endif

// This is the smallest amount of memory, per-thread, which is allowed.
// It is also the largest amount of space a single printf() can take up
const static int CUPRINTF_MAX_LEN = 256;

// This structure is used internally to track block/thread output restrictions.
typedef struct __align__(8) {
int threadid;				// CUPRINTF_UNRESTRICTED for unrestricted
int blockid;				// CUPRINTF_UNRESTRICTED for unrestricted
} cuPrintfRestriction;

// The main storage is in a global print buffer, which has a known
// start/end/length. These are atomically updated so it works as a
// circular buffer.
// Since the only control primitive that can be used is atomicAdd(),
// we cannot wrap the pointer as such. The actual address must be
// calculated from printfBufferPtr by mod-ing with printfBufferLength.
// For sm_10 architecture, we must subdivide the buffer per-thread
// since we do not even have an atomic primitive.
__constant__ static char *globalPrintfBuffer = NULL;         // Start of circular buffer (set up by host)
__constant__ static int printfBufferLength = 0;              // Size of circular buffer (set up by host)
__device__ static cuPrintfRestriction restrictRules;         // Output restrictions
__device__ volatile static char *printfBufferPtr = NULL;     // Current atomically-incremented non-wrapped offset

// This is the header preceding all printf entries.
// NOTE: It *must* be size-aligned to the maximum entity size (size_t)
typedef struct __align__(8) {
unsigned short magic;                   // Magic number says we're valid
unsigned short fmtoffset;               // Offset of fmt string into buffer
unsigned short blockid;                 // Block ID of author
unsigned short threadid;                // Thread ID of author
} cuPrintfHeader;

// Special header for sm_10 architecture
#define CUPRINTF_SM10_MAGIC   0xC810        // Not a valid ascii character
typedef struct __align__(16) {
unsigned short magic;                   // sm_10 specific magic number
unsigned short unused;
unsigned int thread_index;              // thread ID for this buffer
unsigned int thread_buf_len;            // per-thread buffer length
unsigned int offset;                    // most recent printf's offset
} cuPrintfHeaderSM10;

// Because we can't write an element which is not aligned to its bit-size,
// we have to align all sizes and variables on maximum-size boundaries.
// That means sizeof(double) in this case, but we'll use (long long) for
// better arch<1.3 support
#define CUPRINTF_ALIGN_SIZE      sizeof(long long)

// All our headers are prefixed with a magic number so we know they're ready
#define CUPRINTF_SM11_MAGIC  (unsigned short)0xC811        // Not a valid ascii character

//
//  getNextPrintfBufPtr
//
//  Grabs a block of space in the general circular buffer, using an
//  atomic function to ensure that it's ours. We handle wrapping
//  around the circular buffer and return a pointer to a place which
//  can be written to.
//
//  Important notes:
//      1. We always grab CUPRINTF_MAX_LEN bytes
//      2. Because of 1, we never worry about wrapping around the end
//      3. Because of 1, printfBufferLength *must* be a factor of CUPRINTF_MAX_LEN
//
//  This returns a pointer to the place where we own.
//
__device__ static char *getNextPrintfBufPtr()
{
// Initialisation check
if(!printfBufferPtr)
return NULL;

// Thread/block restriction check
if((restrictRules.blockid != CUPRINTF_UNRESTRICTED) && (restrictRules.blockid != (blockIdx.x + gridDim.x*blockIdx.y)))
return NULL;
if((restrictRules.threadid != CUPRINTF_UNRESTRICTED) && (restrictRules.threadid != (threadIdx.x + blockDim.x*threadIdx.y + blockDim.x*blockDim.y*threadIdx.z)))
return NULL;

// Conditional section, dependent on architecture
#if __CUDA_ARCH__ == 100
// For sm_10 architectures, we have no atomic add - this means we must split the
// entire available buffer into per-thread blocks. Inefficient, but what can you do.
int thread_count = (gridDim.x * gridDim.y) * (blockDim.x * blockDim.y * blockDim.z);
int thread_index = threadIdx.x + blockDim.x*threadIdx.y + blockDim.x*blockDim.y*threadIdx.z +
(blockIdx.x + gridDim.x*blockIdx.y) * (blockDim.x * blockDim.y * blockDim.z);

// Find our own block of data and go to it. Make sure the per-thread length
// is a precise multiple of CUPRINTF_MAX_LEN, otherwise we risk size and
// alignment issues! We must round down, of course.
unsigned int thread_buf_len = printfBufferLength / thread_count;
thread_buf_len &= ~(CUPRINTF_MAX_LEN-1);

// We *must* have a thread buffer length able to fit at least two printfs (one header, one real)
if(thread_buf_len < (CUPRINTF_MAX_LEN * 2))
return NULL;

// Now address our section of the buffer. The first item is a header.
char *myPrintfBuffer = globalPrintfBuffer + (thread_buf_len * thread_index);
cuPrintfHeaderSM10 hdr = *(cuPrintfHeaderSM10 *)(void *)myPrintfBuffer;
if(hdr.magic != CUPRINTF_SM10_MAGIC)
{
// If our header is not set up, initialise it
hdr.magic = CUPRINTF_SM10_MAGIC;
hdr.thread_index = thread_index;
hdr.thread_buf_len = thread_buf_len;
hdr.offset = 0;         // Note we start at 0! We pre-increment below.
*(cuPrintfHeaderSM10 *)(void *)myPrintfBuffer = hdr;    // Write back the header

// For initial setup purposes, we might need to init thread0's header too
// (so that cudaPrintfDisplay() below will work). This is only run once.
cuPrintfHeaderSM10 *tophdr = (cuPrintfHeaderSM10 *)(void *)globalPrintfBuffer;
tophdr->thread_buf_len = thread_buf_len;
}

// Adjust the offset by the right amount, and wrap it if need be
unsigned int offset = hdr.offset + CUPRINTF_MAX_LEN;
if(offset >= hdr.thread_buf_len)
offset = CUPRINTF_MAX_LEN;

// Write back the new offset for next time and return a pointer to it
((cuPrintfHeaderSM10 *)(void *)myPrintfBuffer)->offset = offset;
return myPrintfBuffer + offset;
#else
// Much easier with an atomic operation!
size_t offset = atomicAdd((unsigned int *)&printfBufferPtr, CUPRINTF_MAX_LEN) - (size_t)globalPrintfBuffer;
offset %= printfBufferLength;
return globalPrintfBuffer + offset;
#endif
}

//
//  writePrintfHeader
//
//  Inserts the header for containing our UID, fmt position and
//  block/thread number. We generate it dynamically to avoid
//	issues arising from requiring pre-initialisation.
//
__device__ static void writePrintfHeader(char *ptr, char *fmtptr)
{
if(ptr)
{
cuPrintfHeader header;
header.magic = CUPRINTF_SM11_MAGIC;
header.fmtoffset = (unsigned short)(fmtptr - ptr);
header.blockid = blockIdx.x + gridDim.x*blockIdx.y;
header.threadid = threadIdx.x + blockDim.x*threadIdx.y + blockDim.x*blockDim.y*threadIdx.z;
*(cuPrintfHeader *)(void *)ptr = header;
}
}

//
//  cuPrintfStrncpy
//
//  This special strncpy outputs an aligned length value, followed by the
//  string. It then zero-pads the rest of the string until a 64-aligned
//  boundary. The length *includes* the padding. A pointer to the byte
//  just after the \0 is returned.
//
//  This function could overflow CUPRINTF_MAX_LEN characters in our buffer.
//  To avoid it, we must count as we output and truncate where necessary.
//
__device__ static char *cuPrintfStrncpy(char *dest, const char *src, int n, char *end)
{
// Initialisation and overflow check
if(!dest || !src || (dest >= end))
return NULL;

// Prepare to write the length specifier. We're guaranteed to have
// at least "CUPRINTF_ALIGN_SIZE" bytes left because we only write out in
// chunks that size, and CUPRINTF_MAX_LEN is aligned with CUPRINTF_ALIGN_SIZE.
int *lenptr = (int *)(void *)dest;
int len = 0;
dest += CUPRINTF_ALIGN_SIZE;

// Now copy the string
while(n--)
{
if(dest >= end)     // Overflow check
break;

len++;
*dest++ = *src;
if(*src++ == '\0')
break;
}

// Now write out the padding bytes, and we have our length.
while((dest < end) && (((long)dest & (CUPRINTF_ALIGN_SIZE-1)) != 0))
{
len++;
*dest++ = 0;
}
*lenptr = len;
return (dest < end) ? dest : NULL;        // Overflow means return NULL
}

//
//  copyArg
//
//  This copies a length specifier and then the argument out to the
//  data buffer. Templates let the compiler figure all this out at
//  compile-time, making life much simpler from the programming
//  point of view. I'm assuming all (const char *) is a string, and
//  everything else is the variable it points at. I'd love to see
//  a better way of doing it, but aside from parsing the format
//  string I can't think of one.
//
//  The length of the data type is inserted at the beginning (so that
//  the display can distinguish between float and double), and the
//  pointer to the end of the entry is returned.
//
__device__ static char *copyArg(char *ptr, const char *arg, char *end)
{
// Initialisation check
if(!ptr || !arg)
return NULL;

// strncpy does all our work. We just terminate.
if((ptr = cuPrintfStrncpy(ptr, arg, CUPRINTF_MAX_LEN, end)) != NULL)
*ptr = 0;

return ptr;
}

template <typename T>
__device__ static char *copyArg(char *ptr, T &arg, char *end)
{
// Initialisation and overflow check. Alignment rules mean that
// we're at least CUPRINTF_ALIGN_SIZE away from "end", so we only need
// to check that one offset.
if(!ptr || ((ptr+CUPRINTF_ALIGN_SIZE) >= end))
return NULL;

// Write the length and argument
*(int *)(void *)ptr = sizeof(arg);
ptr += CUPRINTF_ALIGN_SIZE;
*(T *)(void *)ptr = arg;
ptr += CUPRINTF_ALIGN_SIZE;
*ptr = 0;

return ptr;
}

//
//  cuPrintf
//
//  Templated printf functions to handle multiple arguments.
//  Note we return the total amount of data copied, not the number
//  of characters output. But then again, who ever looks at the
//  return from printf() anyway?
//
//  The format is to grab a block of circular buffer space, the
//  start of which will hold a header and a pointer to the format
//  string. We then write in all the arguments, and finally the
//  format string itself. This is to make it easy to prevent
//  overflow of our buffer (we support up to 10 arguments, each of
//  which can be 12 bytes in length - that means that only the
//  format string (or a %s) can actually overflow; so the overflow
//  check need only be in the strcpy function.
//
//  The header is written at the very last because that's what
//  makes it look like we're done.
//
//  Errors, which are basically lack-of-initialisation, are ignored
//  in the called functions because NULL pointers are passed around
//

// All printf variants basically do the same thing, setting up the
// buffer, writing all arguments, then finalising the header. For
// clarity, we'll pack the code into some big macros.
#define CUPRINTF_PREAMBLE \
char *start, *end, *bufptr, *fmtstart; \
if((start = getNextPrintfBufPtr()) == NULL) return 0; \
end = start + CUPRINTF_MAX_LEN; \
bufptr = start + sizeof(cuPrintfHeader);

// Posting an argument is easy
#define CUPRINTF_ARG(argname) \
bufptr = copyArg(bufptr, argname, end);

// After args are done, record start-of-fmt and write the fmt and header
#define CUPRINTF_POSTAMBLE \
fmtstart = bufptr; \
end = cuPrintfStrncpy(bufptr, fmt, CUPRINTF_MAX_LEN, end); \
writePrintfHeader(start, end ? fmtstart : NULL); \
return end ? (int)(end - start) : 0;

__device__ int cuPrintf(const char *fmt)
{
CUPRINTF_PREAMBLE;

CUPRINTF_POSTAMBLE;
}
template <typename T1> __device__ int cuPrintf(const char *fmt, T1 arg1)
{
CUPRINTF_PREAMBLE;

CUPRINTF_ARG(arg1);

CUPRINTF_POSTAMBLE;
}
template <typename T1, typename T2> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2)
{
CUPRINTF_PREAMBLE;

CUPRINTF_ARG(arg1);
CUPRINTF_ARG(arg2);

CUPRINTF_POSTAMBLE;
}
template <typename T1, typename T2, typename T3> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2, T3 arg3)
{
CUPRINTF_PREAMBLE;

CUPRINTF_ARG(arg1);
CUPRINTF_ARG(arg2);
CUPRINTF_ARG(arg3);

CUPRINTF_POSTAMBLE;
}
template <typename T1, typename T2, typename T3, typename T4> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2, T3 arg3, T4 arg4)
{
CUPRINTF_PREAMBLE;

CUPRINTF_ARG(arg1);
CUPRINTF_ARG(arg2);
CUPRINTF_ARG(arg3);
CUPRINTF_ARG(arg4);

CUPRINTF_POSTAMBLE;
}
template <typename T1, typename T2, typename T3, typename T4, typename T5> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5)
{
CUPRINTF_PREAMBLE;

CUPRINTF_ARG(arg1);
CUPRINTF_ARG(arg2);
CUPRINTF_ARG(arg3);
CUPRINTF_ARG(arg4);
CUPRINTF_ARG(arg5);

CUPRINTF_POSTAMBLE;
}
template <typename T1, typename T2, typename T3, typename T4, typename T5, typename T6> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6)
{
CUPRINTF_PREAMBLE;

CUPRINTF_ARG(arg1);
CUPRINTF_ARG(arg2);
CUPRINTF_ARG(arg3);
CUPRINTF_ARG(arg4);
CUPRINTF_ARG(arg5);
CUPRINTF_ARG(arg6);
CUPRINTF_POSTAMBLE;
}
template <typename T1, typename T2, typename T3, typename T4, typename T5, typename T6, typename T7> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6, T7 arg7)
{
CUPRINTF_PREAMBLE;

CUPRINTF_ARG(arg1);
CUPRINTF_ARG(arg2);
CUPRINTF_ARG(arg3);
CUPRINTF_ARG(arg4);
CUPRINTF_ARG(arg5);
CUPRINTF_ARG(arg6);
CUPRINTF_ARG(arg7);

CUPRINTF_POSTAMBLE;
}
template <typename T1, typename T2, typename T3, typename T4, typename T5, typename T6, typename T7, typename T8> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6, T7 arg7, T8 arg8)
{
CUPRINTF_PREAMBLE;

CUPRINTF_ARG(arg1);
CUPRINTF_ARG(arg2);
CUPRINTF_ARG(arg3);
CUPRINTF_ARG(arg4);
CUPRINTF_ARG(arg5);
CUPRINTF_ARG(arg6);
CUPRINTF_ARG(arg7);
CUPRINTF_ARG(arg8);

CUPRINTF_POSTAMBLE;
}
template <typename T1, typename T2, typename T3, typename T4, typename T5, typename T6, typename T7, typename T8, typename T9> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6, T7 arg7, T8 arg8, T9 arg9)
{
CUPRINTF_PREAMBLE;

CUPRINTF_ARG(arg1);
CUPRINTF_ARG(arg2);
CUPRINTF_ARG(arg3);
CUPRINTF_ARG(arg4);
CUPRINTF_ARG(arg5);
CUPRINTF_ARG(arg6);
CUPRINTF_ARG(arg7);
CUPRINTF_ARG(arg8);
CUPRINTF_ARG(arg9);

CUPRINTF_POSTAMBLE;
}
template <typename T1, typename T2, typename T3, typename T4, typename T5, typename T6, typename T7, typename T8, typename T9, typename T10> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6, T7 arg7, T8 arg8, T9 arg9, T10 arg10)
{
CUPRINTF_PREAMBLE;

CUPRINTF_ARG(arg1);
CUPRINTF_ARG(arg2);
CUPRINTF_ARG(arg3);
CUPRINTF_ARG(arg4);
CUPRINTF_ARG(arg5);
CUPRINTF_ARG(arg6);
CUPRINTF_ARG(arg7);
CUPRINTF_ARG(arg8);
CUPRINTF_ARG(arg9);
CUPRINTF_ARG(arg10);

CUPRINTF_POSTAMBLE;
}
#undef CUPRINTF_PREAMBLE
#undef CUPRINTF_ARG
#undef CUPRINTF_POSTAMBLE

//
//	cuPrintfRestrict
//
//	Called to restrict output to a given thread/block.
//	We store the info in "restrictRules", which is set up at
//	init time by the host. It's not the cleanest way to do this
//	because it means restrictions will last between
//	invocations, but given the output-pointer continuity,
//	I feel this is reasonable.
//
__device__ void cuPrintfRestrict(int threadid, int blockid)
{
int thread_count = blockDim.x * blockDim.y * blockDim.z;
if(((threadid < thread_count) && (threadid >= 0)) || (threadid == CUPRINTF_UNRESTRICTED))
restrictRules.threadid = threadid;

int block_count = gridDim.x * gridDim.y;
if(((blockid < block_count) && (blockid >= 0)) || (blockid == CUPRINTF_UNRESTRICTED))
restrictRules.blockid = blockid;
}

///////////////////////////////////////////////////////////////////////////////
// HOST SIDE

#include <stdio.h>
static FILE *printf_fp;

static char *printfbuf_start=NULL;
static char *printfbuf_device=NULL;
static int printfbuf_len=0;

//
//  outputPrintfData
//
//  Our own internal function, which takes a pointer to a data buffer
//  and passes it through libc's printf for output.
//
//  We receive the format string and a pointer to where the data is
//  held. We then run through and print it out.
//
//  Returns 0 on failure, 1 on success
//
static int outputPrintfData(char *fmt, char *data)
{
// Format string is prefixed by a length that we don't need
fmt += CUPRINTF_ALIGN_SIZE;

// Now run through it, printing everything we can. We must
// run to every % character, extract only that, and use printf
// to format it.
char *p = strchr(fmt, '%');
while(p != NULL)
{
// Print up to the % character
*p = '\0';
fputs(fmt, printf_fp);
*p = '%';           // Put back the %

// Now handle the format specifier
char *format = p++;         // Points to the '%'
p += strcspn(p, "%cdiouxXeEfgGaAnps");
if(*p == '\0')              // If no format specifier, print the whole thing
{
fmt = format;
break;
}

// Cut out the format bit and use printf to print it. It's prefixed
// by its length.
int arglen = *(int *)data;
if(arglen > CUPRINTF_MAX_LEN)
{
fputs("Corrupt printf buffer data - aborting\n", printf_fp);
return 0;
}

data += CUPRINTF_ALIGN_SIZE;

char specifier = *p++;
char c = *p;        // Store for later
*p = '\0';
switch(specifier)
{
// These all take integer arguments
case 'c':
case 'd':
case 'i':
case 'o':
case 'u':
case 'x':
case 'X':
case 'p':
fprintf(printf_fp, format, *((int *)data));
break;

// These all take double arguments
case 'e':
case 'E':
case 'f':
case 'g':
case 'G':
case 'a':
case 'A':
if(arglen == 4)     // Float vs. Double thing
fprintf(printf_fp, format, *((float *)data));
else
fprintf(printf_fp, format, *((double *)data));
break;

// Strings are handled in a special way
case 's':
fprintf(printf_fp, format, (char *)data);
break;

// % is special
case '%':
fprintf(printf_fp, "%%");
break;

// Everything else is just printed out as-is
default:
fprintf(printf_fp, format);
break;
}
data += CUPRINTF_ALIGN_SIZE;         // Move on to next argument
*p = c;                     // Restore what we removed
fmt = p;                    // Adjust fmt string to be past the specifier
p = strchr(fmt, '%');       // and get the next specifier
}

// Print out the last of the string
fputs(fmt, printf_fp);
return 1;
}

//
//  doPrintfDisplay
//
//  This runs through the blocks of CUPRINTF_MAX_LEN-sized data, calling the
//  print function above to display them. We've got this separate from
//  cudaPrintfDisplay() below so we can handle the SM_10 architecture
//  partitioning.
//
static int doPrintfDisplay(int headings, int clear, char *bufstart, char *bufend, char *bufptr, char *endptr)
{
// Grab, piece-by-piece, each output element until we catch
// up with the circular buffer end pointer
int printf_count=0;
char printfbuf_local[CUPRINTF_MAX_LEN+1];
printfbuf_local[CUPRINTF_MAX_LEN] = '\0';

while(bufptr != endptr)
{
// Wrap ourselves at the end-of-buffer
if(bufptr == bufend)
bufptr = bufstart;

// Adjust our start pointer to within the circular buffer and copy a block.
cudaMemcpy(printfbuf_local, bufptr, CUPRINTF_MAX_LEN, cudaMemcpyDeviceToHost);

// If the magic number isn't valid, then this write hasn't gone through
// yet and we'll wait until it does (or we're past the end for non-async printfs).
cuPrintfHeader *hdr = (cuPrintfHeader *)(void *)printfbuf_local;
if((hdr->magic != CUPRINTF_SM11_MAGIC) || (hdr->fmtoffset >= CUPRINTF_MAX_LEN))
{
break;
}

// Extract all the info and get this printf done
if(headings)
fprintf(printf_fp, "[%d, %d]: ", hdr->blockid, hdr->threadid);
if(hdr->fmtoffset == 0)
fprintf(printf_fp, "printf buffer overflow\n");
else if(!outputPrintfData(printfbuf_local+hdr->fmtoffset, printfbuf_local+sizeof(cuPrintfHeader)))
break;
printf_count++;

if(clear)
cudaMemset(bufptr, 0, CUPRINTF_MAX_LEN);

// Now advance our start location, because we're done, and keep copying
bufptr += CUPRINTF_MAX_LEN;
}

return printf_count;
}

//
//  cudaPrintfInit
//
//  Takes a buffer length to allocate, creates the memory on the device and
//  returns a pointer to it for when a kernel is called. It's up to the caller
//  to free it.
//
extern "C" cudaError_t cudaPrintfInit(size_t bufferLen)
{
// Fix up bufferlen to be a multiple of CUPRINTF_MAX_LEN
bufferLen = (bufferLen < CUPRINTF_MAX_LEN) ? CUPRINTF_MAX_LEN : bufferLen;
if((bufferLen % CUPRINTF_MAX_LEN) > 0)
bufferLen += (CUPRINTF_MAX_LEN - (bufferLen % CUPRINTF_MAX_LEN));
printfbuf_len = (int)bufferLen;

// Allocate a print buffer on the device and zero it
if(cudaMalloc((void **)&printfbuf_device, printfbuf_len) != cudaSuccess)
return cudaErrorInitializationError;
cudaMemset(printfbuf_device, 0, printfbuf_len);
printfbuf_start = printfbuf_device;         // Where we start reading from

// No restrictions to begin with
cuPrintfRestriction restrict;
restrict.threadid = restrict.blockid = CUPRINTF_UNRESTRICTED;
cudaMemcpyToSymbol(restrictRules, &restrict, sizeof(restrict));

// Initialise the buffer and the respective lengths/pointers.
cudaMemcpyToSymbol(globalPrintfBuffer, &printfbuf_device, sizeof(char *));
cudaMemcpyToSymbol(printfBufferPtr, &printfbuf_device, sizeof(char *));
cudaMemcpyToSymbol(printfBufferLength, &printfbuf_len, sizeof(printfbuf_len));

return cudaSuccess;
}

//
//  cudaPrintfEnd
//
//  Frees up the memory which we allocated
//
extern "C" void cudaPrintfEnd()
{
if(!printfbuf_start || !printfbuf_device)
return;

cudaFree(printfbuf_device);
printfbuf_start = printfbuf_device = NULL;
}

//
//  cudaPrintfDisplay
//
//  Each call to this function dumps the entire current contents
//	of the printf buffer to the pre-specified FILE pointer. The
//	circular "start" pointer is advanced so that subsequent calls
//	dumps only new stuff.
//
//  In the case of async memory access (via streams), call this
//  repeatedly to keep trying to empty the buffer. If it's a sync
//  access, then the whole buffer should empty in one go.
//
//	Arguments:
//		outputFP     - File descriptor to output to (NULL => stdout)
//
extern "C" cudaError_t cudaPrintfDisplay(void *outputFP, bool showThreadID)
{
printf_fp = (FILE *)((outputFP == NULL) ? stdout : outputFP);

// For now, we force "synchronous" mode which means we're not concurrent
// with kernel execution. This also means we don't need clearOnPrint.
// If you're patching it for async operation, here's where you want it.
bool sync_printfs = true;
bool clearOnPrint = false;

// Initialisation check
if(!printfbuf_start || !printfbuf_device || !printf_fp)
return cudaErrorMissingConfiguration;

// To determine which architecture we're using, we read the
// first short from the buffer - it'll be the magic number
// relating to the version.
unsigned short magic;
cudaMemcpy(&magic, printfbuf_device, sizeof(unsigned short), cudaMemcpyDeviceToHost);

// For SM_10 architecture, we've split our buffer into one-per-thread.
// That means we must do each thread block separately. It'll require
// extra reading. We also, for now, don't support async printfs because
// that requires tracking one start pointer per thread.
if(magic == CUPRINTF_SM10_MAGIC)
{
sync_printfs = true;
clearOnPrint = false;
int blocklen = 0;
char *blockptr = printfbuf_device;
while(blockptr < (printfbuf_device + printfbuf_len))
{
cuPrintfHeaderSM10 hdr;
cudaMemcpy(&hdr, blockptr, sizeof(hdr), cudaMemcpyDeviceToHost);

// We get our block-size-step from the very first header
if(hdr.thread_buf_len != 0)
blocklen = hdr.thread_buf_len;

// No magic number means no printfs from this thread
if(hdr.magic != CUPRINTF_SM10_MAGIC)
{
if(blocklen == 0)
{
fprintf(printf_fp, "No printf headers found at all!\n");
break;                          // No valid headers!
}
blockptr += blocklen;
continue;
}

// If the offset is non-zero then we can print the block contents
if(hdr.offset > 0)
{
// For synchronous printfs, we must print from endptr->bufend, then from start->end
if(sync_printfs)
doPrintfDisplay(showThreadID, clearOnPrint, blockptr+CUPRINTF_MAX_LEN, blockptr+hdr.thread_buf_len, blockptr+hdr.offset+CUPRINTF_MAX_LEN, blockptr+hdr.thread_buf_len);
doPrintfDisplay(showThreadID, clearOnPrint, blockptr+CUPRINTF_MAX_LEN, blockptr+hdr.thread_buf_len, blockptr+CUPRINTF_MAX_LEN, blockptr+hdr.offset+CUPRINTF_MAX_LEN);
}

// Move on to the next block and loop again
blockptr += blocklen;
}
}
// For SM_11 and up, everything is a single buffer and it's simple
else if(magic == CUPRINTF_SM11_MAGIC)
{
// Grab the current "end of circular buffer" pointer.
char *printfbuf_end = NULL;
cudaMemcpyFromSymbol(&printfbuf_end, printfBufferPtr, sizeof(char *));

// Adjust our starting and ending pointers to within the block
char *bufptr = ((printfbuf_start - printfbuf_device) % printfbuf_len) + printfbuf_device;
char *endptr = ((printfbuf_end - printfbuf_device) % printfbuf_len) + printfbuf_device;

// For synchronous (i.e. after-kernel-exit) printf display, we have to handle circular
// buffer wrap carefully because we could miss those past "end".
if(sync_printfs)
doPrintfDisplay(showThreadID, clearOnPrint, printfbuf_device, printfbuf_device+printfbuf_len, endptr, printfbuf_device+printfbuf_len);
doPrintfDisplay(showThreadID, clearOnPrint, printfbuf_device, printfbuf_device+printfbuf_len, bufptr, endptr);

printfbuf_start = printfbuf_end;
}
else
;           // Unknown magic number - nothing we can display

// If we were synchronous, then we must ensure that the memory is cleared on exit
// otherwise another kernel launch with a different grid size could conflict.
if(sync_printfs)
cudaMemset(printfbuf_device, 0, printfbuf_len);

return cudaSuccess;
}

// Cleanup
#undef CUPRINTF_MAX_LEN
#undef CUPRINTF_ALIGN_SIZE
#undef CUPRINTF_SM10_MAGIC
#undef CUPRINTF_SM11_MAGIC

#endif

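Two bits of arithmetic in the listing above are easy to miss: cudaPrintfInit rounds the requested buffer length up to a multiple of CUPRINTF_MAX_LEN (so fixed-size records never straddle the buffer end), and the sm_11 path of getNextPrintfBufPtr turns an ever-growing atomicAdd counter into a wrapped offset with a modulo. Here is a Python sketch of just that arithmetic; the function names are mine and not part of the library:

```python
CUPRINTF_MAX_LEN = 256  # fixed record size, as in the listing

def round_up_buffer(buffer_len):
    """Mimics cudaPrintfInit: clamp to at least one record, then round
    up to a multiple of CUPRINTF_MAX_LEN."""
    if buffer_len < CUPRINTF_MAX_LEN:
        buffer_len = CUPRINTF_MAX_LEN
    if buffer_len % CUPRINTF_MAX_LEN:
        buffer_len += CUPRINTF_MAX_LEN - (buffer_len % CUPRINTF_MAX_LEN)
    return buffer_len

def record_offset(counter, buffer_len):
    """Mimics the sm_11 path of getNextPrintfBufPtr: the n-th atomicAdd
    reservation lands at (n * CUPRINTF_MAX_LEN) mod buffer_len."""
    return (counter * CUPRINTF_MAX_LEN) % buffer_len
```

Because every reservation is exactly CUPRINTF_MAX_LEN bytes and the buffer length is a multiple of it, the modulo always lands on a record boundary, which is why the device code never has to handle a record wrapping around the end.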

cuPrintf.cuh (also by NVIDIA):


/*

This source code and/or documentation ("Licensed Deliverables") are subject
to NVIDIA intellectual property rights under U.S. and international Copyright
laws.

These Licensed Deliverables contained herein are PROPRIETARY and CONFIDENTIAL
to NVIDIA and are being provided under the terms and conditions of a form of
NVIDIA software license agreement by and between NVIDIA and Licensee ("License
Agreement") or electronically accepted by Licensee.  Notwithstanding any terms
or conditions to the contrary in the License Agreement, reproduction or
disclosure of the Licensed Deliverables to any third party without the express
written consent of NVIDIA is prohibited.

NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE LICENSE AGREEMENT,
NVIDIA MAKES NO REPRESENTATION ABOUT THE SUITABILITY OF THESE LICENSED
DELIVERABLES FOR ANY PURPOSE.  IT IS PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED
WARRANTY OF ANY KIND. NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE
LICENSED DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.   NOTWITHSTANDING ANY
TERMS OR CONDITIONS TO THE CONTRARY IN THE LICENSE AGREEMENT, IN NO EVENT SHALL
NVIDIA BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES,
OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,	WHETHER
IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION,  ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THESE LICENSED DELIVERABLES.

U.S. Government End Users. These Licensed Deliverables are a "commercial item"
as that term is defined at  48 C.F.R. 2.101 (OCT 1995), consisting  of
"commercial computer  software"  and "commercial computer software documentation"
as such terms are  used in 48 C.F.R. 12.212 (SEPT 1995) and is provided to the
U.S. Government only as a commercial end item.  Consistent with 48 C.F.R.12.212
and 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all U.S. Government
End Users acquire the Licensed Deliverables with only those rights set forth
herein.

Any use of the Licensed Deliverables in individual and commercial software must
include, in the user documentation and internal comments to the code, the above
Disclaimer and U.S. Government End Users Notice.
*/

#ifndef CUPRINTF_H
#define CUPRINTF_H

/*
*	This is the header file supporting cuPrintf.cu and defining both
*	the host and device-side interfaces. See that file for some more
*	explanation and sample use code. See also below for details of the
*	host-side interfaces.
*
*  Quick sample code:
*
#include "cuPrintf.cu"

__global__ void testKernel(int val)
{
cuPrintf("Value is: %d\n", val);
}

int main()
{
cudaPrintfInit();
testKernel<<< 2, 3 >>>(10);
cudaPrintfDisplay(stdout, true);
cudaPrintfEnd();
return 0;
}
*/

///////////////////////////////////////////////////////////////////////////////
// DEVICE SIDE
// External function definitions for device-side code

// Abuse of templates to simulate varargs
__device__ int cuPrintf(const char *fmt);
template <typename T1> __device__ int cuPrintf(const char *fmt, T1 arg1);
template <typename T1, typename T2> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2);
template <typename T1, typename T2, typename T3> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2, T3 arg3);
template <typename T1, typename T2, typename T3, typename T4> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2, T3 arg3, T4 arg4);
template <typename T1, typename T2, typename T3, typename T4, typename T5> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5);
template <typename T1, typename T2, typename T3, typename T4, typename T5, typename T6> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6);
template <typename T1, typename T2, typename T3, typename T4, typename T5, typename T6, typename T7> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6, T7 arg7);
template <typename T1, typename T2, typename T3, typename T4, typename T5, typename T6, typename T7, typename T8> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6, T7 arg7, T8 arg8);
template <typename T1, typename T2, typename T3, typename T4, typename T5, typename T6, typename T7, typename T8, typename T9> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6, T7 arg7, T8 arg8, T9 arg9);
template <typename T1, typename T2, typename T3, typename T4, typename T5, typename T6, typename T7, typename T8, typename T9, typename T10> __device__ int cuPrintf(const char *fmt, T1 arg1, T2 arg2, T3 arg3, T4 arg4, T5 arg5, T6 arg6, T7 arg7, T8 arg8, T9 arg9, T10 arg10);

//
//	cuPrintfRestrict
//
//	Called to restrict output to a given thread/block. Pass
//	the constant CUPRINTF_UNRESTRICTED to unrestrict output
//	for thread/block IDs. Note you can therefore allow
//	"all printfs from block 3" or "printfs from thread 2
//	on all blocks", or "printfs only from block 1, thread 5".
//
//	Arguments:
//		threadid - Thread ID to allow printfs from
//		blockid  - Block ID to allow printfs from
//
//	NOTE: Restrictions last between invocations of
//	kernels unless cudaPrintfInit() is called again.
//
#define CUPRINTF_UNRESTRICTED	-1
__device__ void cuPrintfRestrict(int threadid, int blockid);

///////////////////////////////////////////////////////////////////////////////
// HOST SIDE
// External function definitions for host-side code

//
//	cudaPrintfInit
//
//	Call this once to initialise the printf system. If the output
//	file or buffer size needs to be changed, call cudaPrintfEnd()
//	before re-calling cudaPrintfInit().
//
//	The default size for the buffer is 1 megabyte. For CUDA
//	architecture 1.1 and above, the buffer is filled linearly and
//	is completely used;	however for architecture 1.0, the buffer
//	is divided into as many segments as there are threads, even
//	if some threads do not call cuPrintf().
//
//	Arguments:
//		bufferLen - Length, in bytes, of total space to reserve
//		            (in device global memory) for output.
//
//	Returns:
//		cudaSuccess if all is well.
//
extern "C" cudaError_t cudaPrintfInit(size_t bufferLen=1048576);   // 1-meg - that's enough for 4096 printfs by all threads put together

//
//	cudaPrintfEnd
//
//	Cleans up all memories allocated by cudaPrintfInit().
//	Call this at exit, or before calling cudaPrintfInit() again.
//
extern "C" void cudaPrintfEnd();

//
//	cudaPrintfDisplay
//
//	Dumps the contents of the output buffer to the specified
//	file pointer. If the output pointer is not specified,
//	the default "stdout" is used.
//
//	Arguments:
//		outputFP     - A file pointer to an output stream.
//		showThreadID - If "true", output strings are prefixed
//		               by "[blockid, threadid] " at output.
//
//	Returns:
//		cudaSuccess if all is well.
//
extern "C" cudaError_t cudaPrintfDisplay(void *outputFP=NULL, bool showThreadID=false);

#endif  // CUPRINTF_H


Makefile


HELLOWORLDLIB := libHelloWorld.a

all : $(HELLOWORLDLIB)

CUDA_INSTALL_PATH ?= /usr/local/cuda
NVCC       := $(CUDA_INSTALL_PATH)/bin/nvcc
CXX	     := g++
ARCHIVER   := ar cqs

TARGETDIR := ../lib
TARGET := $(TARGETDIR)/$(HELLOWORLDLIB)

VERBOSE :=

CUDAINCLUDES += -I$(CUDA_INSTALL_PATH)/include

COMMONFLAGS += -DUNIX

CXXFLAGS := \
	-W -Wall \
	-Wimplicit \
	-Wswitch \
	-Wformat \
	-Wchar-subscripts \
	-Wparentheses \
	-Wmultichar \
	-Wtrigraphs \
	-Wpointer-arith \
	-Wcast-align \
	-Wreturn-type \
	-Wno-unused-function \
	$(SPACE)

NVCCFLAGS := \
-c -Xopencc \
-OPT:unroll_size=200000

# Debug/release configuration
ifeq ($(dbg),1)
  COMMONFLAGS += -g
  NVCCFLAGS   += -D_DEBUG
  CXXFLAGS    += -D_DEBUG
  CFLAGS      += -D_DEBUG
  OBJDIR      := debug
  LIBSUFFIX   := D
else
  COMMONFLAGS += -O2
  OBJDIR      := release
  LIBSUFFIX   :=
  NVCCFLAGS   += --compiler-options -fno-strict-aliasing
  CXXFLAGS    += -fno-strict-aliasing
  CFLAGS      += -fno-strict-aliasing
endif

CUDALIB := -L$(CUDA_INSTALL_PATH)/lib
CUDALIB += -lcudart -lcutil

NVCCFLAGS += $(COMMONFLAGS) $(COMMONINCLUDES) $(CUDAINCLUDES)
CFLAGS    += $(COMMONFLAGS) $(COMMONINCLUDES)
CXXFLAGS  += $(COMMONFLAGS) $(COMMONINCLUDES)

CUDAOBJS := \
	$(OBJDIR)/HelloWorld.cu.o

$(HELLOWORLDLIB): directories $(CUDAOBJS)
	$(ARCHIVER) $(TARGET) $(CUDAOBJS)

$(OBJDIR)/HelloWorld.cu.o : HelloWorld.cu $(CU_DEPS)
	$(VERBOSE)$(NVCC) $(NVCCFLAGS) -I. -o $(OBJDIR)/HelloWorld.cu.o -c HelloWorld.cu

directories:
	$(VERBOSE)mkdir -p $(OBJDIR)
	$(VERBOSE)mkdir -p $(TARGETDIR)

clean:
	$(VERBOSE)rm -r $(OBJDIR)
	$(VERBOSE)rm -r $(TARGET)


Be sure to change any relevant paths in the makefile. I save all of these files in a folder called cuda/src/, then if I issue:


cd cuda/src
make


I build the static library cuda/lib/libHelloWorld.a

Building a qt app

My qt app consists of four files located in the same directory as the cuda subdirectory mentioned above:

HelloWorldQt.pro


INCLUDEPATH += cuda/src
CUDA_LIBDIR = /usr/local/cuda/lib
CUDALIB = -L$$CUDA_LIBDIR -lcudart
SOURCES += main.cpp \
	HelloButton.cpp
HEADERS += HelloButton.h
LIBS += -Lcuda/lib -lHelloWorld $$CUDALIB


HelloButton.cpp


#include <HelloWorld.cuh>
#include <HelloButton.h>

HelloButton::HelloButton(const QString & text, QWidget * parent) :
	QPushButton(text, parent)
{
}

void HelloButton::on_clicked()
{
	HelloWorld();
}


HelloButton.h


#include <QPushButton>

class HelloButton : public QPushButton
{
	Q_OBJECT
	public:
		HelloButton(const QString & text, QWidget * parent = 0);
		virtual ~HelloButton(){};
	public slots:
		void on_clicked();
};


main.cpp


#include <QApplication>
#include <HelloButton.h>

int main(int argc, char * argv[])
{
	QApplication app(argc, argv);
	HelloButton hello_button("Hello, GPU!");
	QObject::connect(
		&hello_button, SIGNAL(clicked()),
		&hello_button, SLOT(on_clicked()));
	hello_button.show();
	return app.exec();
}


Now you can generate an Xcode project using qmake:


qmake-mac -spec macx-xcode HelloWorldQt.pro


Building and running with Xcode

There's some trickiness getting executables to run correctly when linking to cuda libraries.
If you just build and run the project generated by the above you may see errors like:


dyld: Library not loaded: @rpath/libcudart.dylib
  Referenced from: /Users/ajx/Code/Cuda/HelloWorldQt/build/Debug/HelloWorldQt.app/Contents/MacOS/HelloWorldQt
  Reason: image not found


There are a few ways to fix this. I prefer this simple one, but the downside is that the final app must be run from Xcode. Open HelloWorldQt.xcodeproj; in the sidebar open Executables, right-click on HelloWorldQt and select Get Info. Then click the Arguments tab and add a new variable "to be set in the environment":

Name: DYLD_LIBRARY_PATH
Value: /usr/local/cuda/lib

This will let you build and run from Xcode. To run the app NOT via Xcode you will have to do fancy stuff with otool that I'm not bothering with as of yet.

Download project tree source code

Blacked-out text in LaTeX

Sunday, November 28th, 2010

Here's a small command you can add to your LaTeX document's header that will let you "blackout" text like a censored Watergate-era document.


\newlength{\blackoutwidth}
\newcommand{\blackout}[1]
{%necessary comment
\settowidth{\blackoutwidth}{#1}%necessary comment
\rule[-0.3em]{\blackoutwidth}{1.125em}%necessary comment
}


The command is easy to use and automatically adjusts to the word or phrase that should be blacked out, as long as it's not longer than a line. Here's an example of \blackout in use:

The above can be compiled from the following LaTeX document:


\documentclass[letterpaper,11pt]{article}
\newlength{\blackoutwidth}
\newcommand{\blackout}[1]
{%necessary comment
\settowidth{\blackoutwidth}{#1}%necessary comment
\rule[-0.3em]{\blackoutwidth}{1.125em}%necessary comment
}
\begin{document}
\noindent
{\tiny Deep Throat's true identity is \blackout{Mark Felt}. \\
Deep Throat's true identity is Mark Felt.}\\
{\small Deep Throat's true identity is \blackout{Mark Felt}. \\
Deep Throat's true identity is Mark Felt.}\\
Deep Throat's true identity is \blackout{Mark Felt}.
\\
Deep Throat's true identity is Mark Felt. \\
{\bf Deep Throat's true identity is \blackout{Mark Felt}. \\
Deep Throat's true identity is Mark Felt.}\\
\emph{Deep Throat's true identity is \blackout{Mark Felt}. \\
Deep Throat's true identity is Mark Felt.}\\
{\Large Deep Throat's true identity is \blackout{Mark Felt}. \\
Deep Throat's true identity is Mark Felt.}\\
{\huge Deep Throat's true identity is \blackout{Mark Felt}. \\
Deep Throat's true identity is Mark Felt.}\\
\end{document}


Compiling and using ARPACK++ on Mac OS X

Saturday, November 27th, 2010

Hilariously, arpack++ is supposed to make arpack more easily accessible. In the end, once it's finally working, this is probably true. But getting it working is not at all easy. Arpack++ is a bunch of header files that provide an interface to the amazingly complicated fortran code of arpack. So the first step is to compile arpack itself. Next follow these instructions to build a superlu static or dynamic library using xcode. If you want a universal library (32-bit and 64-bit) be sure to enable both i386 and x86_64 when building the final release.

Note: I skipped UMFPACK because I knew that I wouldn't be getting that far into arpack++'s fancy features.

Now follow instructions 9 through 11 of these instructions in order to patch the outdated arpack++ code. I repeat them here:

9. So now download Arpack++ from (http://www.ime.unicamp.br/~chico/arpack++/), and its patch from (http://reuter.mit.edu/index.php/software/arpackpatch/).

10. Extract Arpack++, copy the patch file to the arpack++ folder, and type:


patch -p1 < arpack++1.2.patch.diff


I don't mess with the make files for the examples.
Instead travel to examples/nonsym and build an example with something like:


g++ simple.cc -I../../../include -I../../matrices/nonsym/ -I../ -o simple \
  ~/ARPACK/libarpack.a -framework Accelerate \
  ~/Downloads/SuperLU_4.1/lib/superlu/build/Release/libsuperlu.a -lf2c


You'll have to change the paths in the linked libraries to match yours.

Update: Upon trying to compile 64-bit (x86_64) executables I notice that I get linker warnings like:


ld warning: for symbol _debug_ tentative definition of size 192 from
/Users/ajx/ARPACK/libarpack.a(dneigh.o) is smaller than the real definition
of size 96 from /var/folders/XJ/XJuT0FQMG4a+QhMSnFcAwU+++TI/-Tmp-//ccoufDAR.o


Then when I execute the programs I get runtime errors like:


Arpack error in Eupp.
-> Error in ARPACK Eupd fortran code.
Arpack error in FindEigenvectors.
-> Could not find any eigenvector.
Eigenvalues:


I guess for now I am bound to the 32-bit i386 version…

Compiling and using ARPACK on Mac OS X

Saturday, November 27th, 2010

Computer programming paleontology

Recently we have been prototyping using MATLAB's eigs. It is an extremely easy-to-use eigen-decomposition tool that works on sparse matrices. It allows you to choose how many eigenvalues you want from which end of the spectrum (smallest or largest magnitude). Here's an example of how easy it is:


% Build a sparse 100 by 100 second-order finite-difference Laplacian matrix
A = delsq(numgrid('C',10));
% Get 5 smallest magnitude eigenvectors (columns of V) and eigenvalues
% (diagonal of D).
[V,D] = eigs(A,5,'sm')


After some digging around for a C/C++ equivalent I found that MATLAB is interfacing ARPACK. For some reason ARPACK feels ancient, but it's actually been around only since 1996 and seemed to be maintained at least as recently as 2008. Trying to compile it, though, felt like what it must have felt like for Otto Lidenbrock to find dinosaurs living at the present day (only near the center of the earth).
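As an aside, MATLAB is not the only high-level wrapper around ARPACK: SciPy's sparse eigensolvers call the same fortran code. A hedged sketch of the eigs call above, assuming NumPy/SciPy are installed (the tridiagonal test matrix here is my own stand-in, not MATLAB's delsq output):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh  # SciPy's interface to ARPACK's symmetric drivers

# Stand-in sparse matrix: 1-D second-order finite-difference Laplacian
n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')

# Five smallest-magnitude eigenpairs, analogous to eigs(A,5,'sm');
# sigma=0 requests shift-invert about zero, which targets the smallest eigenvalues
vals, vecs = eigsh(A, k=5, sigma=0)
print(np.sort(vals))
```

Shift-invert (`sigma=0`) is how the 'sm' option is typically realized: ARPACK converges fastest to the largest eigenvalues of an operator, so one solves with (A − σI)⁻¹ and maps the spectrum back.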
Compiling ARPACK

I loosely followed these helpful instructions. This also assumes that you have a universal (32-bit and 64-bit) build of f2c; if not, I have previously posted instructions. Download the ARPACK source and patch as instructed on the ARPACK download site. Use zcat to patch the directories. If Safari auto-uncompresses the .zips you may have to start over with pure zips to be sure that it's being done as instructed here.

Edit the ARmake.inc file, to look like this:


home         = .
BLASdir      = $(home)/BLAS
LAPACKdir    = $(home)/LAPACK
UTILdir      = $(home)/UTIL
SRCdir       = $(home)/SRC

DIRS         = $(UTILdir) $(SRCdir)

ARPACKLIB    = $(home)/libarpack.a
.SUFFIXES:	.f	.o
.DEFAULT:
@$(ECHO) "Unknown target $@, try:  make help"

F2C     = f2c
F2CFLAGS = -ARw8 -Nn802 -Nq300 -Nx400
CC      = cc
CFLAGS  = -arch i386 -arch x86_64 -O

CD      = cd

ECHO    = echo

MAKE    = /usr/bin/make

RM      = rm
RMFLAGS = -f

help:
	@$(ECHO) "usage: make ?"


Edit Makefile, to look like this:


include ARmake.inc

PRECISIONS = single double complex complex16

all: lib

lib: arpacklib

clean: cleanlib

arpacklib:
	@( \
	for f in $(DIRS); \
	do \
		$(CD) $$f; \
		$(ECHO) Making lib in $$f; \
		$(MAKE) $(PRECISIONS); \
		$(CD) ..; \
	done );

cleanlib:
( cd $(BLASdir);$(MAKE) clean )
( cd $(LAPACKdir);$(MAKE) clean )
( cd $(UTILdir);$(MAKE) clean )
( cd $(SRCdir);$(MAKE) clean )

help:
	@$(ECHO) "usage: make ?"


Edit SRC/Makefile to look like this:


include ../ARmake.inc

SOBJ = sgetv0.o slaqrb.o sstqrb.o ssortc.o ssortr.o sstatn.o sstats.o \
       snaitr.o snapps.o snaup2.o snaupd.o snconv.o sneigh.o sngets.o \
       ssaitr.o ssapps.o ssaup2.o ssaupd.o ssconv.o sseigt.o ssgets.o \
       sneupd.o sseupd.o ssesrt.o
DOBJ = dgetv0.o dlaqrb.o dstqrb.o dsortc.o dsortr.o dstatn.o dstats.o \
       dnaitr.o dnapps.o dnaup2.o dnaupd.o dnconv.o dneigh.o dngets.o \
       dsaitr.o dsapps.o dsaup2.o dsaupd.o dsconv.o dseigt.o dsgets.o \
       dneupd.o dseupd.o dsesrt.o
COBJ = cnaitr.o cnapps.o cnaup2.o cnaupd.o cneigh.o cneupd.o cngets.o \
       cgetv0.o csortc.o cstatn.o
ZOBJ = znaitr.o znapps.o znaup2.o znaupd.o zneigh.o zneupd.o zngets.o \
       zgetv0.o zsortc.o zstatn.o

.f.o:
	$(F2C) $(F2CFLAGS) $*.f
	$(CC) $(CFLAGS) -c $*.c
	$(RM) $*.c

all: single double complex complex16

single: $(SOBJ)

double: $(DOBJ)

complex: $(COBJ)

complex16: $(ZOBJ)
#
#  clean	- remove all object files
#
clean:
	rm -f *.o a.out core *.c


Edit UTIL/Makefile to look like this:


include ../ARmake.inc

OBJS = icnteq.o icopy.o iset.o iswap.o ivout.o second.o
SOBJ = svout.o smout.o
DOBJ = dvout.o dmout.o
COBJ = cvout.o cmout.o
ZOBJ = zvout.o zmout.o

.SUFFIXES: .o .F .f

.f.o:
	$(F2C) $(F2CFLAGS) $*.f
	$(CC) $(CFLAGS) -c $*.c
	$(RM) $*.c

#
#  make the library containing both single and double precision
#
all: single double complex complex16

single: $(SOBJ) $(OBJS)

double: $(DOBJ) $(OBJS) $(ZOBJ)

complex: $(SOBJ) $(OBJS) $(COBJ)

complex16: $(DOBJ) $(OBJS) $(ZOBJ)
#
#  clean	- remove all object files
#
clean:
rm -f *.o a.out core *.c


Now you can build all of the .o object files with:


make lib


So far we have not assembled the static library; to do this, issue:


libtool -o libarpack.a SRC/*.o UTIL/*.o


Using ARPACK

Travel to EXAMPLES/SIMPLE

Then you can convert the fortran file to a c file with:


f2c sssimp.f


Then compile with:


gcc -arch i386 -arch x86_64 -framework Accelerate -lf2c ../../libarpack.a sssimp.c -o sssimp


And finally run with:


./sssimp


You should see something like:



_saupd: number of update iterations taken
-----------------------------------------
1 -    1:     1

_saupd: number of "converged" Ritz values
-----------------------------------------
1 -    1:     4

_saupd: final Ritz values
-------------------------
1 -    4:   5.040E+02   5.050E+02   5.177E+02   5.475E+02

_saupd: corresponding error bounds
----------------------------------
1 -    4:   0.000E+00   0.000E+00   0.000E+00   0.000E+00

==========================================
= Symmetric implicit Arnoldi update code =
= Version Number: 2.4                    =
= Version Date:   07/31/96               =
==========================================
= Summary of timing statistics           =
==========================================

Total number update iterations             =     1
Total number of OP*x operations            =    20
Total number of B*x operations             =     0
Total number of reorthogonalization steps  =    20
Total number of iterative refinement steps =     0
Total number of restart steps              =     0
Total time in user OP*x operation          =      .000000
Total time in user B*x operation           =      .000000
Total time in Arnoldi update routine       =      .000000
Total time in saup2 routine                =      .000000
Total time in basic Arnoldi iteration loop =      .000000
Total time in reorthogonalization phase    =      .000000
Total time in (re)start vector generation  =      .000000
Total time in trid eigenvalue subproblem   =      .000000
Total time in getting the shifts           =      .000000
Total time in applying the shifts          =      .000000
Total time in convergence testing          =      .000000

Ritz values and relative residuals
----------------------------------
Col   1       Col   2
Row   1:    8.63063E+02   0.00000E+00
Row   2:    8.86061E+02   0.00000E+00
Row   3:    9.19768E+02   0.00000E+00
Row   4:    9.48391E+02   0.00000E+00

_SSIMP
======

Size of the matrix is  100
The number of Ritz values requested is  4
The number of Arnoldi vectors generated (NCV) is  20
What portion of the spectrum: LM
The number of converged Ritz values is  4
The number of Implicit Arnoldi update iterations taken is  1
The number of OP*x is  20
The convergence criterion is   0.


nude in tank top with cigarette and necklace

Friday, November 19th, 2010

Project point to line segment

Monday, November 8th, 2010

Here's some simple MATLAB code to project a point onto a line segment, i.e. find the closest point on the line segment to a given point in space:


function [q] = project_point_to_line_segment(A,B,p)
  % returns q the closest point to p on the line segment from A to B

  % vector from A to B
  AB = (B-A);
  % squared distance from A to B
  AB_squared = dot(AB,AB);
  if(AB_squared == 0)
    % A and B are the same point
    q = A;
  else
    % vector from A to p
    Ap = (p-A);
    % from http://stackoverflow.com/questions/849211/
    % Consider the line extending the segment, parameterized as A + t (B - A)
    % We find projection of point p onto the line.
    % It falls where t = [(p-A) . (B-A)] / |B-A|^2
    t = dot(Ap,AB)/AB_squared;
    if (t < 0.0)
      % "Before" A on the line, just return A
      q = A;
    elseif (t > 1.0)
      % "After" B on the line, just return B
      q = B;
    else
      % projection lies "in between" A and B on the line
      q = A + t * AB;
    end
  end
end
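The same clamped-parameter recipe translates almost line-for-line to other environments; here's a hedged NumPy port (my own sketch, not part of the original post), which works in any dimension:

```python
import numpy as np

def project_point_to_line_segment(A, B, p):
    """Return the closest point to p on the line segment from A to B."""
    A, B, p = (np.asarray(x, dtype=float) for x in (A, B, p))
    AB = B - A                       # vector from A to B
    AB_squared = AB.dot(AB)          # squared segment length
    if AB_squared == 0.0:
        return A                     # A and B are the same point
    # parameter of the projection onto the infinite line through A and B
    t = (p - A).dot(AB) / AB_squared
    # clamping t to [0, 1] keeps the result on the segment
    return A + np.clip(t, 0.0, 1.0) * AB
```

Clamping t with a single clip call replaces the three-way branch in the MATLAB version but computes the same point.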


Clean up pen-and-paper line drawings in browser

Saturday, November 6th, 2010

I put up a proof of concept web-app of my previously posted algorithm to clean up pen-and-paper line drawings.