As a proof of concept, I implemented a very simple image upsampling/super-resolution algorithm. This method uses local self-examples: it replaces each pixel with a 2×2 block of pixels interpolated between blocks from the same image. The blocks are identified by matching the local regions around them.

If the window radius is 1, then I first create a point Wij in R^{9} for every pixel (i,j), so that W22 = [I11,I12,I13,I21,I22,I23,I31,I32,I33], where Iij is the color at pixel (i,j).
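To make that step concrete, here's a minimal sketch for a grayscale image `I` (variable names and the edge handling are my own, not necessarily what `imupsample` does), using `im2col` from the Image Processing Toolbox:

```matlab
% Collect the radius-1 (3x3) window around every pixel of a grayscale image I
% as a row of W (one row per pixel, in column-major pixel order).
Ipad = padarray(I, [1 1], 'replicate');   % replicate edges so border pixels get full windows
W = im2col(Ipad, [3 3], 'sliding')';      % (h*w)-by-9
```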

Then for every 2×2 block Dij centered at the bottom-right corner of pixel (i,j) I create a point in R^{9}, so that D22 = [0.25*(I00 + I01 + I10 + I11), 0.25*(I02 + I03 + I12 + I13), … , 0.25*(I44 + I45 + I54 + I55)]. So each point Dij is a downsampled (3×3) version of the 6×6 region around that block, directly comparable to the Wij.
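Here's a rough sketch of constructing the Dij (again my own variable names; borders are simply skipped here, whereas `imupsample` presumably handles them). Alongside each Dij I also keep the original high-resolution 2×2 block, which the interpolation step below needs:

```matlab
% S(i,j) = average of the 2x2 block whose top-left pixel is (i,j)
S = ( I(1:end-1,1:end-1) + I(1:end-1,2:end) ...
    + I(2:end,1:end-1)   + I(2:end,2:end) ) / 4;

% For each interior pixel (i,j), a row of D holds the 3x3 grid of 2x2-block
% averages covering the 6x6 region around the bottom-right corner of (i,j);
% `blocks` keeps the original high-resolution 2x2 block at that corner.
[h, w] = size(I);
n = (h-5) * (w-5);
D = zeros(n, 9);
blocks = zeros(n, 4);
r = 0;
for i = 3:h-3
  for j = 3:w-3
    r = r + 1;
    patch = S(i-2:2:i+2, j-2:2:j+2);   % stride-2 3x3 window of 2x2-block averages
    D(r, :) = patch(:)';
    blk = I(i:i+1, j:j+1);             % the high-res 2x2 block at this corner
    blocks(r, :) = blk(:)';
  end
end
```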

The above treats all coordinates (neighboring pixel values) equally. I found it's best to weight their influence by a Gaussian so that the center pixel/block counts for more.
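One way to realize this weighting, under my reading of it, is to scale each coordinate by the square root of a Gaussian weight, so that squared distances are weighted by the Gaussian itself (σ here is a free parameter, not something taken from `imupsample`):

```matlab
% Gaussian weights over the 3x3 grid of window offsets
sigma = 1;
[dx, dy] = meshgrid(-1:1, -1:1);
g = exp(-(dx.^2 + dy.^2) / (2*sigma^2));
g = g(:)';                                % 1-by-9, column-major like the patches above
% Scaling each coordinate by sqrt(g) weights squared distances by g itself
Wg = W .* sqrt(g);                        % implicit expansion (R2016b+); use bsxfun otherwise
Dg = D .* sqrt(g);
```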

Then for each Wij I identify the K closest Dij using `knnsearch`. I replace the pixel (i,j) with an interpolation of the corresponding K high-resolution 2×2 blocks. This is done using Shepard interpolation with distances measured in the weighted R^{9} space.
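Putting the pieces together, a rough sketch of these last two steps could look like the following; K, the Shepard exponent, and the way the 2× output is assembled are my guesses, not necessarily what `imupsample` does:

```matlab
% For each pixel's window, find the K nearest downsampled blocks in the
% Gaussian-weighted feature space (Statistics and Machine Learning Toolbox)
K = 5;                                      % free parameter
[idx, dist] = knnsearch(Dg, Wg, 'K', K);    % both (h*w)-by-K

% Shepard (inverse-distance) weights; guard against exact matches
wts = 1 ./ max(dist, 1e-8).^2;
wts = wts ./ sum(wts, 2);

% Blend the K candidate high-resolution 2x2 blocks for every pixel
up = zeros(size(Wg, 1), 4);
for k = 1:K
  up = up + wts(:,k) .* blocks(idx(:,k), :);   % implicit expansion (R2016b+)
end

% Write each blended block into the 2x-upsampled image
J = zeros(2*h, 2*w);
for p = 1:size(up, 1)
  [i, j] = ind2sub([h w], p);               % W rows follow column-major pixel order
  J(2*i-1:2*i, 2*j-1:2*j) = reshape(up(p,:), 2, 2);
end
```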

Flipping through the literature, this seems similar in spirit to 2000-era Freeman-type super-resolution algorithms. And the results are *comparable*. But by no means are they as good as the heavy learning-based techniques that are state of the art today.

Find the code for `imupsample` in gptoolbox.

Here’re some examples:

And here’s a bigger example:

There’s still a *bathroom window* effect happening, but with more local samples to choose from, upsampling bigger images seems to work better.