Jean Forde

CS-534

Paper Review

April 15, 2011

Removing Camera Shake from a Single Photograph - Fergus, Singh, Hertzmann, Roweis, Freeman, SIGGRAPH 2006

The paper can be found at:

Camera motion while taking a picture can ruin what would otherwise have been a great photograph. Fergus et al. devised a method to remove the resulting image blur. Many deblurring techniques already exist; however, they typically require that the blur kernel be known, that special hardware be used, or that multiple images be available. The goal of this paper was to require only minimal user input, no extra hardware, and a single photograph, so that the method would be applicable to the kinds of images people actually take.
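
To make the problem concrete, the paper models the blurred photo as the sharp image convolved with a single blur kernel, plus sensor noise. The short Python sketch below simulates that forward model; the function name and noise level are my own illustrative choices, not the authors' code.

import numpy as np
from scipy.signal import fftconvolve

def simulate_camera_shake(sharp, kernel, noise_sigma=0.01):
    # Forward model assumed by the paper: blurred = sharp (*) kernel + noise.
    kernel = kernel / kernel.sum()                      # a blur kernel should sum to 1
    blurred = fftconvolve(sharp, kernel, mode="same")   # convolve image with the shake kernel
    blurred += np.random.normal(0.0, noise_sigma, blurred.shape)  # add sensor noise
    return np.clip(blurred, 0.0, 1.0)                   # keep intensities in [0, 1]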

There are two main parts to the algorithm: determining the blur kernel, and then using it to remove the blur. I believe the most interesting part is determining the blur kernel. The main challenge is that the blur can be complex, the image can contain sharp edges that distort the estimate, and there may be nothing obvious to measure. To counteract this, Fergus et al. use the statistics of natural images to estimate the kernel. Recent studies of natural image statistics, which look at images of our everyday world, have found that images obey a heavy-tailed gradient distribution: they contain many small gradients and only a few large ones. Using this prior, they estimate the kernel by working over the image from coarse to fine resolution, which helps the algorithm avoid getting stuck in local minima.
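
As a rough illustration (my own sketch, not the authors' implementation), the first function below checks the heavy-tailed gradient statistic on any grayscale image, and the second outlines a coarse-to-fine loop in which a hypothetical refine_kernel callback stands in for the paper's per-scale statistical inference step.

import numpy as np
from scipy.ndimage import zoom

def gradient_histogram(image, bins=50):
    # Natural images show many near-zero gradients and a long tail of large ones.
    dx = np.diff(image, axis=1).ravel()
    dy = np.diff(image, axis=0).ravel()
    return np.histogram(np.abs(np.concatenate([dx, dy])), bins=bins, density=True)

def coarse_to_fine_kernel(blurry_patch, kernel_size, refine_kernel, levels=6):
    # Start with a tiny uniform kernel at the coarsest scale and work upward,
    # which helps avoid the local minima mentioned above.
    kernel = np.full((3, 3), 1.0 / 9.0)
    for level in range(levels - 1, -1, -1):             # coarsest level first
        scale = 0.5 ** level
        patch = zoom(blurry_patch, scale, order=1)      # downsample the selected patch
        size = max(3, int(round(kernel_size * scale)) // 2 * 2 + 1)   # odd kernel size at this scale
        kernel = zoom(kernel, size / kernel.shape[0], order=1)        # upsample previous estimate
        kernel = np.clip(kernel, 0.0, None)
        kernel /= kernel.sum()                          # keep a non-negative, unit-sum kernel
        kernel = refine_kernel(patch, kernel)           # hypothetical per-scale estimation step
    return kernel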

Some of the shortcomings of this method are the need for user input and the assumption that the blur is a single convolution. The user is required to select a small blurred area, preferably one with a lot of edges to help define the kernel. The user must also estimate the size of the kernel and indicate whether the blur is predominantly horizontal or vertical. This paper could be expanded upon by removing the need for user input; eliminating the horizontal or vertical designation may be the easiest place to begin. Removing the patch selection may be harder, because that selection helps ensure the best possible kernel by keeping the estimation from being “distracted” by other elements; one possible automation is sketched below. A RANSAC-style solution might work here, but it doesn't seem likely, since there may be only a small blurred area and the remaining portions of the image may lie outside the focus plane.
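
As a purely hypothetical illustration of how the patch selection might be automated (this is my assumption, not something the paper proposes), one could scan candidate windows and keep the one with the most edge energy:

import numpy as np

def pick_edge_rich_patch(image, patch_size=128, stride=64):
    # Return the top-left corner of the window with the largest gradient energy,
    # i.e., the most "edges" for the kernel estimation to latch onto.
    gy, gx = np.gradient(image.astype(float))
    energy = gx ** 2 + gy ** 2
    h, w = image.shape
    best_corner, best_score = (0, 0), -1.0
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            score = energy[y:y + patch_size, x:x + patch_size].sum()
            if score > best_score:
                best_corner, best_score = (y, x), score
    return best_corner

Such a heuristic would not address the out-of-focus regions mentioned above, so it is only a starting point.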

In addition to removing or reducing user input, handling more than a single convolution, such as blur that varies across the image, would allow the method to be used more broadly.
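
One speculative way to move in that direction (again my own sketch, not anything from the paper) would be to tile the image and estimate a separate kernel per tile using any single-kernel estimator:

def per_tile_kernels(image, tile_size, estimate_kernel):
    # Map each tile's top-left corner to a locally estimated blur kernel,
    # relaxing the assumption of one convolution for the whole image.
    kernels = {}
    h, w = image.shape
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tile = image[y:y + tile_size, x:x + tile_size]
            kernels[(y, x)] = estimate_kernel(tile)     # hypothetical single-kernel estimator
    return kernels

Blending the per-tile deconvolution results back together smoothly would be its own challenge.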