MP/BME 574 Lecture 20: Blind Deconvolution

Learning Objectives:

·  Blind Deconvolution

·  Maximum likelihood

·  Expectation maximization

Assignment:

  1. Matlab Image Processing Toolbox User's Guide, “Deblurring with Blind Deconvolution.”
  2. Lucy, L. B., “An iterative technique for the rectification of observed distributions,” The Astronomical Journal 79:745 (1974).

I.  Applications of Monte Carlo methods for solving the case where a point response function, p(x), is not known or not easily measured.

a.  Generalized Monte Carlo algorithm (a minimal sketch follows the list):

1. Generate a random number.

2. “Guess” within some constraints or boundary on the problem, i.e., mapping f(x) to coordinate space.

3. Cost function: “Is the point inside the relevant coordinate space?”

4. If yes, store the value.

5. Repeat.
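A minimal sketch of this accept/reject loop in Python (NumPy); the bell-shaped f(x), its bound f_max = 1, and the interval [0, 1] are illustrative assumptions, not values from the lecture:

    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):
        # Illustrative "response" curve; any bounded, non-negative f(x) works.
        return np.exp(-4.0 * (x - 0.5) ** 2)

    n_trials = 100_000
    f_max = 1.0                      # upper bound on f over [0, 1]
    hits = 0
    for _ in range(n_trials):
        x = rng.uniform(0.0, 1.0)    # 1. generate a random number
        y = rng.uniform(0.0, f_max)  # 2. "guess" a point within the bounding box
        if y <= f(x):                # 3. cost function: inside the region?
            hits += 1                # 4. if yes, store (count) the value
                                     # 5. repeat
    area = f_max * hits / n_trials   # fraction of the box under the curve
    print(area)                      # ~ integral of f on [0, 1] (about 0.75)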

II.  Deconvolution review

a.  Recall the non-blind deconvolution problem, g(x) = f(x)**h(x) + n(x), and the inverse filter:

\hat{F}(u) = \frac{G(u)}{H(u)}.

Wiener deconvolution in the presence of noise:

\hat{F}(u) = \frac{H^*(u)}{|H(u)|^2 + S_n(u)/S_f(u)}\, G(u),

where S_n(u) and S_f(u) are the noise and object power spectra. This filter can be shown to minimize the mean square error (MSE) between the true object and the estimate:

\varepsilon^2 = E\left[\, \left| f(x) - \hat{f}(x) \right|^2 \right].
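A 1-D sketch of this filter with NumPy FFTs; the boxcar object, Gaussian PSF, and the constant noise-to-signal ratio K (standing in for S_n/S_f) are illustrative assumptions:

    import numpy as np

    n = 256
    x = np.arange(n)
    f = np.zeros(n); f[100:120] = 1.0            # true object: a simple boxcar
    h = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)
    h /= h.sum()                                 # normalized Gaussian PSF
    h = np.roll(h, -n // 2)                      # center the PSF at index 0

    rng = np.random.default_rng(0)
    g = np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)).real
    g += rng.normal(0.0, 0.01, n)                # measured data with noise

    H, G = np.fft.fft(h), np.fft.fft(g)
    K = 1e-3                                     # ~ S_n/S_f, assumed constant
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    f_hat = np.fft.ifft(F_hat).real              # Wiener estimate of f

As K approaches 0 (no noise), the filter reduces to the inverse filter G(u)/H(u).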

b.  Iterative approaches (a Van Cittert-type iteration):

\hat{f}_{k+1}(x) = \hat{f}_k(x) + \lambda \left[ g(x) - h(x)**\hat{f}_k(x) \right], \qquad (2)

with initial guess \hat{f}_0(x) = \lambda g(x).

For k iterations then, in the frequency domain,

\hat{F}_k(u) = \lambda \sum_{j=0}^{k} \left[ 1 - \lambda H(u) \right]^j G(u) = \frac{1 - \left[ 1 - \lambda H(u) \right]^{k+1}}{H(u)}\, G(u).

Therefore as k \to \infty, then \hat{F}_k(u) \to G(u)/H(u) for \left| 1 - \lambda H(u) \right| < 1, i.e., the iteration converges to the inverse filter.
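A sketch of the Eq. (2) update in 1-D with NumPy; the boxcar object and Gaussian PSF are illustrative assumptions, and the data are left noiseless so the iteration can run long:

    import numpy as np

    n = 256
    x = np.arange(n)
    f = np.zeros(n); f[100:120] = 1.0            # true object
    h = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)
    h /= h.sum()
    h = np.roll(h, -n // 2)                      # PSF centered at index 0
    H = np.fft.fft(h)
    g = np.fft.ifft(np.fft.fft(f) * H).real      # noiseless measurement

    lam = 1.0                                    # step size, lambda
    f_hat = lam * g.copy()                       # initial guess: lambda * g
    for k in range(200):
        residual = g - np.fft.ifft(np.fft.fft(f_hat) * H).real
        f_hat = f_hat + lam * residual           # Eq. (2) update

With h(x) normalized and its spectrum H(u) real and positive here, lam = 1 satisfies |1 - lam*H(u)| < 1 at every frequency, though convergence is slow where H(u) is small; with noisy data the iteration would have to be stopped early.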

III.  Blind Deconvolution

a.  Deterministic Approach

  1. In “blind” deconvolution methods, where both f and h must be estimated, the optimal solution can be highly dependent on the initial guess.
  2. The system is under-determined (i.e., one equation, g = f**h, and two unknowns, f and h).
  3. Maximum Likelihood (ML) Blind Deconvolution: one strategy is to combine the robust convergence properties of iterative techniques with a priori assumptions about the form of the data, including statistical models of uncertainty in the measurements.

Therefore, for Gaussian-distributed noise, the log likelihood of the measured data is maximized for a model in which the squared error, \sum_x \left[ g(x) - \hat{h}(x)**\hat{f}(x) \right]^2, is minimized.
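One line makes the connection explicit (assuming independent Gaussian noise with constant variance \sigma^2 at each pixel):

\ln L = \sum_x \ln \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\frac{\left[ g(x) - \hat{h}(x)**\hat{f}(x) \right]^2}{2\sigma^2} \right) = \text{const} - \frac{1}{2\sigma^2} \sum_x \left[ g(x) - \hat{h}(x)**\hat{f}(x) \right]^2,

so maximizing \ln L over \hat{f} and \hat{h} is the same as minimizing the sum of squares.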

  4. General assumptions about the physical boundary conditions and uncertainty in the data:

1.  e.g., non-negativity and compact support.

  5. Statistical models of variation in the measured data:

1.  e.g., Poisson or Gaussian distributed.

2.  This leads to estimates of expected values of the measured data for ML optimization.

  6. Physical parameters constrain the solution, while the ML approach provides a criterion for evaluating convergence.

At each k, the current estimates form a model of the data,

\hat{g}_k(x) = \hat{h}_k(x)**\hat{f}_k(x),

and the squared error,

\mathrm{LSE}_k = \sum_x \left[ g(x) - \hat{g}_k(x) \right]^2,

is minimized and used to optimize the convergence. The conditions for convergence are similar to the iterative procedure when h(x) is known, except that convergence to the inverse filter is no longer guaranteed and is sensitive to noise, the choice of \lambda, and the initial guess, \hat{f}_0(x).

This is a non-linear fitting problem: search for the minimum of the LSE cost function with a standard minimizer, e.g., steepest descent, the Nelder-Mead simplex, or Levenberg-Marquardt (for combined cost functions); a sketch follows the link below.

http://zoot.radiology.wisc.edu/~fains/Lectures/Lecture20.ppt
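A minimal sketch of this LSE search in Python with scipy.optimize.minimize (Nelder-Mead); the smoothed-boxcar object, the Gaussian PSF of unknown width, and all parameter values are illustrative assumptions, not the lecture's:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import erf

    n = 128
    x = np.arange(n, dtype=float)

    def model(params):
        # Object: smoothed boxcar; PSF: Gaussian of width sigma. Both unknown.
        amp, center, width, sigma = params
        f = 0.5 * amp * (erf(x - center + width / 2)
                         - erf(x - center - width / 2))
        h = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)
        h /= h.sum()
        h = np.roll(h, -n // 2)
        return np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)).real

    truth = [1.0, 60.0, 20.0, 3.0]
    g = model(truth)
    g += np.random.default_rng(0).normal(0.0, 0.01, n)   # noisy measurement

    cost = lambda p: np.sum((g - model(p)) ** 2)         # LSE cost function
    fit = minimize(cost, x0=[0.8, 55.0, 15.0, 2.0], method='Nelder-Mead')
    print(fit.x)   # should land near `truth`, but depends on the initial guess

Starting the search far from the truth can land in a different local minimum, illustrating the sensitivity to the initial guess noted above.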

b.  Stochastic Approach

  1. In many cases the uncertainty in the location of the detected photons is governed by well-characterized stochastic processes, for example:

1.  Photon counts in a CCD detector

2.  Astronomy

3.  Emission Tomography (SPECT, PET)

4.  X-ray Computed Tomography

  2. Detection is inherently a Monte Carlo process, in which a pdf over many realizations of detected events is assumed.

  3. Intuitively, the image is the superposition of multiple random processes (i.e., individual photons or molecules of dye) used as probes to measure the response of the system.

  4. As before, the physical system must adhere to mass balance and, for finite counting statistics, non-negativity.
  5. For example, consider:

g(x) = \int f(\xi)\, p(x|\xi)\, d\xi,

where g(x) is our measured image data, f(\xi) is the desired corrected image, and p(x|\xi) is a conditional probability density function kernel that relates the expected value of the data to the measured data, e.g., assuming the photon counts in our x-ray image are approximately Gaussian distributed about their expected value at x. For this example, with a shift-invariant kernel, p(x|\xi) = p(x - \xi), the integral

g(x) = \int f(\xi)\, p(x - \xi)\, d\xi = f(x)**p(x)

becomes our familiar convolution process.
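A quick numeric check of this equivalence in Python (NumPy), with an illustrative two-point object and a Gaussian kernel of sigma = 2 pixels:

    import numpy as np

    n = 64
    xi = np.arange(n)
    f = np.zeros(n); f[20] = 1.0; f[40] = 0.5       # toy "corrected" image f(xi)

    # Shift-invariant Gaussian kernel: p(x|xi) = p(x - xi), sigma = 2 pixels.
    p = lambda d: np.exp(-0.5 * (d / 2.0) ** 2) / (2.0 * np.sqrt(2.0 * np.pi))

    # Direct discretization of g(x) = sum over xi of f(xi) p(x - xi):
    P = p(xi[:, None] - xi[None, :])                # P[x, xi] = p(x - xi)
    g_integral = P @ f

    # The same result from ordinary convolution with the 1-D kernel p(d):
    g_convolve = np.convolve(f, p(np.arange(-12, 13)), mode='same')
    print(np.allclose(g_integral, g_convolve, atol=1e-6))   # True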

  6. Expectation maximization (EM)

1.  Expectation, in the sense that we use a statistical model of the data measurement process to estimate the missing data.

2.  Maximization, in the maximum likelihood sense, where iteration is used within appropriate physical constraints to maximize the likelihood of the probability model for the image.

  7. Consider an “inverse” conditional probability function given by Bayes’ Theorem:

p(\xi|x) = \frac{f(\xi)\, p(x|\xi)}{\int f(\xi')\, p(x|\xi')\, d\xi'} = \frac{f(\xi)\, p(x|\xi)}{g(x)}.

Then it is possible to estimate the value of the inverse probability function iteratively from the current guesses \hat{g}_k(x) and \hat{f}_k(\xi), at the kth iteration, of the measured image and deconvolved image respectively.

Our iterative estimate of the inverse filter is then:

p_k(\xi|x) = \frac{\hat{f}_k(\xi)\, p(x|\xi)}{\hat{g}_k(x)},

where,

\hat{g}_k(x) = \int \hat{f}_k(\xi)\, p(x|\xi)\, d\xi, and

\hat{f}_{k+1}(\xi) = \int g(x)\, p_k(\xi|x)\, dx.

Putting this all together starting with the last result and substituting, then the iterative estimate of the image is:

\hat{f}_{k+1}(\xi) = \hat{f}_k(\xi) \int \frac{g(x)}{\hat{g}_k(x)}\, p(x|\xi)\, dx.
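A sketch of this iteration in 1-D with NumPy (the Richardson-Lucy update); the two point sources, the Gaussian kernel, and the Poisson count levels are illustrative assumptions:

    import numpy as np

    n = 128
    x = np.arange(n)
    f_true = np.zeros(n); f_true[40] = 200.0; f_true[70] = 120.0  # point sources
    h = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)
    h /= h.sum()                                   # unit-area kernel p
    h = np.roll(h, -n // 2)
    H = np.fft.fft(h)

    conv = lambda a: np.fft.ifft(np.fft.fft(a) * H).real
    rng = np.random.default_rng(0)
    g = rng.poisson(np.maximum(conv(f_true), 0.0)).astype(float)  # counted data

    f_hat = np.full(n, g.mean())                   # flat, non-negative start
    for k in range(100):
        g_hat = conv(f_hat)                        # \hat{g}_k
        ratio = g / np.maximum(g_hat, 1e-12)
        # correlation with p implements the integral of (g/\hat{g}_k) p(x|xi) dx
        f_hat = f_hat * np.fft.ifft(np.fft.fft(ratio) * np.conj(H)).real

Every factor in the update is non-negative, so \hat{f}_k stays non-negative, and because p has unit area the total counts are conserved, matching the convergence conditions below.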

  8. Convergence to the ML result is guaranteed:

1.  See the Lucy paper.

2.  Provided p(x|\xi) and \hat{f}_0(\xi) are non-negative and the respective areas of \hat{f}_k and g are conserved. This is because \hat{g}_k will approach g, and \hat{f}_k will then approach f, in these circumstances. Note that the model has remained general.