Chester F. Carlson Center for Imaging Science

Ph.D. Comprehensive Examination, June 2007

Part I

Problems and Solutions

  1. In magnetic resonance imaging (MRI), the imaging system puts a stimulus into the imaged object, detects the signal coming back, and processes it to create an image. Consider a hypothetical MRI system that generates a stimulus pulse by apodizing (modulating) a cosine wave (whose frequency, nu_0, is equal to 100 MHz) with a Gaussian-shaped function of the form exp[-a*(t - b)^2], where a = 7x10^14 s^-2 and b = 2x10^-7 s.

a) Describe the shape of the resultant stimulus pulse and its frequency content (a sketch is acceptable if appropriately labeled and annotated). Where have you seen this stimulus pulse before?

b) This stimulus pulse is sent into the imaged object using a resonant circuit with a quality factor, Q, of 100 (note: Q = nu_0/B, where B is the resonant circuit bandwidth). Describe the shape and frequency content of the stimulus pulse in the resonant circuit.

Solution:

a) The apodizing function is a Gaussian centered at t = 2x10^-7 s. A cosine apodized by this Gaussian is a Gaussian-shaped pulse of 100 MHz energy centered at t = 2x10^-7 s.

[Where have you seen this stimulus pulse before?] This is the new CIS logo.

b) To determine the frequency content, we take the Fourier transform. The stimulus is the product of the cosine and the Gaussian, so by the convolution theorem its Fourier transform is the convolution of FT(cosine) with FT(Gaussian). The FT of the cosine is a pair of delta functions at ±100 MHz, and the FT of a Gaussian is a Gaussian. Since we are interested in the frequency content, we only need the magnitude of this convolution.

A resonant device at nu_0 = 100 MHz with a Q = 100 has a Gaussian frequency response with a bandwidth (full width at half height) of

B = nu_0/Q = 100x10^6 s^-1 / 100 = 1x10^6 s^-1

Clearly this is much less than the frequency content of the pulse and therefore we can assume that the frequency content of the stimulus is determined by the bandwidth of the resonant device. The shape of the frequency domain spectrum of the stimulus in the resonant device is Gaussian, centered on ±100 MHz, and its bandwidth is 1 MHz.
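A quick numerical check of this comparison (a sketch, assuming the stimulus has the reconstructed form cos(2*pi*nu0*t)*exp[-a*(t - b)^2] with the values given in the problem):

```python
import numpy as np

# Stimulus (assumed form): s(t) = cos(2*pi*nu0*t) * exp(-a*(t - b)**2)
nu0 = 100e6   # carrier frequency, Hz
a = 7e14      # Gaussian width parameter, s^-2
b = 2e-7      # Gaussian center, s

dt = 1e-9                         # 1 GHz sampling; Nyquist 500 MHz >> 100 MHz
t = np.arange(4096) * dt
s = np.cos(2*np.pi*nu0*t) * np.exp(-a*(t - b)**2)

S = np.abs(np.fft.rfft(s))
f = np.fft.rfftfreq(len(t), dt)

# FWHM of the magnitude spectrum around the +100 MHz peak
above = f[S >= S.max()/2]
fwhm = above.max() - above.min()

# Analytic check: the FT of exp(-a*t^2) has FWHM = 2*sqrt(a*ln 2)/pi
fwhm_analytic = 2*np.sqrt(a*np.log(2))/np.pi
print(f"spectrum peaks at {f[S.argmax()]/1e6:.1f} MHz")
print(f"pulse bandwidth ~ {fwhm/1e6:.1f} MHz (analytic {fwhm_analytic/1e6:.1f} MHz)")
print(f"resonant-circuit bandwidth B = nu0/Q = {nu0/100/1e6:.1f} MHz")
```

The ~14 MHz pulse bandwidth is indeed much wider than the 1 MHz circuit bandwidth, so the circuit sets the frequency content of the stimulus, as stated above.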

  2. You are tasked to design a simple spectrograph for an f/10 telescope with a 10-meter diameter primary mirror. The spectrograph is to be used to disperse IR radiation at a wavelength of 2 microns. The spectrograph consists of a rectangular slit at the prime focus, a collimator (lens) with focal length of 1 m, a diffraction (reflection) grating, a one-element ideal camera lens, and a detector (see diagram below). The detector array has 10 micron pixels. The grating equation is given by

m * lambda * T = sin(a) + sin(b),

where m=order number

lambda=wavelength

T=groove density

a=angle between input beam and grating normal

b=angle between output beam and grating normal

and the diffraction-limited resolution for a grating is R = lambda/delta_lambda = N*m, where N is the number of illuminated grooves and m = order number.

a) If the grating has a groove density of 23 lines/mm, its facet normals are inclined 63 degrees to the grating normal, and the input and output light beams are also inclined by 63 degrees to the grating normal, in which order will the grating best be utilized (such that the diffracted beam will be centered on the detector)?

b) What combination of collimator diameter, camera lens focal length, and slit width will ensure both that the spectral resolution is diffraction-limited and that the dispersed (output) light is Nyquist sampled?

Solution:

a) For a = b = 63 degrees, m = [sin(63) + sin(63)]/(lambda*T) = 1.782/0.046 = 38.7, so the grating is best used in order m = 39.

b) For m = 39: the number of illuminated grooves is

N = T*D_coll/cos(a),

where D_coll is the diameter of the collimator, so the diffraction-limited resolution is R_DL = N*m = m*T*D_coll/cos(a). The collimated beam matches the telescope beam, so

D_coll = F_coll/(f/#) = 1000 mm/10 = 100 mm,

giving

R_DL = 39 * 23 mm^-1 * 100 mm / cos(63) = 200,000 = lambda/delta_lambda.

We can calculate the camera lens focal length by demanding that the image of delta_lambda be matched to two pixels for Nyquist sampling. The angular spread of delta_lambda leaving the grating is theta = m*T*delta_lambda/cos(b), so

F_camera*theta = 2*x, i.e. F_camera = 2*x/theta = 2*x*cos(b)/(m*T*delta_lambda) = 1 m,

where x is the pixel size.

So, the magnification is F_camera/F_coll = 1 m/1 m = 1, and the slit width to match two pixels is 20 microns. Note that the imaging diffraction limit for the telescope is

theta_DL = 1.22*lambda/D = 1.22 * 2x10^-6 m / 10 m = 2.4x10^-7 rad ≈ 50 milliarcseconds,

which corresponds to 24 microns at the slit (telescope focal length 100 m). This is a good match to the grating diffraction-limited slit size.
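The arithmetic above can be checked with a short script (values are taken from the problem statement; variable names are illustrative):

```python
import numpy as np

# All lengths in mm; values from the problem statement
lam = 2e-3        # 2 microns
T = 23.0          # grooves per mm
ang = np.radians(63.0)
F_coll = 1000.0   # collimator focal length
fnum = 10.0       # telescope focal ratio
pixel = 10e-3     # 10 micron pixels

# a) order from the grating equation with a = b = 63 deg
m = round(2*np.sin(ang) / (lam*T))       # nearest integer order

# b) collimator diameter, resolution, camera focal length
D_coll = F_coll / fnum                   # 100 mm
R = m * T * D_coll / np.cos(ang)         # N*m, diffraction-limited resolution
dlam = lam / R
dtheta = m * T * dlam / np.cos(ang)      # angular size of one resolution element
F_cam = 2 * pixel / dtheta               # map dlam onto two pixels
print(f"m = {m}, R = {R:.3g}, F_camera = {F_cam:.0f} mm")
```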

  3. The performance of an optical system in the spatial frequency domain can be characterized by its Modulation Transfer Function (MTF). The Contrast Sensitivity Function (CSF) is used to describe the response of the human visual system in the spatial frequency domain. Describe the MTF and the CSF, clearly explaining the relationship and difference between the two metrics, and how each relates to the concept of a “point-spread function.” Include in your discussion the reason for the difference between the low-frequency response of optical systems and the visual system.

[Answer should describe optical MTF as ratio of modulation out/in as f(spatial frequency), and CSF as inverse of detection threshold for sine (or Gabor) targets at different frequencies. Discussion should include the Fourier relationship between MTF/PSF and CSF/retinal receptive field with inhibitory surround. All passive optical systems’ MTF = unity at f = 0, falling to zero at f = ∞, while CSF is band-pass with low-freq drop due to lateral inhibition in retina. CSF sketch should show low-freq drop, peak at ~3-6 deg^-1, and limit of ~50-60 deg^-1. The CSF is the product of the MTF of the visual optics and the neural (frequency) response.]

  4. What will be the output if a Gaussian of amplitude 100 and standard deviation 4 that is centered on x0 = 8 is input into a shift-invariant, linear system having an impulse response that is a Gaussian of amplitude 2 and standard deviation 3, but is centered on the origin? Write down the expression and sketch the input and the output on the same graph.

Solution: see next page
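The referenced page is not reproduced here, but the key fact is standard: the convolution of two Gaussians is a Gaussian whose variance is the sum of the input variances. A numerical sketch, reading "amplitude" as peak height (an assumption):

```python
import numpy as np

# Input and impulse response ("amplitude" is read as peak height)
A1, s1, x0 = 100.0, 4.0, 8.0   # input Gaussian
A2, s2 = 2.0, 3.0              # impulse response Gaussian, centered on the origin

x = np.linspace(-50, 50, 20001)   # symmetric grid so convolve(..., "same") aligns
dx = x[1] - x[0]
fin = A1 * np.exp(-(x - x0)**2 / (2*s1**2))
h = A2 * np.exp(-x**2 / (2*s2**2))

g_num = np.convolve(fin, h, mode="same") * dx   # numerical convolution integral

# Analytic result: another Gaussian, centered at x0, with sigma = sqrt(s1^2 + s2^2)
s3 = np.hypot(s1, s2)                           # = 5
peak = A1 * A2 * np.sqrt(2*np.pi) * s1 * s2 / s3
g_ana = peak * np.exp(-(x - x0)**2 / (2*s3**2))

print(f"output: Gaussian centered at {x0}, sigma = {s3}, peak = {peak:.1f}")
print(f"max |numeric - analytic| = {np.abs(g_num - g_ana).max():.2e}")
```

So the output is a Gaussian still centered on x = 8, broadened to sigma = 5, with peak ~1203.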

  5. A photographic transparency 128 mm square is to be digitized by raster scanning with a 1 mm square hole through which light passes to a photodetector. Special transparencies are prepared for test purposes in the form of gratings consisting of alternate opaque and transparent bars of equal width. As the square hole scans across the grating in the direction normal to the bars, the digitized output rises and falls between maxima Ymax and minima Ymin. Define “contrast” in decibels to mean 10log10(Ymax/Ymin).
a) When the opaque bars are 10 mm wide, what is the contrast?

b) Define “resolution” as the number of bars per millimeter such that the contrast is 3 dB. What is the resolution?

c) What output would you expect if the bars were 0.5 mm wide?

d) What would the contrast be for bars of width 0.33 mm?

Solution:

The scanning aperture is a 2D RECT(x,y). The light transmitted by a transparency t(x,y) is the 2D convolution t(x,y)**RECT(x,y), whose Fourier transform, via the filter theorem, is SINC(u,v)T(u,v).

a) The test grating has a spatial period of 20 mm, so the fundamental spatial frequency is 0.05 cycles/mm. Since SINC(0.05) = 0.996, a sinusoidal test pattern would give Ymax/Ymin = (1 + 0.996)/(1 - 0.996) ≈ 500. For the bar pattern, the 1 mm hole fits entirely within a 10 mm opaque bar, so Ymin = 0 and Ymax/Ymin is infinite.

b) Since SINC(0.75) = 0.3, we might expect 0.75 bar/mm to be a rough estimate of the resolution. It turns out that Ymax is 2/3 of the full response (1 mm hole centered on a 0.67 mm transparent strip) and Ymin is 1/3 (hole centered on a 0.67 mm opaque bar), so the contrast is 10log10(2) ≈ 3 dB.

c) A steady output equal to 1/2 of the full response (the 1 mm hole always covers exactly one full period).

d) The contrast is 3 dB, because Ymax is 0.67 (hole centered on a 0.33 mm opaque bar, also covering the full transparent strip on either side) and Ymin = 0.33 (hole centered on a 0.33 mm transparent strip).
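The scan can also be simulated directly. Since the bars are uniform along their length, the 2-D square hole reduces to a 1-D box average along the scan direction; the grid resolution and the exact bar widths 2/3 and 1/3 mm (which the solution rounds to 0.67 and 0.33) are choices made here:

```python
import numpy as np

def contrast_db(bar_width, aperture=1.0, n=120000):
    """Scan a 1 mm box average across alternating transparent/opaque bars
    and return the contrast 10*log10(Ymax/Ymin); inf when Ymin = 0."""
    period = 2.0 * bar_width
    dx = period / n
    one = (np.arange(n) * dx < bar_width).astype(float)  # transparent half-period
    reps = int(np.ceil((aperture + 2*period) / period)) + 2
    t = np.tile(one, reps)                               # tiled bar target
    w = int(round(aperture / dx))
    c = np.concatenate(([0.0], np.cumsum(t)))
    y = (c[w:] - c[:-w]) / w                             # aperture-averaged transmission
    ymax, ymin = y.max(), y.min()
    return float("inf") if ymin == 0 else 10*np.log10(ymax / ymin)

for bw in (10.0, 2/3, 0.5, 1/3):
    print(f"bar width {bw:.3g} mm: contrast = {contrast_db(bw):.2f} dB")
```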
  6. Maximum likelihood classification of multiband data is based on descriptive statistics of Gaussian-distributed random variables. Beginning with Bayes’ theorem, using probabilities defined in N-space, derive the discriminant functions used in maximum likelihood classification and describe the physical interpretation of each term that remains in the final discriminant function.

Solution:

Given Bayes’ theorem, one can define the conditional probability that a pixel belongs to class w_i given a pixel located at position x in N-space:

p(w_i | x) = p(x | w_i) p(w_i) / p(x),

where p(x | w_i) is the conditional probability that a pixel exists at position x in N-space given that we are interested in pixels of material type w_i, p(w_i) is the a priori probability of finding a pixel of type w_i in the scene, and p(x) is the N-dimensional probability density function of the image.

Assuming Gaussian-distributed data, we can represent the conditional probability in the numerator as

p(x | w_i) = (2*pi)^(-N/2) |S_i|^(-1/2) exp[ -(1/2) (x - m_i)^T S_i^-1 (x - m_i) ],

where m_i and S_i are the mean vector and covariance matrix for representative pixels chosen to represent class w_i.

To decide what class of material a particular pixel belongs to, one can consider the decision rule

x belongs to w_i if p(w_i | x) > p(w_j | x) for all j != i,

but since we don’t know p(w_i | x) directly, one can rewrite the inequality in terms of the Bayes theorem representation above as

p(x | w_i) p(w_i) / p(x) > p(x | w_j) p(w_j) / p(x),

which, since p(x) is common to both sides, reduces to

p(x | w_i) p(w_i) > p(x | w_j) p(w_j)

and allows the decision rule to be rewritten as

x belongs to w_i if p(x | w_i) p(w_i) > p(x | w_j) p(w_j) for all j != i.

This decision rule allows us to utilize probabilities that can either be determined from class-specific descriptive statistics or estimated from visual inspection of the image.

Converting these probability products to discriminant functions by taking the natural logarithm, our decision rule becomes

x belongs to w_i if g_i(x) > g_j(x) for all j != i,

where g_i(x) is represented as

g_i(x) = ln[ p(x | w_i) p(w_i) ] = ln p(w_i) - (N/2) ln(2*pi) - (1/2) ln|S_i| - (1/2) (x - m_i)^T S_i^-1 (x - m_i),

and, eliminating the constant term -(N/2) ln(2*pi) common to all classes in the N-space, one is left with

g_i(x) = ln p(w_i) - (1/2) ln|S_i| - (1/2) (x - m_i)^T S_i^-1 (x - m_i).

At this point, the remaining three terms can be interpreted as follows:

ln p(w_i) represents the probability that a pixel is of type w_i given no other knowledge of the actual multiband digital count data in the image (your best guess knowing what the rough scene composition is);

-(1/2) ln|S_i| represents a penalty function for a class which has a very “large” covariance among its band constituents;

-(1/2) (x - m_i)^T S_i^-1 (x - m_i) represents a penalty function for being far away from the mean (in a covariance-normalized sense) of the class of pixels in question.
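A minimal sketch of this decision rule in code, with made-up two-band class statistics (all the numbers below are purely illustrative):

```python
import numpy as np

# Hypothetical two-class example in N = 2 bands (numbers are made up)
classes = {
    "grass": {"m": np.array([60.0, 110.0]),
              "S": np.array([[30.0, 10.0], [10.0, 40.0]]),
              "prior": 0.7},
    "soil":  {"m": np.array([90.0, 80.0]),
              "S": np.array([[80.0, 20.0], [20.0, 60.0]]),
              "prior": 0.3},
}

def discriminant(x, m, S, prior):
    """g_i(x) = ln p(w_i) - 0.5*ln|S_i| - 0.5*(x-m_i)^T S_i^-1 (x-m_i)"""
    d = x - m
    maha = d @ np.linalg.inv(S) @ d          # Mahalanobis distance term
    return np.log(prior) - 0.5*np.log(np.linalg.det(S)) - 0.5*maha

x = np.array([70.0, 100.0])                  # pixel to classify
scores = {name: discriminant(x, c["m"], c["S"], c["prior"])
          for name, c in classes.items()}
winner = max(scores, key=scores.get)
print(scores, "->", winner)
```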

  7. I want to use the digital camera setup shown below to measure the reflectance of samples located as shown below. What reflectance would you calculate for my sample if I observe a signal of 5.7x10^4 electrons? What error would I assign to that measurement due to the detector if the noise for similar signal levels is 900 electrons?

Solution: see next page
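The referenced page and the setup diagram are not reproduced here, so the absolute reflectance cannot be computed; but the detector-noise contribution propagates simply if the calculated reflectance is linear in the measured signal (an assumption, with the scale factor set by the calibration path in the missing diagram):

```python
# Detector-noise contribution alone, assuming reflectance R = k*S for some
# calibration constant k (so the relative errors of R and S are equal)
S = 5.7e4       # measured signal, electrons
sigma = 900.0   # detector noise at similar signal levels, electrons

rel_err = sigma / S
print(f"relative error from the detector = {rel_err:.2%}")
```

So whatever reflectance the calibration yields, the detector noise contributes a relative uncertainty of about 1.6% of that value.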

  8. A photon detector has a dark noise whose variance sigma_D^2 = 0.1 counts^2/second is proportional to the exposure time, and a readout noise whose variance sigma_R^2 = 10 counts^2 is at a fixed level. The detector area is A = 0.3 cm^2 and the detector conversion efficiency is eta = 0.75 counts/photon. Find the DQE when the detector is placed in a flux of 3.3 photons/cm^2/s and is exposed for T = 10 minutes.
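No worked solution is given in the source for this problem; the following is a sketch under the standard definition DQE = SNR_out^2 / SNR_in^2, treating the detected counts as Poisson with variance equal to the mean signal (the usual assumption):

```python
# Standard DQE calculation: DQE = SNR_out^2 / SNR_in^2
phi = 3.3             # photons / cm^2 / s
A = 0.3               # cm^2
T = 600.0             # s (10 minutes)
eta = 0.75            # counts / photon
var_dark_rate = 0.1   # counts^2 per second of exposure
var_read = 10.0       # counts^2, fixed

N = phi * A * T                              # input photons (Poisson): 594
S = eta * N                                  # mean signal, counts
var_out = S + var_dark_rate * T + var_read   # shot + dark + read variance
snr_in_sq = N                                # Poisson input: SNR^2 = N
snr_out_sq = S**2 / var_out
dqe = snr_out_sq / snr_in_sq
print(f"N = {N:.0f} photons, signal = {S:.1f} counts, DQE = {dqe:.3f}")
```

Under these assumptions the DQE comes out to about 0.65, somewhat below eta because the dark and read noise degrade the output SNR.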

Chester F. Carlson Center for Imaging Science

Ph.D. Comprehensive Examination, June 2007

Part II

Problems and Solutions

  1. The Center for Imaging Science is considering putting a small telescope, equipped with a CCD camera (at the telescope focal plane), on the roof of the Carlson building. The telescope would be a 16" (~0.4 m) diameter f/10 reflector. It is estimated that (1) telescope vibrations, due to such things as students walking around on the roof and elevator motion, will have mean amplitudes of about 20 microns at frequencies ~10 Hz; and (2) the best-case “seeing” (atmospheric turbulence) will smear the angular diameter of images of point sources (i.e., stars) to a full width at half maximum of about 2 arcseconds (1/1800 of a degree) in typical exposure times of a few seconds. The available CCD cameras have CCDs with the following specifications: 512x512, 20 micron pixels; 1600x1200, 7.4 micron pixels; 765x510, 9 micron pixels. The goal is to perform wide-field imaging while fully sampling the system point spread function. Which is the best choice of CCD, and what field of view (in solid angle, i.e. square degrees or steradians) will it yield? Be sure to state any important assumptions you have made.

Solution:

The student should recognize, and implicitly show or explicitly state, that (s)he needs to calculate the telescope system PSF under three different limiting assumptions -- (a) diffraction limited; (b) vibration limited; (c) atmosphere (“seeing”) limited – then work out which CCD offers the best compromise between spatial sampling at the Nyquist frequency and maximal angular field of view.

a) The diffraction limit of this telescope at 550 nm is

theta = 1.22 (lambda/D) = 1.22 * 1.38e-6 = 1.67e-6 radians (~0.33 arcsec)

Its focal length is 4 m (10*0.4m). Hence the PSF FWHM at the focal plane (assumption: objects to be imaged are at infinity) is

FWHM = 1.67e-6 * 4m = 6.7 microns

b) the vibration limit is given as 20 microns, assuming exposure times > 0.1 sec

c) 2 arcsec ~ 1e-5 radians, or 40 microns at the focal plane

Hence the image will be seeing-limited, and the best choice is the 20-micron-pixel CCD, which would yield a FOV of about 0.02 sq degrees (~6e-6 steradians).
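The three PSF limits and the resulting fields of view can be tabulated (a sketch using the solution's round numbers, e.g. 2 arcsec ~ 1e-5 rad at the focal plane; the 550 nm wavelength is the assumption stated above):

```python
import numpy as np

D = 0.4                  # telescope aperture, m
F = 4.0                  # focal length, m (f/10)
lam = 550e-9             # assumed visible wavelength, m

# Candidate PSF limits at the focal plane (FWHM, microns)
diffraction = 1.22 * lam / D * F * 1e6   # ~6.7 um
vibration = 20.0                         # given, um
seeing = 40.0                            # 2 arcsec ~ 1e-5 rad, times F = 4 m

psf = max(diffraction, vibration, seeing)   # seeing-limited: 40 um
nyquist_pixel = psf / 2.0                   # 20 um

ccds = [(512, 512, 20.0), (1600, 1200, 7.4), (765, 510, 9.0)]
results = {}
for nx, ny, p in ccds:
    # field of view in square degrees: (detector size / focal length) per axis
    fov = (nx * p * 1e-6 / F) * (ny * p * 1e-6 / F) * (180.0 / np.pi) ** 2
    results[p] = (fov, p <= nyquist_pixel)
    print(f"{nx}x{ny} @ {p} um: FOV = {fov:.4f} sq deg, fully sampled: {p <= nyquist_pixel}")
```

Note that the 7.4-micron chip happens to give a similar field, but it oversamples the ~40-micron PSF by roughly a factor of five, so the 20-micron chip is the better match to Nyquist sampling.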

  2. Two searchlights are pointing toward a two-slit diffraction apparatus you have constructed for purposes of measuring interference patterns. The searchlights are equally bright and at about the same distance (10 km); they are placed 3 km apart from each other. The apparatus consists of a filter that passes monochromatic light of wavelength lambda -- where lambda can be selected anywhere within the range 100 nm to 1 micron -- followed by a flat surface with a pair of slits whose separation is also adjustable (call this separation d). The normal to the slit surface points to the midpoint between the searchlights.

a) Find the total intensity as a function of d and angle (theta) away from this normal, for the interference pattern of the light that passes through the slits.

b) How must one change d so as to produce patterns with the same spatial frequency for wavelengths that lie at the extremes of the range of human vision?

Solution:

a) To arrive at the correct solution the student must first note which sources are coherent. Light from each beam falls on each of the two slits, and so the light from each beam emerging from the two slits is coherent. But the two beams are not coherent sources with respect to each other. So we have a two-slit interference pattern for each beam, centered on the line joining the beam and the slits. These two patterns then add incoherently (that is, the intensities add). The student should write down the expression for intensity as a function of angle that is appropriate for two-slit diffraction for each beam, and then the expression for the sum of the intensities of the two patterns. (See solution on next page to the “Rigel and Betelgeuse” problem for specific formulae -- the two searchlights are separated in angle by about 17 degrees, as in that problem.)

b) As the wavelength increases, the slit separation must increase to maintain the same spatial frequency. Since the range of human vision is about a factor of two in wavelength, d must change by about a factor of two.
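The incoherent sum in part (a) can be visualized numerically; the slit separation d and filter wavelength below are illustrative choices, and the sources are placed at ±arctan(1.5/10) ≈ ±8.5 degrees from the normal:

```python
import numpy as np

lam = 500e-9                   # filter wavelength, within the selectable range
d = 50e-6                      # slit separation -- illustrative choice
alpha = np.arctan(1.5 / 10.0)  # each searchlight sits ~8.5 deg off the normal

theta = np.linspace(-0.05, 0.05, 5001)   # viewing angle from the normal, rad

def two_slit(src_angle):
    # Two-slit pattern from a single (self-coherent) source at angle src_angle
    return np.cos(np.pi * d * (np.sin(theta) - np.sin(src_angle)) / lam) ** 2

I = two_slit(alpha) + two_slit(-alpha)   # incoherent sources: intensities add
print(f"fringe spacing ~ lambda/d = {lam/d*1e3:.0f} mrad")
print(f"summed intensity ranges over {I.min():.2f} .. {I.max():.2f}")
```

Because the two cos^2 patterns have the same fringe spacing but a fixed relative phase, their incoherent sum is a fringe pattern of reduced modulation depth riding on a constant background.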


  3. Like many optical systems, the human eye suffers from chromatic and spherical aberrations, but there are characteristics of the system that serve to limit the image-quality limitations due to those aberrations. Describe the causes of chromatic and spherical aberration in the visual system and discuss the characteristics and methods that serve to maintain the quality of the perceived image.

[Answer should discuss the fact that index of refraction is f(λ), so dispersion takes place in cornea, crystalline lens, aqueous and vitreous humors; spherical surfaces are not ideal shape for the s/s’ conjugates, and marginal rays are refracted too strongly wrt paraxial rays.

Chromatic Aberration:

  1. When sufficient light is available, the pupil constricts, limiting the aperture and the circle-of-confusion due to chromatic aberration.
  2. There are far fewer S cones in the retina than M&L cones, so the short-wavelength image (which is significantly blurred by chromatic aberration) is under-sampled compared to the rest of the spectrum.
  3. The macula lutea (‘yellow spot’) blocks blue light over the ~5° of central retina, limiting further blurring due to short wavelengths in central vision.
  4. There is a very small (~0.5°) region in the very center of the retina that has no S cones at all.

Spherical Aberration:

  1. When sufficient light is available, the pupil constricts, limiting the aperture and therefore the spherical aberration.
  2. The cornea is not spherical; at the point where it joins with the sclera the radius gradually increases, forming an aspherical surface with lower power at the periphery (compensating for spherical aberration).
  3. The crystalline lens is a gradient-index (GRIN) material, with the index greatest at the center (compensating for spherical aberration).
  4. The Stiles-Crawford Effect: Cones in the fovea are directionally selective; light entering the center of the pupil has a higher relative luminous efficiency than marginal rays (which are more likely to be absorbed in the retina before reaching the photoreceptors).]
  4. Suppose you are allowed only one input to an imaging system that is known to be linear and shift-invariant. That input function f(x) is given below:

Let the system MTF be given by H(xi), where xi stands for the spatial frequency. Assume that you have the system output available to you for analysis. Decide whether you can determine the system MTF at only one frequency or at many other discrete frequencies as well. Justify your answer and be as quantitative as possible. If you can determine the system MTF at other discrete frequencies, what are those possible frequencies, and how will you determine the MTF?

Solution: