A comparison of potential Raven centroiding algorithms

David Andersen

November 5, 2012

1.  Introduction

The initial performance modeling for Raven was conducted using MAOS, a C-based AO simulation tool built by L. Wang for simulating MCAO on TMT. Its advantages are that it is very fast and has a well-established tomographic reconstructor. However, while it simulates NFIRAOS on TMT very well, it is somewhat difficult to configure; in particular, it is difficult to address the problem of open-loop centroiding using this code. We do account for open-loop errors in both the Raven CoDR report and the Raven modeling paper (Andersen et al. 2012), but which open-loop centroiding algorithm to use has remained an open question. As a reminder, we summarize the wavefront error terms associated with the WFSs and DMs in Table 1. In particular, we highlight the “Sampling” error, which arises from the fact that we do not oversample the spots on the WFS. In Table 1, this error assumed the use of a matched filter algorithm (described below). In this paper, we explore ways of minimizing this error (and WFS noise) using different centroiding algorithms.

Table 1: Raven Error Budget

WFE term / WFE (nm RMS)
T/T removed Tomography / 175
DM Fitting σ² ≈ 0.25 (d0/r0)^(5/3) / 155
Aliasing σ² ≈ 0.1 (d0/r0)^(5/3) / 103
Sampling / 72
Noise (m=14; Fs=180 Hz) / 95

It is also important to note the limitations of the Raven WFSs. We are using the Andor iXon cameras with 128x128 pixels. As part of our trade study, we determined that we need a ~5” FOV and at least 10x10 subapertures to achieve, respectively, the dynamic range and sensitivity needed to meet the overall system requirements. This means that there will be only ~12x12 pixels per subaperture and that the pixel size needs to be ~0.4”/pixel. Since the spot size on the WFS will most often be smaller than 0.8”, the spots will be under-sampled. Given Table 1, we expect ~70 nm of WFE due to this under-sampling, but this will be algorithm dependent. We need to determine which centroiding algorithm works best at both high and low S/N.

In this document, we describe simulations carried out to test the sensitivity of different spot centroiding algorithms in the context of the Raven Open-Loop Wavefront Sensors (OL WFSs). In section 2, we describe the basic pixel processing required for use with different centroiding algorithms we explored, and in section 3 we describe the simulations. We present results of the centroiding tests in section 4. We also apply these simulations to help determine whether Raven requires an atmospheric dispersion corrector (ADC) or a blue cutoff filter. We finally summarize our results in section 5.

2.  Pixel processing

We have explored the performance of three basic centroiding algorithms: thresholded center of gravity (tCOG), correlation, and matched filter (MF) centroiding. Thresholded center of gravity is the simplest method and will be the fastest centroiding algorithm computationally. In this flux-threshold variant, the maximum flux is determined in each subaperture, and only those pixels with a flux above some percentage of the peak flux are used for determining a flux-weighted mean. (There should also be a minimum threshold which removes almost all pixels containing only background and read noise, even if this level is greater than the threshold determined from the peak flux.) Correlation centroiding relies on knowledge of the PSF. We describe how we generate a reference PSF from our measurements and then correlate our subaperture images with this reference PSF. The correlation process creates a new image for each subaperture in which all objects with a shape similar to the PSF are highlighted, while structure that does not look like a PSF (shot noise and cosmic rays) is suppressed. The centroid can be determined from the correlation image using a thresholded center-of-gravity technique with a relatively high threshold. Finally, the matched filter also uses knowledge of the PSF to generate slopes, but the MF technique has a limited dynamic range, which may not be well suited to an open-loop system like Raven. We would not expect the MF to work better than the correlation method, but it would be more efficient computationally. We do not explore the performance of the MF in great detail here, but we do describe the pixel processing steps for all three centroiding methods in the sections below.

A. Thresholded Center of Gravity (tCOG)

Figure 1: Data flow for the tCOG centroiding algorithm. Each of the boxes are described below.

A1. Basic Pixel Processing

The following steps are included for all centroiding methods.

Camera

Parameters: frame rate, gain, etc.

Andor iXon 860 128x128 pixel camera running at up to 500 fps. Data are 14-bit.

Convert Raw to Subap

Parameters: Subaperture number, ROI

RTC reads in the pixel stream and starts assembling images for each subaperture. After each subaperture is filled, further computations can be parallelized. There will be of order 80 subapertures per WFS.

Back Subtract

Parameters: Dark Current/Background constant or image (TBD)

Subtract the background pixel by pixel. Values are converted from 14-bit integers to double precision (single should be sufficient). The background flux could be measured by the Realtime Parameter Generator (RPG) or could be an input parameter; the RTC should allow for either possibility.

Flatfield (optional)

Parameters: Flat field frame for subaperture

Divide the subaperture image by the flatfield (pixel-to-pixel variation) map. To speed things up, one could instead multiply pixel by pixel by the inverse of the flatfield. The flatfield may be taken during calibration or could be a theoretical function.

A2. Windowing (optional)

Windowing reduces the number of pixels that are considered in a subaperture by creating a subwindow around an initial guess of the centroid. Windowing should improve the accuracy of the centroid measurements by removing effects of noise peaks far from the WFS spot.

Centroid

Parameters: Method

Initial centroid measurement for the subaperture. Can be correlation, thresholded center of gravity (tCOG), matched filter (MF), or … The X and Y slopes are used in subsequent steps.

If the tCOG method is chosen (baseline), the operations consist of: 1) determining the threshold (usually some fraction of the peak flux); 2) applying that threshold on a pixel-by-pixel basis (2a: meanwhile building a standard deviation of “background” pixels to estimate the noise); 3) for pixels above threshold, summing the fluxes and summing the x and y pixel locations weighted by flux (3a: the summed flux is the signal for the S/N calculation); and 4) calculating the two slopes by dividing the two pairs of sums.
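The four tCOG operations above can be sketched as follows (a minimal NumPy illustration; the function name and interface are ours, not the RTC's):

```python
import numpy as np

def tcog(subap, frac=0.15, min_thresh=0.0):
    """Thresholded center of gravity on one subaperture image."""
    # 1) threshold from the peak flux, with a user-supplied noise floor
    thresh = max(frac * subap.max(), min_thresh)
    mask = subap > thresh
    # 2a) std of below-threshold ("background") pixels estimates the noise
    noise = subap[~mask].std() if (~mask).any() else 0.0
    # 3) flux-weighted sums over pixels above threshold
    ys, xs = np.nonzero(mask)
    flux = subap[mask].sum()          # signal for the S/N calculation
    if flux == 0.0:                   # no pixels above threshold: bad spot
        return np.nan, np.nan, 0.0, noise
    # 4) two slopes from the two pairs of sums
    cx = (xs * subap[mask]).sum() / flux
    cy = (ys * subap[mask]).sum() / flux
    return cx, cy, flux, noise

# Example: a single bright pixel at (x=5, y=6) in a 12x12 subaperture
img = np.zeros((12, 12))
img[6, 5] = 10.0
cx, cy, flux, noise = tcog(img)   # cx = 5.0, cy = 6.0
```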

Report low S/N

Parameters: predicted S/N for subaperture (based on subaperture vignetting, r0, frame rate, NGS magnitude, user input extinction, airmass), number of frames for averaging

Take the time average of the S/N over N frames (specified by the user). If the S/N calculated during the previous centroid step is lower than predicted/allowed, broadcast a message to pub/sub alerting the user of a potential problem. A “smart” fix could be to increase the exposure time slightly.

Round

Parameters: none

Round centroids to integers for use with windowing.

Window

Parameters: window size

Crop the image by a user-specified amount, centered on the initial (rounded) centroid guess. This is useful for reducing the size of the image and, more importantly, for removing noise far from the spot. If the window exceeds the size of the subaperture, use a window as close to the edge as possible and trigger a warning.
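The windowing step, including the edge-clamping behavior described above, can be sketched as (NumPy; an illustration assuming a square subaperture, not RTC code):

```python
import numpy as np

def window(subap, cx, cy, wsize):
    """Crop a wsize x wsize region centered on the rounded centroid (cx, cy).
    Clamps the window to the subaperture edge and flags when clamping occurred
    (the 'trigger warning' case)."""
    n = subap.shape[0]                 # assumes a square subaperture
    assert wsize <= n                  # otherwise: Report Window Failure
    x0 = int(round(cx)) - wsize // 2
    y0 = int(round(cy)) - wsize // 2
    # shift the window back inside the subaperture if it runs off an edge
    x0c = min(max(x0, 0), n - wsize)
    y0c = min(max(y0, 0), n - wsize)
    warn = (x0c != x0) or (y0c != y0)  # window had to be clamped
    return subap[y0c:y0c + wsize, x0c:x0c + wsize], (x0c, y0c), warn
```

The returned corner (x0c, y0c) is what must later be added back to the windowed centroid (step A4).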

Report Window Failure

Parameters: none

User-specified window exceeds dimensions of subaperture.

A3. Centroiding

Determine Threshold

Input: Fraction of peak flux to be used for threshold (typically 15%) and minimum threshold.

Find the maximum of the image and set the threshold to a fraction of this maximum flux. If that value is below the noise, use the second, user-supplied minimum threshold.

tCOG

Input: none

Check each pixel (of possibly windowed image) to ensure it is above threshold (determined from previous step). For all pixels above threshold, calculate flux-weighted x and y centroids.

Report Bad spot

Calculate S/N of spot used to generate measurement in tCOG. If lower than expected, report error. If centroid is near edge of subaperture or window, report error.

A4. Applying Reference Values

Add initial Rounded centroids

If windowing is used, the offset between the center of the subaperture and the window, generated by Round above (A2), needs to be added back to the new centroid measurement.

Determine Reference Slopes

Parameters: Probe location, slopes

The OL WFSs are presented with different field-dependent aberrations as they move. The location of each probe arm helps determine the reference slopes (look-up table in the RPG). The mean tip and tilt of the WFSs will also be used to apply a non-common path aberration (NCPA) correction (a beam with a global tip or tilt will have a different optical footprint in the OL WFSs). Unlike all previous steps, this process is not done one subaperture at a time: all slopes are needed to calculate the average tip/tilt of the beam.

Subtract Reference Slopes

Parameters: none

The reference slopes are subtracted from the slope measurements. Slope measurements at this point may or may not be projected into phase space or onto a modal basis.

B. Correlation Centroiding

Figure 2: Pixel processing and Correlation Centroiding Flow diagram. All boxes are described below.

B1. Basic Pixel Processing

Camera, Convert Raw to Subap, Back Subtract, and Flatfield are described in (A1) above.

B2. Windowing (optional)

Centroid, Report low S/N, Round, Window and Report window failure are described in (A2) above. The Window function is slightly modified as described here:

Window/blkrep

Parameters: window size, drizzling factor

Crop the image by a user-specified amount, centered on the initial (rounded) centroid guess. This is useful for reducing the size of the image and, more importantly, for removing noise far from the spot. If the window exceeds the size of the subaperture, use a window as close to the edge as possible and trigger a warning.

If Drizzle is used, block-replicate the windowed image by the drizzling factor. For example, if the drizzling factor is 2, create a new subaperture image with dimensions 2x the window size, copying each pixel into 4 new sub-pixels. Drizzling only improves the resolution of the reference image – not the current subaperture spot image.
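Block replication is a one-liner in NumPy (a sketch; names are illustrative):

```python
import numpy as np

def blkrep(win, factor=2):
    """Block-replicate a windowed image by the drizzling factor: each camera
    pixel is copied into factor x factor sub-pixels, so the window matches the
    pixel grid of the drizzled reference image."""
    return np.repeat(np.repeat(win, factor, axis=0), factor, axis=1)

img = np.array([[1.0, 2.0],
                [3.0, 4.0]])
big = blkrep(img)   # 4x4 image; the top-left 2x2 block all holds 1.0
```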

B3. Create Reference Image

For Correlation Centroiding and MF centroiding, a reference image is needed. Generating this reference image is not a real time task, as not all images need to be processed. We describe the steps below.

Drizzle (shift/add)

Parameters: drizzling factor

Each spot image is re-sampled onto a finer (by the drizzling factor) grid. The spot is shifted by the fractional measured centroid, and the flux from one camera pixel is divided among multiple subpixels, with the fraction of flux going into each subpixel proportional to the overlap area. The drizzling factor can be 1, which means that the light from a subaperture is shifted and resampled on a grid with the same pixel scale. Our experience indicates that a drizzling factor greater than 2 does not produce reference images with significantly higher spatial resolution.
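A minimal sketch of this shift-and-add resampling (NumPy; our own illustrative implementation, which uses np.roll and therefore wraps at the edges – real code would pad instead):

```python
import numpy as np

def drizzle(spot, dx, dy, factor=2):
    """Shift a spot image by its measured fractional centroid (dx, dy, in
    camera pixels) and resample onto a grid finer by `factor`, splitting each
    pixel's flux among sub-pixels in proportion to overlap area (bilinear)."""
    # resample onto the fine grid, conserving total flux
    fine = np.repeat(np.repeat(spot, factor, axis=0),
                     factor, axis=1) / factor**2
    # fractional shift expressed on the fine grid: integer part + remainder
    sx, sy = dx * factor, dy * factor
    ix, iy = int(np.floor(sx)), int(np.floor(sy))
    fx, fy = sx - ix, sy - iy
    # bilinear split of the flux between the 4 neighboring fine pixels
    out = np.zeros_like(fine)
    for ddy, ddx, w in [(0, 0, (1 - fy) * (1 - fx)), (0, 1, (1 - fy) * fx),
                        (1, 0, fy * (1 - fx)),       (1, 1, fy * fx)]:
        if w > 0:
            out += w * np.roll(np.roll(fine, iy + ddy, axis=0),
                               ix + ddx, axis=1)
    return out
```

Because the bilinear weights sum to one, total flux is conserved, which is what allows many drizzled frames to be co-added into a reference image.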

Window

Parameters: window size

Trim the drizzled image to provide an image with the same dimensions as that produced by the window/blkrep step described in (B2).

Integrate i0

Parameters: # of frames to add, switch to produce 1 reference image or 1 reference image/subaperture

Add drizzled and windowed images from many exposures to create a high signal-to-noise reference image that is not dependent on the instantaneous turbulence.

This task is performed by the RPG, and does not need to be real-time. New reference images will be produced on the order of every ~10 seconds. If some frames are dropped from the integration, that is acceptable.

The user can choose whether to produce a single reference image from the images of all subapertures, which will increase the S/N, or build a different reference image for each subaperture (accounts for diffraction effects and laser elongation in the LGS WFS). In the end, the reference image will be scaled to the mean of the individual spot images.
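The accumulation and final scaling described above can be sketched as follows (a hypothetical helper class, not the RPG implementation):

```python
import numpy as np

class ReferenceBuilder:
    """Accumulate drizzled, windowed spot images into a high-S/N reference i0.
    The number of frames to accumulate comes from the Exposure Time step."""

    def __init__(self, shape):
        self.acc = np.zeros(shape)
        self.n = 0

    def add(self, frame):
        # dropped frames are simply never added; that is acceptable
        self.acc += frame
        self.n += 1

    def i0(self, mean_spot_flux):
        # average the accumulated frames, then scale the result to the
        # mean flux of the individual spot images
        ref = self.acc / self.n
        return ref * (mean_spot_flux / ref.sum())
```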

Exposure Time

Input: User specified S/N for reference image

Uses the guide star magnitude, r0 estimate, and frame rate to calculate the total number of images that should be integrated to reach a sufficient S/N ratio in the final reference image, i0.

Current i0

Store the current reference image for use by the correlation function. The reference images i0 should be saved to DMS. A new reference image will be produced on the order of every 5-10 seconds.

B4. Centroiding

Correlate

Produce the cross-correlation image of the reference image versus the current frame, computed either via FFTs or by brute force.
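The FFT route can be sketched as (NumPy; note that the FFT form is circular, i.e. it wraps around at the subaperture edges):

```python
import numpy as np

def correlate(frame, i0):
    """Cross-correlate a subaperture frame with the reference image i0 via
    FFTs. The peak of the result marks the spot position; fftshift places
    zero lag at the array center."""
    F = np.fft.fft2(frame)
    R = np.fft.fft2(i0)
    corr = np.fft.ifft2(F * np.conj(R)).real
    return np.fft.fftshift(corr)
```

The tCOG step that follows is then applied to this correlation image rather than to the raw spot.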

Determine Threshold

Input: Fraction of peak flux to be used for threshold (typically 15-25%) and minimum threshold.

Find the maximum of the image and set the threshold to a fraction of this maximum flux. If that value is below the noise, use the second, user-supplied minimum threshold. The fractional threshold typically suffices after the correlation is applied, since the S/N of the correlated image is so high.

tCOG

Input: none

Check each pixel (of possibly windowed image) to ensure it is above threshold (determined from previous step). For all pixels above threshold, calculate flux-weighted x and y centroids. We use tCOG rather than fitting a functional form to the correlation peak to increase computational speed and robustness.

Report Bad spot

Input: none

Calculate S/N of spot used to generate measurement in tCOG. If lower than expected, report error. If centroid is near edge of subaperture or window, report error.

B5. Applying Reference Values

Same steps as in (A4) above.

C. Matched Filter (MF) Centroiding

Figure 3: Pixel processing and MF centroiding flow diagram.

C1. Basic Pixel Processing

Camera, Convert Raw to Subap, Back Subtract, and Flatfield are described in (A1) above.

C2. Windowing

Centroid, Report low S/N, Round, Window and Report window failure are described in (B2) above. Windowing is more important for MF centroiding because the MF only works if the spot center is within a FWHM of the center of the reference image; otherwise the MF centroiding accuracy is low. By windowing, we place the spots within a pixel of the center of the window using our best-guess centroid.
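To illustrate why the MF has such limited dynamic range, here is a minimal linearized matched-filter sketch (an assumed textbook-style form, not Raven's MF): slopes come from a least-squares fit of the residual (frame − i0) against the x and y gradients of i0, a linearization that only holds while the spot sits within roughly a FWHM of the reference position.

```python
import numpy as np

def mf_slopes(frame, i0):
    """Linearized matched-filter slope estimate (illustrative sketch).
    Fits frame - i0 to the gradients of the reference image i0."""
    gx = np.gradient(i0, axis=1).ravel()   # d i0 / dx
    gy = np.gradient(i0, axis=0).ravel()   # d i0 / dy
    G = np.column_stack([gx, gy])
    resid = (frame - i0).ravel()
    # least-squares solution minimizing |G s - (frame - i0)|
    s, *_ = np.linalg.lstsq(G, resid, rcond=None)
    # sign convention: a spot shifted by +dx gives frame(x) ~ i0(x - dx),
    # i.e. frame - i0 ~ -dx * d i0/dx, so negate to recover the shift
    return -s[0], -s[1]
```

For a spot shifted well inside a FWHM the fit recovers the shift; once the shift grows, the Taylor expansion behind the fit breaks down and the estimate collapses, which is why windowing must first place the spot near the window center.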