
(E) Appendix: Segmentation software


In-house developed semi-automated software (Dr. F. Admiraal-Behloul, Division of Image Processing (LKEB), Department of Radiology, Leiden University Medical Center, The Netherlands) was used to obtain intracranial, whole brain and white matter hyperintensity (WMH) volumes. A description of the algorithms incorporated in the segmentation software is provided below.

Intracranial/whole brain volume.

Segmentation of the intracranial and whole brain volume is performed using coronal T1-weighted images. The image segmentation algorithm consists of 2 main steps.

1.  Intracranial mask detection in T1-weighted images

Using fuzzy clustering,1-3 the voxels of the MR image are clustered into two sets: foreground and background. The minimum membership degree for the foreground cluster is set to 0.3 in order to include the CSF in the foreground cluster. Morphological filters are then applied to the foreground mask, followed by 3D region growing that isolates the intracranial (IC) compartment from the skull and skin. Subsequently, the IC mask is edited manually to remove the remaining non-brain structures (e.g. eyeballs).
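As an illustration of this step, a minimal Python sketch is given below. It uses a plain two-class fuzzy c-means on voxel intensities together with scipy.ndimage; the helper names (fcm_memberships, intracranial_mask), the random initialization, and the use of the largest 3D connected component as the region-grown IC compartment are assumptions made for the sketch, not the actual LKEB implementation.

```python
import numpy as np
from scipy import ndimage

FOREGROUND_MIN_MEMBERSHIP = 0.3  # minimum foreground membership, as in the text (keeps CSF in the foreground)

def fcm_memberships(values, n_clusters=2, m=2.0, n_iter=50):
    """Plain fuzzy c-means on 1-D intensity values; returns (cluster centers, memberships)."""
    rng = np.random.default_rng(0)
    centers = rng.choice(values, size=n_clusters, replace=False).astype(float)
    for _ in range(n_iter):
        dist = np.abs(values[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / dist ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)                        # membership degrees per voxel
        centers = (u ** m * values[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return centers, u

def intracranial_mask(image):
    """Foreground/background fuzzy clustering, morphological filtering, and a
    connected-component step standing in for 3D region growing of the IC compartment."""
    values = image.reshape(-1).astype(float)
    centers, u = fcm_memberships(values, n_clusters=2)
    foreground_cluster = int(np.argmax(centers))                 # brighter cluster = foreground
    foreground = (u[:, foreground_cluster] >= FOREGROUND_MIN_MEMBERSHIP).reshape(image.shape)
    foreground = ndimage.binary_closing(foreground, iterations=2)
    foreground = ndimage.binary_opening(foreground, iterations=2)
    labels, n = ndimage.label(foreground)                        # 3D connected components
    if n == 0:
        return foreground
    sizes = ndimage.sum(foreground, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)                 # largest component = IC compartment
```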

2.  CSF detection in T1

The voxels within the IC mask are classified into two clusters. The voxels belonging to the cluster with the lower signal intensity are classified as CSF. The whole brain volume is obtained by subtracting the CSF from the IC. The whole brain, CSF and IC volumes are automatically computed and saved for further statistical analysis.
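A corresponding sketch of this step, reusing the hypothetical fcm_memberships helper from above; assigning each voxel to its highest-membership cluster and the millilitre conversion are illustrative choices.

```python
def csf_and_brain_masks(image, ic_mask):
    """Two-cluster classification inside the IC mask; the darker cluster is taken as CSF,
    and whole brain = IC minus CSF."""
    values = image[ic_mask].astype(float)
    centers, u = fcm_memberships(values, n_clusters=2)   # hypothetical helper from the sketch above
    csf_cluster = int(np.argmin(centers))                # lower signal intensity = CSF
    csf_mask = np.zeros(image.shape, dtype=bool)
    csf_mask[ic_mask] = np.argmax(u, axis=1) == csf_cluster
    brain_mask = ic_mask & ~csf_mask
    return csf_mask, brain_mask

def volume_ml(mask, voxel_size_mm):
    """Binary-mask volume in millilitres, given the voxel dimensions in mm."""
    return mask.sum() * float(np.prod(voxel_size_mm)) / 1000.0
```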

White matter hyperintensities (WMH).

Segmentation of the WMH volume is performed using axial dual fast spin-echo sequences. The image segmentation algorithm consists of 3 main steps.

1.  Intracranial mask detection in dual echo (DE) images

The proton density (PD) image is used to create an intracranial mask. Using fuzzy clustering, the voxels of the PD image are clustered into two sets: foreground and background. Morphological filters are then applied to the foreground mask, followed by 3D region growing that isolates the IC compartment from the skull and skin. Remaining non-brain structures (e.g. eyeballs) are removed manually.
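Under the same assumptions as the T1 sketch above, the hypothetical intracranial_mask helper would simply be applied to the PD volume (pd_image is an assumed variable name):

```python
# IC mask for the dual-echo data, derived from the PD image with the same
# hypothetical helper sketched for the T1 pipeline above.
ic_mask_de = intracranial_mask(pd_image)
```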

2.  CSF detection in DE

A ratio image, (PD − T2)/PD × 1000, is computed to counter the effect of signal inhomogeneity and to increase the CSF/WMH contrast (Figure 1b). The CSF is extracted from this ratio image using fuzzy clustering: the voxels are classified into two clusters, and the voxels within the IC compartment belonging to the cluster with the lower signal intensity are classified as CSF (Figure 1c). Subsequently, the ventricles are detected within the CSF using 3D region growing and prior knowledge about their relative size. The resulting ventricle mask (VM) allows discrimination between subcortical and periventricular lesions.
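A sketch of this step under the same assumptions as above; approximating the "prior knowledge about their relative size" by keeping the largest connected CSF component is a deliberate simplification.

```python
def ratio_image(pd_image, t2_image):
    """Ratio image (PD - T2) / PD x 1000: suppresses signal inhomogeneity and
    increases the CSF/WMH contrast."""
    return (pd_image - t2_image) / (pd_image + 1e-6) * 1000.0

def csf_and_ventricle_masks(ratio, ic_mask):
    """CSF = darker of two fuzzy clusters inside the IC; the ventricle mask is
    approximated here by the largest connected CSF component."""
    csf_mask, _ = csf_and_brain_masks(ratio, ic_mask)    # hypothetical helper from above
    labels, n = ndimage.label(csf_mask)
    if n == 0:
        return csf_mask, csf_mask
    sizes = ndimage.sum(csf_mask, labels, index=range(1, n + 1))
    ventricle_mask = labels == (int(np.argmax(sizes)) + 1)
    return csf_mask, ventricle_mask
```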

3.  Lesion detection

The T2 image is used, in conjunction with the masks generated from the PD image (step 1) and the ratio image (step 2), to detect the lesions. The total WMH volume consists of periventricular lesions (PVL) and subcortical lesions (SCL).

Within the IC compartment, the voxels of the T2 image are clustered into three clusters. The brightest cluster consists of CSF + WMH + CSF/GM partial volume (M1). From M1, the WMH and partial volume are isolated using the CSF mask generated in step 2. Partial volume and PVL are both connected to the CSF. Therefore, to discriminate partial volume from PVL, we first isolate the PVL by 3D region growing from the ventricles (VM). The partial volume voxels are then isolated by region growing seeded in the CSF mask minus the ventricle mask (CSF − VM), and reattributed to the CSF. The remaining WMH, connected to neither the ventricles nor the CSF, are considered subcortical lesions (SCL).
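A sketch of this discrimination, again with the hypothetical helpers above; connected-component labelling stands in for 3D region growing, and the three-cluster classification reuses the same fuzzy c-means.

```python
def grow_from_seed(candidates, seed):
    """Keep the candidate voxels whose connected component touches the seed mask
    (a connected-component stand-in for 3D region growing)."""
    labels, n = ndimage.label(candidates | seed)
    touched = np.unique(labels[seed])
    touched = touched[touched > 0]
    return candidates & np.isin(labels, touched)

def detect_lesions(t2_image, ic_mask, csf_mask, ventricle_mask):
    """Brightest of three fuzzy clusters = CSF + WMH + partial volume (M1);
    then split into periventricular (PVL) and subcortical (SCL) lesions."""
    values = t2_image[ic_mask].astype(float)
    centers, u = fcm_memberships(values, n_clusters=3)    # hypothetical helper sketched earlier
    brightest = int(np.argmax(centers))
    m1 = np.zeros_like(ic_mask)
    m1[ic_mask] = np.argmax(u, axis=1) == brightest
    candidates = m1 & ~csf_mask                           # WMH + partial volume
    # periventricular lesions: region growing from the ventricle mask
    pvl = grow_from_seed(candidates, ventricle_mask)
    # partial volume: connected to CSF outside the ventricles; reattributed to CSF
    partial_volume = grow_from_seed(candidates & ~pvl, csf_mask & ~ventricle_mask)
    # everything connected to neither ventricles nor CSF is subcortical
    scl = candidates & ~pvl & ~partial_volume
    return pvl, scl
```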

Lesions smaller than 6 voxels are automatically excluded. Moreover, elongated structures with a cross-section area smaller than 6 voxels per slice are automatically excluded. In this way, the numbers of false positives and false negatives are considerably reduced.
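A sketch of this size-based exclusion; treating an "elongated structure with a cross-section area smaller than 6 voxels per slice" as a component whose largest axial cross-section stays below 6 voxels is one possible reading of the rule.

```python
MIN_LESION_VOXELS = 6            # "lesions smaller than 6 voxels are automatically excluded"
MIN_CROSS_SECTION_VOXELS = 6     # per-slice cross-section threshold for elongated structures

def filter_small_lesions(lesion_mask, slice_axis=0):
    """Remove connected components below the voxel threshold, and elongated components
    whose cross-section never reaches the per-slice threshold (one reading of the rule)."""
    labels, n = ndimage.label(lesion_mask)
    keep = np.zeros_like(lesion_mask)
    other_axes = tuple(a for a in range(lesion_mask.ndim) if a != slice_axis)
    for i in range(1, n + 1):
        component = labels == i
        if component.sum() < MIN_LESION_VOXELS:
            continue                                     # too small overall
        per_slice = component.sum(axis=other_axes)       # cross-section area per slice
        if per_slice[per_slice > 0].max() < MIN_CROSS_SECTION_VOXELS:
            continue                                     # thin, elongated structure
        keep |= component
    return keep
```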

The algorithm is integrated into a software package that offers reviewing and mask-editing tools (Figure 2). The automatically detected lesions (Figure 1d) are edited manually according to a protocol established by an experienced neuroradiologist (M.A. van Buchem). The protocol specifies explicit rules for the selection of PVL and SCL, minimizing the erroneous selection of CSF, gray matter and Virchow-Robin spaces. Finally, the IC, CSF and WMH volumes, the number of WMH, and the size and position of each lesion are automatically computed and saved for further statistical analysis. For the present study, PVL and SCL were summed to obtain the total WMH volume.

The average time required to analyze a scan (including automatic lesion detection and manual editing) was 25 minutes. Among other applications, the software has been used successfully to analyze 1054 brains for a multicenter study, the PROspective Study of Pravastatin in the Elderly at Risk (PROSPER).4

Reliability

Nine patients underwent scan-rescan procedures, with and without repositioning, to assess the reliability of the automated lesion detection (intraclass correlation coefficients (ICC) >0.90 for SCL and >0.75 for PVL). In addition, ten randomly selected brains were edited twice to assess the intra- and interrater reliability (intrarater: ICC = 1.0 for SCL and PVL; interrater: ICC = 0.99 for SCL and PVL).


References

1.  Bezdek JC. Pattern Recognition with Fuzzy Objective Function Algorithms. New York: Plenum; 1981.

2.  Abe S, Thawonmas R. A fuzzy classifier with elliptical regions. IEEE Transactions on Fuzzy Systems 1997; 5:358-368.

3.  Gath I, Geva AB. Unsupervised optimal fuzzy clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence 1989; 11(7):773-781.

4.  Shepherd J, Blauw GJ, Murphy MB, et al. Pravastatin in elderly individuals at risk of vascular disease (PROSPER): a randomised controlled trial. Lancet 2002; 360(9346):1623-1630.


Legends

Figure 1

Automatic CSF and lesion detection before manual correction. (a) T2-weighted image, (b) ratio image ((PD − T2)/PD × 1000), (c) CSF segmentation, (d) lesion detection.

Figure 2

Graphical user interface for automatic lesion detection and manual editing.
