A New Model for Measuring Object Shape using non-collimated fringe-pattern projections

B A Rajoub, D R Burton, M J Lalor, S A Karout

General Engineering Research Institute – Liverpool John Moores University - UK

Abstract. Successful measurement of object shape using structured-light optical techniques depends on various factors, most importantly the final stage of relating the measured unwrapped phase distribution to the object height. Although various phase-to-height models exist in the literature, the different approaches employ numerous assumptions and simplifications which can render the derived model inaccurate or very sensitive to parameter variations. This paper presents a new approach for deriving the true analytic phase-to-height model for non-collimated projected fringes. The 3-D spatial phase distribution of the pattern over the object surface is first evaluated and then transformed to its 2-D version via the camera's 2-D image-mapping transformation. The phase information stored in the camera images is therefore expressed in terms of the camera, projector, and object variables. Complete measurement requires evaluation of the x, y, and z coordinates of the measured samples over the object surface. Unlike existing approaches, the proposed approach is universal in the sense that it is not restricted to particular optical arrangements. It will therefore be very useful in helping us understand the effects of system parameters on the measurement outcome.

1. Introduction

A wide range of structured light optical techniques for 3-D measurement of object shape based on a variety of projection and imaging principles have been discussed in the literature [1-11]. In this paper we are mainly interested in structured light systems based on non-collimated fringe-pattern projections using digital projectors.

It is well known that when a periodic fringe pattern is projected onto an object surface and observed from a different viewing angle, a phase-modulated pattern is formed. The amount of phase modulation is obviously a function of the object height variations. Therefore, in order to extract the surface height, the phase information must be extracted (demodulated) from the intensity images and then related to the required object height.

Various approaches have been adopted in the literature in order to relate the fringe phase to the surface height. Usually, the phase-to-height model is obtained by relating the fringe displacements to the equivalent phase differences. Since the fringe displacements are proportional to the height variations the model can be derived in a straightforward manner.

In this paper, we are interested in finding a more generic approach that could be used to derive the true analytic phase-to-height model for non-collimated fringe pattern projections and that can work with arbitrary optical arrangements.

This paper is organized as follows: Section 2 presents the proposed model, including the mathematical derivation of the true phase distribution in the object's world coordinates as well as in the camera image coordinates for arbitrary viewing angles. Experimental results are presented in Section 3. Finally, Section 4 presents discussion and conclusions.

2. Proposed Model

2.1. 2D based approach

In this section we present the derivation of our model based on 2D geometry. We have previously used this approach for collimated projection applications and showed that it is consistent [12]. Figure 1 shows the geometrical arrangement of a typical non-collimated projection of sinusoidal fringes. The line P is called the projector's phase line or phase axis. It lies in the yz-plane, making an angle θ with the y-axis. A lens with a focal point located at (yf, zf) is placed in front of the phase axis to produce non-collimated projections of structured light. For non-collimated projections, the emitted light rays pass through the projector's lens centre, where each ray has a phase value depending on its source location on P. The phase received by any point C(yi, zi) on the object surface can be calculated by tracing the light ray illuminating this point back to its source location on P, i.e., point B in Figure 1 with coordinates (y′, z′). From the figure we can write the equation for the phase axis P as

z′ = z0 + m (y′ − y0)    (1)

where (y0,z0) is the projection centre and m is the slope of P.

From triangle ABC we can write

(2)

from which

(3)

Equations (1) and (3) are linearly independent and can be solved for y′ and z′ to get

(4)

(5)

The phase value of object point C(yi, zi) is the same as the phase of point B(y′, z′) on the phase axis; hence, from (4) and (5) we can write

(6)

where

(7)

(8)

Equation (6) gives us the actual 3-D phase value for any point (x,y,z) on the object surface.
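The back-tracing behind equations (1)-(6) can be sketched numerically. The following Python sketch intersects the ray through the lens centre with the phase axis and assigns a phase proportional to the signed distance of the source point from the projection centre; the parameter values and the linear phase assignment along the axis are illustrative assumptions, not the paper's exact constants (7)-(8).

```python
import numpy as np

def trace_phase_2d(y_i, z_i, y_f, z_f, y0, z0, m, omega):
    # Back-trace the ray through the lens centre (y_f, z_f) that
    # illuminates object point C(y_i, z_i), and intersect it with the
    # phase axis  z = z0 + m * (y - y0)  of equation (1).
    dy, dz = y_i - y_f, z_i - z_f
    t = (z0 + m * (y_f - y0) - z_f) / (dz - m * dy)
    y_src, z_src = y_f + t * dy, z_f + t * dz       # source point B
    # Assume a linear phase along the axis, measured as the signed
    # distance of B from the projection centre (y0, z0).
    s = np.sign(y_src - y0) * np.hypot(y_src - y0, z_src - z0)
    return omega * s, (y_src, z_src)
```

For a horizontal phase axis (m = 0) at z0 = 1, a lens centre at (0, 2), and an object point at (1, 0), the traced source point is (0.5, 1), halfway along the ray, as expected from similar triangles.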

If a camera is used to capture the intensity variations over the object surface, the resulting images will contain side effects due to camera operation, and the phase-to-height relation in (6) needs to be modified in order to relate the phase distribution of the camera images to the object height. This is because the camera actually stores a mapped version of the scene. Without loss of generality, assuming a pinhole perspective image mapping, a point in the world coordinates can be mapped to its image in the camera coordinates using

(9)

The registered phase in the camera image, φ(Yi, Z0), is thus equal to the phase of the mapped 3-D object point, φ(yi, zi). Substituting yi from (9) into (6) results in

(10)

The height function in terms of camera world coordinates is thus

(11)

This equation relates the phase at any pixel (Yi, Z0) in the camera image to the object height through the projection and viewing parameters. It can be shown that if the camera and projector are allowed to rotate arbitrarily, the phase-to-height relationship can only be obtained in a constrained-solution sense. This renders 2D modelling approaches of limited use, since they provide only partial information; in addition, other sources that might affect the model, including lens aberrations and optimised system constants, need to be considered. In the following section we present a more generalised 3D approach.
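The 2D pixel-to-height relationship of equations (9)-(11) can be illustrated numerically. The sketch below assumes a hypothetical layout (pinhole at the origin, image plane at z = Z0), back-projects a pixel to a world point, evaluates its phase by ray-tracing to the phase axis as in section 2.1, and then inverts the model for the height z by bisection; none of these specific placements come from the paper.

```python
import numpy as np

def phase_at_pixel(z, Y_i, Z0, y_f, z_f, y0, z0, m, omega):
    # Back-project pixel Y_i to the world point (Y_i * z / Z0, z),
    # assuming a pinhole at the origin with image plane z = Z0
    # (a hypothetical layout, not the paper's equation (9)).
    y = Y_i * z / Z0
    # Back-trace the illuminating ray to the phase axis, as in section 2.1.
    dy, dz = y - y_f, z - z_f
    t = (z0 + m * (y_f - y0) - z_f) / (dz - m * dy)
    y_src, z_src = y_f + t * dy, z_f + t * dz
    # Linear phase along the axis, measured from the projection centre.
    return omega * np.sign(y_src - y0) * np.hypot(y_src - y0, z_src - z0)

def height_from_phase(phi, Y_i, Z0, params, z_lo=1e-3, z_hi=0.99):
    # Numerically invert the phase model by bisection on the height z;
    # assumes the phase is monotonic in z over [z_lo, z_hi].
    f = lambda z: phase_at_pixel(z, Y_i, Z0, *params) - phi
    for _ in range(60):
        mid = 0.5 * (z_lo + z_hi)
        if f(z_lo) * f(mid) <= 0.0:
            z_hi = mid
        else:
            z_lo = mid
    return 0.5 * (z_lo + z_hi)
```

A closed-form inverse, as in (11), is of course preferable in practice; the numerical inversion merely demonstrates that the pixel phase determines the height once the system constants are known.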

2.2. 3D based approach

In order to generalise the solution we need a plane with arbitrary orientation and centre location. Let Π be a plane lying in the xz-plane with its centre at the origin and a unit normal n = (0, 1, 0), so that the equation of this plane is n · r = 0. If this plane is rotated clockwise by arbitrary angles θ, ζ, and α around the coordinate axes x, z, and y, respectively, and its centre is then translated by r0, we can represent any arbitrary plane mathematically by utilising the appropriate geometric transformations. The plane rotation and translation result in a new normal vector n′ and plane centre r0. The geometric transformation associated with this operation is generally of the form

(12)
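The rotation-and-translation step in (12) can be sketched with elementary rotation matrices applied to the initial normal (0, 1, 0). The composition order and the sign convention of the rotations are assumptions here, since the text does not fix them.

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def transform_plane(theta, zeta, alpha, r0):
    # Rotate the grating plane (initially the xz-plane, normal (0, 1, 0))
    # by theta, zeta, alpha about the x, z, y axes, then translate its
    # centre by r0.  The composition order and the rotation sign
    # convention are assumptions; the text does not fix them.
    R = rot_y(alpha) @ rot_z(zeta) @ rot_x(theta)
    n_new = R @ np.array([0.0, 1.0, 0.0])   # transformed unit normal n'
    return n_new, np.asarray(r0, dtype=float)
```

For example, a rotation of θ = π/2 about x alone carries the normal from the y-axis onto the z-axis, turning the grating plane into a plane of constant z, the configuration used for the camera in section 2.2.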

Therefore, the new equation of the plane becomes n′ · (r − r0) = 0. A ray illuminating a point ri on the object surface passes through two distinct points: the lens centre F(xf, yf, zf) and the illuminated point ri(xi, yi, zi). Such rays can thus be represented by the parametric equation

r(t) = F + t (ri − F)    (13)

The source point can be traced back by solving the equation describing the intersection between this ray and the projection plane whose unit normal is n′. The intersection point is given by

rs = F + [n′ · (r0 − F) / n′ · (ri − F)] (ri − F)    (14)
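The ray-plane intersection of equations (13) and (14) is straightforward to verify numerically; a minimal sketch follows, with all coordinates hypothetical.

```python
import numpy as np

def trace_source_point(r_i, F, n, r0):
    # Parametric ray of equation (13): r(t) = F + t * (r_i - F).
    # Intersect it with the grating plane n . (r - r0) = 0 to recover
    # the source point, in the spirit of equation (14).
    r_i, F, n, r0 = (np.asarray(v, dtype=float) for v in (r_i, F, n, r0))
    d = r_i - F                        # ray direction
    t = n.dot(r0 - F) / n.dot(d)       # intersection parameter
    return F + t * d
```

For a grating plane y = 0, a lens centre at (0, 2, 0), and an object point at (1, 1, 0), the ray continues through the object point to strike the plane at (2, 0, 0).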

Figure 1. Problem of relating phase-to-height in 2D

Expanding (14) in terms of xi, yi and zi we get

(15)

where the g’s, l’s, b’s, and m’s are projection constants. It should be noted that, since the physical phase in the projector’s LCD is independent of the grating orientation, we can reduce the complexity of the projector’s source point by reversing the rotational and translational geometric transformations involved in (12). This can be done by using

(16)

The new source point lies in the normalised xz plane and is given by

(17)

where the h’s and v’s are the new projection constants. The projector projects a fringe pattern whose intensity is based on the 2D phase distribution over the grating plane. This 2D phase function can take any arbitrary form. For now, let us consider only three types of phase functions, producing horizontal fringes, vertical fringes, and circular fringes, respectively.

Let the phase of any point on the projector’s grating be a function of the Euclidean distance between that point and the grating’s origin. According to (17) we can write

(18)

(19)

(20)

where ψx, ψz, and ψxz denote the phase functions of horizontal fringes, vertical fringes, and circular fringes, respectively. The quantities ωx, ωz, and ωxz are the angular frequencies of the projected patterns in radians. The projected phase-to-height models can thus be obtained by solving (18)-(20) for the height zi.
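The three pattern types of equations (18)-(20) can be sketched as phase functions over the normalised grating plane. The exact functional forms below are assumptions: a phase varying along a single axis for straight fringes, and a radial phase for circular fringes.

```python
import numpy as np

def fringe_phase(x, z, omega, kind):
    # Phase at grating-plane point (x, z) for the three pattern types of
    # equations (18)-(20); the forms are illustrative assumptions.
    if kind == "horizontal":          # psi_x: phase varies along x
        return omega * x
    if kind == "vertical":            # psi_z: phase varies along z
        return omega * z
    if kind == "circular":            # psi_xz: phase varies radially
        return omega * np.hypot(x, z)
    raise ValueError(kind)
```

Rendering cos(fringe_phase(x, z, ω, kind)) over a grid of grating-plane points produces the corresponding sinusoidal pattern.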

One step remains: including the camera effects, in other words, expressing the phase-to-height relationship in terms of the camera and projector constants. It can be shown that (analogous to the projector case) the camera’s 3D mapping transformation is of the form

(21)

By forcing one of the camera coordinates to become a constant, for example assuming the camera lies in the xy-plane so that the height of the image plane is constant at zc = Z0, then θ = π/2 and ζ = α = 0; therefore m1 = m2 = 0 and m3 = −1:

(22)

Note that this form is unhelpful on its own, since it is not possible to solve it for xi and yi. Therefore, we can return to the normalised focal-plane equation (i.e., a form similar to (17), with θ = π/2 and ζ = α = 0 in mind) and write

(23)

Solving for xi and yi and substituting them into (18), we can then solve for zi to get

(24)

This equation describes the generic form of the phase-to-height model. Note that, unlike existing models, it directly relates the absolute phase, not the phase difference, to the object height.

3. Experimental results

In order to use this model in practice, three steps remain: 1) extract the camera and projector parameters (i.e., position and relative rotation); 2) include the effects of lens aberrations (i.e., radial and tangential distortions); and 3) optimise the phase-to-height model. To do this, we perform an elaborate calibration of the projector and camera, along with optimising the phase-to-height system constants using genetic algorithms. Here we are actually optimising the phase-to-height solution for the reference plane, which is of the form z(φ) = 0. This is in effect an empirical calibration of the measuring system. It is necessary in order to compensate for other sources of error, for example, phase-measurement errors due to environmental illumination, object properties, and projection and imaging contrast variations, which were not accounted for in the geometric calibration of the projector and camera. It should be noted that this outcome is extremely important: for example, in [13] the system calibration was based on optimising an equation that is only approximately correct for a certain 2D arrangement, whereas in this paper the correct generic 3D form of the model is used. Figure 2 shows the measured outcome of this approach.
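The reference-plane optimisation can be illustrated with a toy genetic algorithm. The model form, the constants, and the GA settings below are hypothetical stand-ins, not the paper's actual equation (24) or calibration procedure; the sketch only shows the principle of driving the reference-plane height residual z(φ) toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unwrapped phases measured on the flat reference plane.
phi_ref = np.linspace(0.0, 20.0, 50)

def height(phi, c):
    # Toy 2-constant phase-to-height form z = c0*phi / (c1 - phi);
    # a stand-in, not the paper's model.
    return c[0] * phi / (c[1] - phi)

def fitness(c):
    # On the reference plane the true height is zero everywhere,
    # so the residual height itself is the calibration error.
    return np.mean(height(phi_ref, c) ** 2)

def evolve(pop_size=40, gens=60):
    # Initial population of constant vectors (c0, c1).
    pop = rng.uniform([-1.0, 25.0], [1.0, 60.0], size=(pop_size, 2))
    for _ in range(gens):
        scores = np.array([fitness(c) for c in pop])
        elite = pop[np.argsort(scores)[: pop_size // 2]]    # selection
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        children = 0.5 * (parents[:, 0] + parents[:, 1])    # crossover
        children += rng.normal(0.0, 0.05, children.shape)   # mutation
        pop = children
    return pop[np.argmin([fitness(c) for c in pop])]
```

In this toy setting the GA drives c0 toward zero, flattening the recovered reference plane; in the real system the same residual-minimisation principle tunes the full set of phase-to-height constants.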

Figure 2. A 3D Measurement outcome using the optimised model

4. Conclusions

In this paper we presented a new generalised approach for modelling the phase-to-height relationship. Due to length constraints, much detail has been omitted, but the principle behind the approach should be clear. The model shows that 2D models are valid only in a local sense and that 3D approaches are necessary. Camera and projector parameters, including lens aberrations, were obtained using geometric calibration techniques, while remaining error sources were minimised using genetic algorithms to optimise the system constants from reference-plane data; hence, elaborate calibration objects are avoided.

References

[1] M. Takeda and K. Mutoh, "Fourier transform profilometry for the automatic measurement of 3-D object shapes," Applied Optics, vol. 22, pp. 3977-82, 1983.

[2] D. R. Burton and M. J. Lalor, "Multichannel Fourier fringe analysis as an aid to automatic phase unwrapping," Applied Optics, vol. 33, pp. 2939-48, 1994.

[3] G. S. Spagnolo, G. Guattari, C. Sapia, D. Ambrosini, D. Paoletti, and G. Accardo, "Three-dimensional optical profilometry for artwork inspection," Journal of Optics A: Pure and Applied Optics, vol. 2, pp. 353-361, 2000.

[4] G. S. Spagnolo and D. Ambrosini, "Diffractive optical element-based profilometer for surface inspection," Optical Engineering, vol. 40, pp. 44-52, 2001.

[5] G. S. Spagnolo, R. Majo, D. Ambrosini, and D. Paoletti, "Digital moire by a diffractive optical element for deformation analysis of ancient paintings," Journal of Optics A: Pure and Applied Optics, vol. 5, pp. S146-S151, 2003.

[6] Q. C. Zheng and R. J. Gu, "Triangulation of point cloud data for stamped parts," in Seventh ISSAT International Conference on Reliability and Quality in Design. Piscataway: International Society of Science and Applied Technologies, 2002, pp. 205-208.

[7] S. H. Wang, C. G. Quan, C. J. Tay, I. Reading, and Z. P. Fang, "Measurement of a fiber-end surface profile by use of phase- shifting laser interferometry," Applied Optics, vol. 43, pp. 49-56, 2004.

[8] D. N. Borza, "High-resolution time-average electronic holography for vibration measurement," Optics and Lasers in Engineering, vol. 41, pp. 515-527, 2004.

[9] C. Quan, Y. Fu, and C. J. Tay, "Determination of surface contour by temporal analysis of shadow moire fringes," Optics Communications, vol. 230, pp. 23-33, 2004.