

D.-S. Huang et al. (Eds.): ICIC 2009, LNAI 5755, pp. 11–20, 2009. © Springer-Verlag Berlin Heidelberg 2009

A SOM Based Stereo Pair Matching Algorithm for 3-D Particle Tracking Velocimetry

Kazuo Ohmi1, Basanta Joshi2, and Sanjeeb Prasad Panday2

1 Dept. of Information Systems Engineering, Osaka Sangyo University, Daito-shi, Osaka 574-8530, Japan

2 Dept. of Information Systems Engineering, Graduate Student of Faculty of Engineering, Osaka Sangyo University, Daito-shi, Osaka 574-8530, Japan

[email protected]

Abstract. A self-organizing map (SOM) based algorithm has been developed for the stereoscopic particle pairing process of 3-D particle tracking velocimetry (3-D PTV). In this process, every particle image in the left-camera frame should be paired with the most probably correct partner in the right-camera frame, or vice versa, so that the exact 3-D coordinates can be evaluated. In the present work, the performance of the stereoscopic particle pairing is improved by applying the proposed SOM optimization technique in comparison with a conventional epipolar line analysis. The algorithm is tested with the 3-D PIV Standard Images of the Visualization Society of Japan (VSJ), and the matching results show that the new algorithm is capable of increasing the recovery rate of correct particle pairs by 9 to 23 % relative to the conventional epipolar-line nearest-neighbor method.

Keywords: Particle pairing problem; Particle tracking velocimetry; PIV; PTV; Stereoscopic PIV; Neural network; Self-organizing map; SOM.

1 Introduction

The basic algorithm of 3-D particle tracking velocimetry is composed of two successive steps of particle pairing [1] (or identification of the same particles), as depicted in Fig. 1. The first one is the spatio-differential (parallactic) particle pairing, in which the particles viewed by two (or more) stereoscopic cameras with different viewing angles have to be correctly paired at every synchronized time stage. This is an indispensable procedure for computing the 3-D coordinates of individual particles. The second one is the time-differential particle pairing, in which the individual 3-D particles have to be correctly paired between two time stages separated by a short interval. Of these two steps, the second is relatively rich in methodology because many of the known 2-D time-differential tracking algorithms can be extended into 3-D tracking without any additional complexity. However, the first step is difficult when the 3-D particle coordinates must be calculated with accuracy and with a high recovery ratio. When the parallactic angle between the two camera axes is small (say 10° or less), some of the currently used temporal particle pairing algorithms can be applied to the spatial particle pairing, but in this


Fig. 1. Typical flow chart of 3-D particle tracking velocimetry

case, the resulting particle coordinates are much less resolved in the depth direction than in the two planar directions. For better depth resolution, the parallactic angle has to be larger, and then the most commonly used method for the particle pairing is the epipolar line nearest-neighbor analysis [2]. But with this method the recovery ratio of correct particle pairs is relatively low, especially with densely seeded particle images.
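The epipolar-line nearest-neighbor pairing mentioned above can be sketched as follows. This is a minimal illustration rather than the implementation of [2]; the line representation a·X + b·Y + c = 0 and the pixel tolerance `max_dist` are assumptions of this sketch.

```python
import numpy as np

def point_line_distance(pt, line):
    """Normal distance from pt = (X, Y) to the line a*X + b*Y + c = 0."""
    a, b, c = line
    return abs(a * pt[0] + b * pt[1] + c) / np.hypot(a, b)

def epipolar_nearest_neighbor(epilines, right_pts, max_dist=2.0):
    """For each left-frame particle, represented by its epipolar line in the
    right frame, pick the right-frame particle with the smallest normal
    distance to that line; reject candidates farther than max_dist pixels."""
    pairs = {}
    for i, line in enumerate(epilines):
        dists = [point_line_distance(p, line) for p in right_pts]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            pairs[i] = j
    return pairs
```

Because each left particle is matched independently, two left particles may claim the same right particle, which is one reason the recovery ratio of this baseline degrades with seeding density.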

One of the present authors and his coworkers have already tried to improve this low recovery ratio by using a genetic algorithm in the epipolar line nearest-neighbor analysis [3][4]. The basic concept of this method is to find a condition in which the sum of the normal distances between each epipolar line and its pairing particle image is minimized. The accuracy of particle pairing was indeed increased to some extent, but even with a new-concept high-speed genetic algorithm, the computation time for particle pairing increased exponentially with the number of particles. For this reason, realistic computation time is only met with fewer than 1000 particles. Another problem of the genetic algorithm is that the calculation is not reproducible, because the results of the genetic operations depend on random numbers. In most cases, this drawback can be cancelled by setting up a strict terminal condition for the iterative computation. But even then the pairing results are not always reproducible if the number of pairing particles is much increased. So, in the present work, a self-organizing map (SOM) neural network is applied to the epipolar line proximity analysis for more accuracy in the stereoscopic particle pairing with a large parallax.

2 SOM Neural Network

Neural networks are among the most effective and attractive algorithms for the particle matching problem of particle tracking velocimetry (PTV) because in many cases they work without preliminary knowledge of the flow field to be examined. Among others, the self-organizing map (SOM) model has turned out to be a particularly useful tool for this purpose. The principles of the SOM neural network were originally proposed by Kohonen [5], basically aimed at the clustered arrangement of similar items. There was room for application of this principle to the clustered arrangement of correct particle

[Fig. 1 labels: left/right camera images at t = t0 and t = t1 → spatial particle matching → computation of 3-D particle positions at t0 and t1 → temporal particle matching → displacement vector map]


pairs between two time-differential or spatio-differential particle images. In this regard, the SOM neural network was used for 2-D time-differential particle tracking velocimetry by Labonté [6] and then improved by one of the authors [7] with successful results. In the present work, the SOM neural network is applied to spatio-differential particle images.

The SOM neural network consists of an input layer composed of a number of input vectors (multi-dimensional input signals) and an output (competitive) layer composed of network neurons, as shown in Fig. 2. All the neurons in the output layer are subject to learning from the input layer, and the connection between inputs and neurons is represented by weight vectors defined for each of their combinations. According to the SOM implementation by Labonté [6] for particle pairing of 2-D time-differential particle images, the input vectors are given by the 2-D coordinates of the particle centroids in one of the two image frames, and the network neurons are the relocation of the particle centroids of the opposite image frame. In response to every input vector signal, the network neurons are subjected to Kohonen learning (in reality, a displacement of neurons). As a result of iteration of this Kohonen learning, more probable particle pairs come into closer proximity while less probable ones are gradually kept away.

Fig. 2. SOM neural network architecture by Kohonen [5]

In spatio-differential particle matching, the SOM learning is based on the epipolar line normal distance projected on one of the two stereoscopic camera screens. Theoretically, on either of these two camera screens, the particles viewed directly by one camera should be located exactly on their respective epipolar lines derived from the particles viewed by the other camera. But in reality, the algebraic equation of the epipolar line is determined through a camera calibration process, which is not free from experimental errors [1]. As a result, the particles viewed directly by each camera are not necessarily located on their respective epipolar lines but are slightly separated from them, as shown in Fig. 3. So in order to match the particles on the two stereoscopic camera screens, minimization of the normal distance between direct-view particles and epipolar lines is usually used as the particle match condition. But this condition is not always correctly applied as the number of particles in the image increases. So

[Fig. 2 labels: input layer x1, x2, …, xn; weight vectors mi1, mi2, …, min; output (competitive) layer of network neurons]


Fig. 3. Epipolar line normal distance for stereoscopic particle matching

Fig. 4. Schematic illustration of one single step of Kohonen learning

the most probable coupling of the particles and their respective epipolar lines must be found by using some optimization method. One of the best methods for this would be the use of the SOM neural network.

In the present case of spatio-differential particle images, the network neurons are represented by the centroid of every individual particle in one of the two image frames. But the input signal to this network is not given by the particle centroids of the opposite image frame but by the epipolar lines derived from those particle centroids. The Kohonen learning is then realized by the displacement of neurons in the normal direction to the epipolar line presented as an input. This learning is iterated until the best possible combinations of a single particle and a single epipolar line are established, as shown in Fig. 4.

3 Particle Pairing and SOM Implementation

The mathematical form of the epipolar line in a stereoscopic camera arrangement can be formulated from the following perspective transform equations [8]:

Right-camera screenLeft-camera screen

Target particle (Input signal)

Lea

rnin

g ar

ea w

idth

Epi

pola

r li

ne

(inp

ut p

rese

ntat

ion)

Neuron displacement corresponding to Kohonen learning

Winner neuron

p (x, y, z)

2-D Epipolar lines on projected screens

Right-camera screen Left-camera screen

PR (X2,Y2) PL (X1,Y1)

2-D normal distance

2-D normal distance

Particle in 3-D space


\[
\begin{cases}
c_{11}x + c_{12}y + c_{13}z + c_{14} - c_{31}xX_1 - c_{32}yX_1 - c_{33}zX_1 = X_1\\
c_{21}x + c_{22}y + c_{23}z + c_{24} - c_{31}xY_1 - c_{32}yY_1 - c_{33}zY_1 = Y_1\\
d_{11}x + d_{12}y + d_{13}z + d_{14} - d_{31}xX_2 - d_{32}yX_2 - d_{33}zX_2 = X_2\\
d_{21}x + d_{22}y + d_{23}z + d_{24} - d_{31}xY_2 - d_{32}yY_2 - d_{33}zY_2 = Y_2
\end{cases}
\tag{1}
\]

where x, y and z are the physical-space 3-D coordinates of a particle centroid, X1 and Y1 the 2-D particle coordinates on the left-camera projection screen, and X2 and Y2 those on the right-camera projection screen. The two sets of coefficients c_{ij} and d_{ij} are the camera parameters of the left and right cameras, which are determined by means of calibration using a given number of calibration target points viewed by the same two cameras. In these equations, if either set of (X1, Y1) or (X2, Y2) is given, the other set of X and Y comes into a linear relation, providing an arithmetic equation of the relevant epipolar line. Once the camera parameters are known, for any particle image in one of the two camera frames the corresponding epipolar line in the other camera frame is mathematically defined, and then the normal distance between the epipolar line and any candidate pairing particle is calculated by simple geometric algebra.
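As a concrete sketch of this construction, the two left-camera equations of (1) fix (x, y) for any chosen depth z, so the 3-D ray of a left-frame particle can be sampled at two depths and projected with the right camera; the epipolar line is the line through the two projections. The 3x4 parameter layout with last row (c31, c32, c33, 1), the function names, and the sampling depths are conventions of this illustration, not of the paper.

```python
import numpy as np

def project(P, p):
    """Project a 3-D point p with an 11-parameter camera (Eq. 1 convention):
    P is 3x4 with last row (p31, p32, p33, 1)."""
    h = np.append(p, 1.0)
    return (P[:2] @ h) / (P[2] @ h)

def epipolar_line(c, d, X1, Y1, z0=-1.0, z1=1.0):
    """Epipolar line (a, b, cc), with a*X2 + b*Y2 + cc = 0, in the right frame
    for the left-frame point (X1, Y1); c and d are the left and right camera
    parameter arrays. The two left-camera equations of (1) are solved for
    (x, y) at depths z0 and z1, and both 3-D samples of the ray are projected
    with the right camera."""
    uv = []
    for z in (z0, z1):
        A = np.array([[c[0, 0] - c[2, 0] * X1, c[0, 1] - c[2, 1] * X1],
                      [c[1, 0] - c[2, 0] * Y1, c[1, 1] - c[2, 1] * Y1]])
        rhs = np.array([X1 - c[0, 3] - (c[0, 2] - c[2, 2] * X1) * z,
                        Y1 - c[1, 3] - (c[1, 2] - c[2, 2] * Y1) * z])
        x, y = np.linalg.solve(A, rhs)
        uv.append(project(d, np.array([x, y, z])))
    (u1, v1), (u2, v2) = uv
    a, b = v2 - v1, u1 - u2
    return a, b, -(a * u1 + b * v1)
```

A sanity check for any such implementation is that the right-camera projection of a true 3-D point lies on the epipolar line computed from its left-camera projection.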

(a) Before learning (b) After learning

Fig. 5. SOM learning for stereoscopic particle pairing

The conventional stereoscopic particle pairing uses this normal distance as a definite index. For more assurance, the sum of the two normal distances for the same particle pair derived from the different camera image frames is used, but no more than that. By contrast, the SOM particle pairing is a kind of topological optimization process, in which more probable pairs of a particle centroid and an epipolar line come close together and others move away. The key factor of the Kohonen learning is assigning the neuron (particle centroid) with a minimum of this normal distance as the winner, but the learning itself applies to all the neighboring neurons. This learning goes on with different input vector signals so that all the opposite-frame particle centroids are presented as epipolar lines, as shown in Fig. 5(a). This learning cycle is iterated until fixed and unique combinations of a particle and an epipolar line are established, as shown in Fig. 5(b). After the establishment of the one-to-one relationship between the particle centroids, the 3-D particle coordinates are computed by solving (1).
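The final reconstruction step, solving (1) for a matched pair, amounts to a linear least-squares problem in (x, y, z): each camera contributes two equations that are linear in the unknowns. A sketch under the same assumed 3x4 parameter layout (last row (c31, c32, c33, 1)):

```python
import numpy as np

def triangulate(c, d, X1, Y1, X2, Y2):
    """Least-squares 3-D particle position from a matched stereo pair.
    Each camera contributes two linear equations in (x, y, z), obtained by
    collecting the X- and Y-dependent terms of Eq. (1) on one side."""
    A, b = [], []
    for P, (U, V) in ((c, (X1, Y1)), (d, (X2, Y2))):
        for row, coord in ((P[0], U), (P[1], V)):
            A.append(row[:3] - coord * P[2, :3])   # (p1k - p3k*coord) terms
            b.append(coord - row[3])               # coord - p14 (or p24)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol
```

With two cameras the system is overdetermined (4 equations, 3 unknowns), so the least-squares residual also gives a useful consistency check on the pairing.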


The SOM neural network system is implemented by considering two similar networks covering the particles of the two camera frames. Let x_i (i = 1,…,N) and y_j (j = 1,…,M) be the 2-D coordinate vectors of the particles in the left-camera and right-camera frames, respectively. The left network has N neurons situated at x_i and the right one has M neurons at y_j. Each neuron has two weight vectors, corresponding to the two components of the coordinate vectors x_i and y_j, denoted by v_i for the left network and w_j for the right one. These weight vectors are assigned the following initial values:

\[
v_i = x_i \;\; (i = 1,\dots,N), \qquad w_j = y_j \;\; (j = 1,\dots,M)
\tag{2}
\]

The weight vectors are updated so that the neurons of one network work as stimuli for the other network. More concretely, the stimulus vector v_i from the left network is presented to the right network in the form of the corresponding epipolar line. Then a winner neuron is selected from the latter network as the one with the weight vector closest to the epipolar line derived from v_i. Let c be the index of this winner neuron and w_c its weight vector, and let u_i be the intersection of the epipolar line and the normal line dropped from w_c; then each neuron of the right network is subjected to the following displacement of weight vectors:

\[
\Delta w_j = \alpha_j(c)\,(u_i - w_j) \quad (j = 1,\dots,M), \qquad
\alpha_j(c) =
\begin{cases}
\alpha & \text{if neuron } j \in S_c(r)\\
0 & \text{otherwise}
\end{cases}
\tag{3}
\]

where α_j is a scalar variable between 0 and 1 and S_c(r) is the closed band region of half width r centered on the epipolar line. The increment of the weight vector in (3) is given an important modification from the original Kohonen (1982) network model, in which the right-hand term is expressed as (u_i − w_c) instead of (u_i − w_j). Each time the input vector, i.e. the epipolar line derived from v_i, is presented to the right network, the weight vectors of the latter network are updated according to:

\[
w_j \leftarrow w_j + \sum_{i=1}^{N} \Delta w_j(c_i) \quad (j = 1,\dots,M)
\tag{4}
\]

In the next step, conversely, the stimulus vector w_j from the right network is presented to the left network in the form of the corresponding epipolar line. A winner neuron is selected as the one closest to the epipolar line. Each time the weight vector w_j is presented to the left network, the weight vectors of the latter network are updated according to:

\[
v_i \leftarrow v_i + \sum_{j=1}^{M} \Delta v_i(c_j) \quad (i = 1,\dots,N)
\tag{5}
\]

Each time the weight vectors of either network are updated, the width r of the band region, within which the weight vectors of neurons are changed, is reduced by r ← βr (0 < β < 1). At the same time, the amplitude α of the weight translation is altered by α ← α/β.
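The alternating learning of Eqs. (2)–(5), together with the r and α schedules, can be sketched as follows. This is one possible reading of the scheme, not the authors' code: the epipolar-line callbacks, the default parameter values, and the cap of α at 1 (a safeguard against overshoot, not stated in the paper) are assumptions of this sketch.

```python
import numpy as np

def som_pair(left, right, epiline_lr, epiline_rl,
             r0=5.0, rf=0.01, alpha0=0.05, beta=0.9):
    """Alternating SOM learning of Eqs. (2)-(5).
    left, right   : (N, 2) and (M, 2) float arrays of particle centroids
    epiline_lr(p) : (a, b, c) epipolar line in the right frame for a left point
    epiline_rl(p) : the same for the opposite direction
    Returns the trained weight arrays (v, w)."""
    v, w = left.copy(), right.copy()          # Eq. (2): weights = centroids

    def feet_and_dists(line, pts):
        # Feet u of the normals dropped from pts onto the line, and |distances|
        a, b, c = line
        n = np.hypot(a, b)
        d = (pts @ np.array([a, b]) + c) / n  # signed normal distances
        return pts - np.outer(d, [a, b]) / n, np.abs(d)

    r, alpha = r0, alpha0
    while r > rf:
        for p in v:                            # left -> right stimuli
            feet, dist = feet_and_dists(epiline_lr(p), w)
            band = dist <= r                   # band region S_c(r) of Eq. (3)
            w[band] += alpha * (feet[band] - w[band])   # Eqs. (3)-(4)
        for p in w:                            # right -> left stimuli, Eq. (5)
            feet, dist = feet_and_dists(epiline_rl(p), v)
            band = dist <= r
            v[band] += alpha * (feet[band] - v[band])
        r *= beta                              # r <- beta * r
        alpha = min(alpha / beta, 1.0)         # alpha <- alpha / beta (capped)
    return v, w
```

In a toy geometry where every epipolar line is horizontal (Y equal to the source particle's Y), each neuron is pulled onto the line of its true partner as the band shrinks, so matched neurons of the two networks converge to a common position.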

These alternate steps are iterated until the width r of the band reaches a given threshold value r_f, which should be small enough to include only the winner neuron. Since the resultant correspondence between a left network neuron and its matching right network neuron is not always reciprocally identical, a final nearest-neighbor check is done with a neighborhood criterion of small distance ε: out of the probable neurons in the neighborhood, a pair is accepted only when the two neuron weight vectors agree within the tolerance distance ε. The solution time can be shortened by taking ε larger.
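One way to realize this final reciprocal check is with a matrix of pairing residuals (for instance the epipolar normal distances after learning); the matrix layout and the choice of residual are assumptions of this sketch:

```python
import numpy as np

def reciprocal_pairs(dist, eps=0.01):
    """Keep (i, j) only when j is the best match of left neuron i AND i is the
    best match of right neuron j, with a residual below the tolerance eps.
    dist is an (N, M) matrix of pairing residuals."""
    best_j = dist.argmin(axis=1)   # best right partner for each left neuron
    best_i = dist.argmin(axis=0)   # best left partner for each right neuron
    return [(i, j) for i, j in enumerate(best_j)
            if best_i[j] == i and dist[i, j] < eps]
```

The mutual-nearest condition enforces a one-to-one pairing, and raising eps admits more (but less certain) pairs, which is why a larger ε shortens the solution time.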

4 Results and Discussion

The present new particle pairing algorithm is tested by using synthetic particle images, namely the PIV Standard Images by Okamoto et al. [9], which are now available at the web site of the Visualization Society of Japan (http://vsj.or.jp/piv/). Out of the 6 sets of 3-D particle images offered at this site, 4 image sets (Series #351, #352, #371 and #377) are selected for the present work. Table 1 gives the technical details of these Standard Images. All of them are synthetic particle images showing different portions of a 3-D transient flow induced by an impinging jet. The parallax angle between the view axes of the two cameras also differs from series to series. In order to simulate the particle refraction effect of a real experimental environment, cylindrical volume illumination and a water refractive index of 1.33 are taken into account.

Fig. 6 shows a sample pair of the 3-D PIV Standard Images tested in the present work, and Fig. 7 is the corresponding pair of marker particle images used for the

Table 1. Summary of the tested 3-D PIV Standard Images

Series # / Frame #                      352/0000    351/0000    371/0000    377/0000
Number of existing particles            372         2092        366         939
Minimum / mean particle diameter        1 / 5 pix   1 / 5 pix   1 / 5 pix   1 / 5 pix
Standard deviation of diameter          2 pix       2 pix       2 pix       2 pix
Volume of visualized flow (in situ)     2 cm3       2 cm3       1 cm3       0.5 cm3
Maximum flow rate (in situ)             12 cm/sec   12 cm/sec   12 cm/sec   12 cm/sec
Refraction index                        1.33        1.33        1.33        1.33
Number of calibr. marker particles      27          27          125         27
Left camera
  Distance to origin center             20 cm       20 cm       20 cm       11.5 cm
  Inclination from x-axis               -30 deg     -30 deg     -30 deg     -29.9 deg
  Inclination from y-axis               0 deg       0 deg       -10 deg     -45 deg
  Inclination from z-axis               0 deg       0 deg       0 deg       16.1 deg
Right camera
  Distance to origin center             20 cm       20 cm       20 cm       11.5 cm
  Inclination from x-axis               30 deg      30 deg      30 deg      0 deg
  Inclination from y-axis               0 deg       0 deg       -10 deg     -90 deg
  Inclination from z-axis               0 deg       0 deg       0 deg       30 deg


(a) Left frame (frame# 0000) (b) Right frame (frame# 2000)

Fig. 6. 3-D PIV Standard Images – Series #351

(a) Left frame (frame# 0999) (b) Right frame (frame# 2999)

Fig. 7. 3-D calibration marker images – Series #351

stereoscopic camera calibration. In each calibration image there are 27 calibration points in a cubic cell arrangement with known absolute coordinates in real 3-D space, and they are used for computing the 11 camera parameters of each camera, which become the input to (1).
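The calibration just described amounts to a linear least-squares problem: each marker point contributes two equations of the form (1), linear in the 11 unknown camera parameters. A minimal sketch, assuming at least 6 non-coplanar markers and the 3x4 return layout with last row (c31, c32, c33, 1):

```python
import numpy as np

def calibrate(points3d, points2d):
    """Solve the 11 camera parameters of Eq. (1) by linear least squares.
    points3d: (K, 3) known marker coordinates; points2d: (K, 2) their
    measured projections; K >= 6 non-coplanar markers are needed."""
    A, b = [], []
    for (x, y, z), (X, Y) in zip(points3d, points2d):
        # Row for the X equation: unknowns (c11..c14, c21..c24, c31..c33)
        A.append([x, y, z, 1, 0, 0, 0, 0, -x * X, -y * X, -z * X]); b.append(X)
        # Row for the Y equation
        A.append([0, 0, 0, 0, x, y, z, 1, -x * Y, -y * Y, -z * Y]); b.append(Y)
    q, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return np.vstack([q[0:4], q[4:8], np.append(q[8:11], 1.0)])
```

With more markers than the minimum (27 or 125 in the tested image sets), the least-squares fit averages out the experimental errors mentioned above.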

The particle pairing results from these 3-D PIV Standard Images are shown in Table 2, in which the performance of the two algorithms (epipolar-line nearest-neighbor pairing with and without SOM) is compared in terms of the "correct pair rate". The SOM computation parameters for these four series of PIV images are kept within: r (initial radius) = 5, r_f (final radius) = 0.01 to 0.001, α (initial translation rate) = 0.05 to 0.005, β (attenuation rate) = 0.8 to 0.9 and ε (max distance for final pairing) = 0.01. It can be seen from these results that the performance of the particle pairing is improved with the introduction of the SOM neural network strategy, and this improvement is more marked when the number of existing particles increases to 1000 or more. This is indicative of the practical-use effectiveness of the SOM neural


Table 2. Particle pairing results with or without SOM neural network

                      Particle pairing with SOM            Particle pairing without SOM
Series # / Frame #    Existing   Correct   Correct        Existing   Correct   Correct
                      pairs      pairs     pair rate      pairs      pairs     pair rate
#352/000              283        263       92.93 %        283        239       84.45 %
#351/000              1546       1129      73.03 %        1546       986       63.78 %
#371/000              157        145       92.36 %        157        134       85.35 %
#377/000              352        286       81.25 %        352        233       66.19 %

network particle pairing for 3-D particle tracking velocimetry. The computation time for Images #352, #371 and #377 is about 1 s each, and that for Image #351 is about 20 s. The computational cost variation of the proposed algorithm with respect to the previous algorithms is insignificant.

Further, it was observed that the correct pairing rate is even better than that of the genetic algorithm particle pairing proposed earlier by one of the authors and his coworkers [3][4]. Another merit of the SOM neural network particle pairing is that the performance is considerably stable regardless of the optical conditions of particle imaging. With or without incidence, yaw and roll angles of the two stereoscopic cameras, the correct pair rate keeps a constantly high level. This is certainly important when stereoscopic PTV has to be employed in industrial applications, where the positions of the cameras and of the laser light units are more or less restricted.

5 Conclusions

A SOM neural network based algorithm was successfully implemented for the stereoscopic particle pairing step of 3-D particle tracking velocimetry and tested with the PIV Standard Image data. With this scheme, the overall performance of particle pairing is improved, and the correct pairing rate for two sets of the synthetic 3-D particle images goes up to 93%. The increase factor is not dramatic, but it should be noted that the accuracy of the subsequent time-series particle tracking is fairly sensitive to that of the parallactic particle pairing: a slight improvement of the parallactic particle pairing may cause a considerable increase in correct time-series particle tracking in 3-D particle tracking velocimetry. Further efforts should be made to apply the methodology to more densely seeded particle images with larger numbers of particles. Moreover, the particle matching process can be made more accurate in the presence of loss-of-pair particles.


References

1. Maas, H.G., Gruen, A., Papantoniou, D.: Particle tracking velocimetry in three-dimensional flows. Experiments in Fluids 15, 133–146 (1993)

2. Nishino, K., Kasagi, N., Hirata, M.: Three-dimensional particle tracking velocimetry based on automated digital image processing. Trans. ASME, J. Fluids Eng. 111, 384–391 (1989)

3. Ohmi, K., Yoshida, N.: 3-D Particle tracking velocimetry using a genetic algorithm. In: Proc. 10th Int. Symposium Flow Visualization, Kyoto, Japan, F0323 (2002)

4. Ohmi, K.: 3-D particle tracking velocimetry with an improved genetic algorithm. In: Proc. 7th Symposium on Fluid Control, Measurement and Visualization, Sorrento, Italy (2003)

5. Kohonen, T.: A simple paradigm for the self-organized formation of structured feature maps. In: Competition and cooperation in neural nets. Lecture notes in biomathematics, vol. 45. Springer, Heidelberg (1982)

6. Labonté, G.: A new neural network for particle tracking velocimetry. Experiments in Fluids 26, 340–346 (1999)

7. Ohmi, K.: Neural network PIV using a self-organizing maps method. In: Proc. 4th Pacific Symp. Flow Visualization and Image Processing, Chamonix, France, F-4006 (2003)

8. Hall, E.L., Tio, J.B.K., McPherson, C.A., Sadjadi, F.A.: Measuring Curved Surfaces for Robot Vision. Computer 15(12), 42–54 (1982)

9. Okamoto, K., Nishio, S., Kobayashi, T., Saga, T., Takehara, K.: Evaluation of the 3D-PIV Standard Images (PIV-STD Project). J. of Visualization 3(2), 115–124 (2000)