Performance Analysis of a Modified Moving Shadow Elimination Method Developed for Indoor Scene Activity Tracking

Bhargav Kumar Mitra*, Muhammad Kamran Fiaz, Ioannis Kypraios, Philip Birch, Rupert Young, Chris Chatwin

Laser and Photonic Systems Research Group, Department of Engineering and Design, University of Sussex, Falmer, Brighton BN1 9QT

ABSTRACT

Moving shadow detection is an important step in automated robust surveillance systems in which a dynamic object is to be segmented and tracked. Rejection of the shadow region significantly reduces the erroneous tracking of non-target objects within the scene. A method to eliminate such shadows in indoor video sequences has been developed by the authors. The objective has been met through the use of a pixel-wise shadow search process that utilizes a computational model in the RGB colour space to demarcate the moving shadow regions from the background scene and the foreground objects. However, it has been observed that the robustness and efficiency of the method can be significantly enhanced through the deployment of a binary-mask based shadow search process. This, in turn, calls for the use of a prior foreground object segmentation technique. The authors have also automated a standard foreground object segmentation technique through the deployment of some popular statistical outlier-detection based strategies. The paper analyses the performance, i.e. the effectiveness as a shadow detector, the discrimination potential and the processing time, of the modified moving shadow elimination method on the basis of some standard evaluation metrics.

Keywords: surveillance systems, moving shadows, computational colour model, outlier detection strategies, performance evaluation metrics

1. INTRODUCTION

Formation of a shadow takes place when light from a source is intercepted by an opaque object in such a way that the side of the body not facing the source is in darkness [1]. The projection of this shaded region onto a surface behind the object is known as a shadow region [1].

In general, such shadows can be categorized as static shadows or moving shadows depending upon whether the causal object is static or moving [2], [3], [1]. A robust foreground object recognition process of a surveillance system is not usually jeopardized by the presence of static shadows, which form a part of the background; elimination of static shadows has thus never been judged a crucial preprocessing step [1], [2], [3]. On the other hand, shadows cast by dynamic objects, or by objects suddenly brought into a background scene, are often misclassified as actual foreground objects, leading to poor segmentation and tracking [1], [3]. Hence, moving shadow detection is treated as an important step in an automated surveillance system in which a dynamic object is to be segmented and tracked.

The authors of this paper have already conceived a computational model in the RGB colour space [2] that marks/eliminates the shaded region through a pixel-wise search process. However, it has been observed that the model is only capable of marking the strong (umbra) portion of the shadow if there is a slightly strict constraint on the false detection rate [1], [2]. If the soft portion of the shadow has to be included through the use of relaxed thresholds [1], without losing control over the false detection rate, then a binary-mask based shadow search method has to be deployed. This, in turn, calls for a prior foreground object segmentation process, and then generation of the binary mask through proper selection of thresholds. The authors have used a standard background subtraction process to segment the foreground object, and have opted for an automatic selection of thresholds to generate the binary mask from the difference image through the use of some popular outlier detection strategies [4-8].

* [email protected], Tel No: +44 1273 872642, Fax: +44 1273 678399


The paper is organized as follows: Section 2 describes our conceived computational model; Section 3 broaches the topic of outliers and some of the popular outlier detection strategies deployed by us; Section 4 describes the methodologies we have used for the pixel-wise shadow search process and for the binary-mask based one; Section 5 defines the performance metrics we have used to analyse the two methods; Section 6 compares the results obtained with the two methods and draws conclusions based on them.

2. THE COMPUTATIONAL COLOUR MODEL

The computational colour model is analogous to that developed by the authors of [9] who, like us, have exploited the fact that a shadow can be considered as ‘a semi-transparent region in the image, which retains a representation of the underlying surface pattern, texture or colour value’ [9], [10]. The model estimates the brightness and chromaticity distortion factor values separately for each pixel of the current frame with respect to the corresponding pixel of the expected background frame. As illustrated in Fig. 1, E_i is the expected colour vector of the ith pixel in the RGB colour space, obtained after averaging N background frames, and I_i is the corresponding colour vector of the ith pixel obtained from the current frame. The brightness distortion estimate can be obtained by finding the difference between the magnitude of E_i and the magnitude of the projection of I_i on E_i.

Fig. 1. The computational model in the RGB colour space; E_i is the expected colour vector, I_i is the current colour vector, OP_i is the projection of I_i on E_i, and θ_i is the angle between I_i and E_i.

On the other hand, an estimate of the chromaticity distortion factor for the ith pixel can be obtained by determining the angle between E_i and I_i.

Let r_i1, r_i2, ..., r_iN be the values of the red channel (unit vector: r̂) of the ith pixel for the N background frames respectively. Similarly, let g_i1, g_i2, ..., g_iN be the N different values of the green channel (unit vector: ĝ) and b_i1, b_i2, ..., b_iN those of the blue channel (unit vector: b̂) of the ith pixel for the N background frames. Then, the expected colour vector for the ith pixel is given as:

$$E_i = \left(\frac{1}{N}\sum_{j=1}^{N} r_{ij}\right)\hat{r} + \left(\frac{1}{N}\sum_{j=1}^{N} g_{ij}\right)\hat{g} + \left(\frac{1}{N}\sum_{j=1}^{N} b_{ij}\right)\hat{b} \qquad (1)$$

The current colour vector of the ith pixel, obtained from the current frame, is given as:

$$I_i = r_c\,\hat{r} + g_c\,\hat{g} + b_c\,\hat{b} \qquad (2)$$

The projection of I_i on E_i is given as:

$$OP_i = I_i \bullet \hat{e}_i \qquad (3)$$

where in equation (3), ê_i is the unit vector in the direction of E_i and • is the dot product.

Therefore, the difference, ρ_i, between |E_i| and OP_i, ρ_i = [|E_i| − OP_i], can be used as an estimate of the brightness distortion factor for the ith pixel of the current frame.

The chromaticity distortion factor for the ith pixel is expressed as the angle, θ_i, between E_i and I_i, given as:

$$\theta_i = \arccos\!\left(\frac{E_i \bullet I_i}{\left|E_i\right|\,\left|I_i\right|}\right) \qquad (4)$$
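To make the model concrete, the following Python/NumPy sketch (our own illustrative code, not the implementation used for the experiments reported later) computes the expected background of equation (1) and the per-pixel brightness and chromaticity distortion factors of equations (3) and (4) for a whole frame at once.

```python
import numpy as np

def expected_background(frames):
    """Average N background frames (N x H x W x 3 RGB array) to obtain the
    expected colour vector E_i of every pixel, as in equation (1)."""
    return np.mean(np.asarray(frames, dtype=np.float64), axis=0)

def distortion_factors(current, expected, eps=1e-9):
    """Return the brightness distortion rho_i = |E_i| - OP_i and the
    chromaticity distortion theta_i (equations (3) and (4)) for every pixel
    of `current` (H x W x 3) with respect to `expected` (H x W x 3)."""
    I = current.astype(np.float64)
    E = expected.astype(np.float64)
    E_norm = np.linalg.norm(E, axis=2)             # |E_i|
    I_norm = np.linalg.norm(I, axis=2)             # |I_i|
    dot = np.sum(I * E, axis=2)                    # I_i . E_i
    OP = dot / (E_norm + eps)                      # projection of I_i on e_i, eq. (3)
    rho = E_norm - OP                              # brightness distortion (signed)
    cos_theta = np.clip(dot / (E_norm * I_norm + eps), -1.0, 1.0)
    theta = np.arccos(cos_theta)                   # chromaticity distortion, eq. (4)
    return rho, theta
```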

3. OUTLIER DETECTION STRATEGIES

Outliers are data points, ξ_k, in a data set, Ψ, that do not comply with our expectations based on the majority of the data [4]. Various detection strategies have been devised to segregate such non-compliant data points for data pre-processing or filtering. Such strategies include visual inspection of the data values of the set, fitting models of the desired form to the data and then examining the residuals, using deletion diagnostic approaches, etc. [4]. However, efforts made towards the detection of outliers using the above-mentioned strategies may prove futile for the reasons described in [4] and, even where feasible, they are not applicable to our objective: the data set we are dealing with is typically large, and the overall process has severe time constraints. In this context, note that the visual inspection method of outlier detection cannot even be considered, as the overall process of moving region detection and classification has to be inherently automatic.

Fortunately, there are some other popular approaches for outlier detection. These well-known and frequently used strategies depend on two estimates: (1) an estimate of a nominal reference value for the data set, and (2) a scatter estimate of the data. Based on these estimators, outliers can be detected using the following criterion:

$$\left|\xi_k - \xi_{kr}\right| > \alpha\gamma \;\Rightarrow\; \xi_k = \xi_{ko}, \qquad \forall\, \xi_k \in \Psi \qquad (5)$$

where in (5), ξ_kr is the nominal reference value of the data set, α is the threshold parameter, γ the scatter estimate, and ξ_ko an outlier.
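In code, criterion (5) reduces to a single comparison once the two estimates are available; the short NumPy sketch below, with function and argument names of our own choosing, returns a Boolean flag for every data point.

```python
import numpy as np

def flag_outliers(data, reference, scatter, alpha):
    """Apply criterion (5): a point xi_k is flagged as an outlier when
    |xi_k - xi_kr| > alpha * gamma, for the supplied nominal reference
    value and scatter estimate."""
    data = np.asarray(data, dtype=np.float64)
    return np.abs(data - reference) > alpha * scatter
```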

3.1 The ‘3σ edit rule’

The ‘3σ edit rule’ considers the mean of the data values of the data set as the nominal reference value, and the corresponding standard deviation as an estimate of the scatter:

$$\xi_{kr} = \xi_{mean} = \frac{1}{N}\sum_{k=1}^{N}\xi_k \qquad (6)$$

where in (6), N is the total number of observations in the data set.

$$\gamma = \left[\frac{1}{N-1}\sum_{k=1}^{N}\left(\xi_k - \xi_{mean}\right)^{2}\right]^{1/2} \qquad (7)$$

Note that if the distribution is assumed to be approximately normal, then the probability of getting a data value that deviates from the mean by more than three times the standard deviation of the data (α = 3) is around 0.3% [4]. The technique, however, suffers from the fact that both the mean and the standard deviation of the data are highly sensitive to outliers [4]. Moreover, the strategy depends heavily on the assumption that the underlying distribution is approximately Gaussian [1].
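A minimal sketch of the ‘3σ edit rule’ of equations (6) and (7), assuming the data are supplied as a flat NumPy array and using α = 3 as in the discussion above:

```python
import numpy as np

def three_sigma_outliers(data, alpha=3.0):
    """'3-sigma edit rule': reference = sample mean (eq. 6), scatter =
    sample standard deviation (eq. 7); returns a Boolean outlier mask."""
    data = np.asarray(data, dtype=np.float64)
    mean = data.mean()            # nominal reference value, eq. (6)
    std = data.std(ddof=1)        # scatter estimate, eq. (7)
    return np.abs(data - mean) > alpha * std
```

Applied to the flattened difference image of Section 4, the resulting Boolean array (reshaped back to the frame size) would serve as one candidate binary mask.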

3.2 Strategy based on Hampel identifier

In this strategy, the outlier-sensitive mean and standard deviation estimates are replaced by the outlier-resistant median (breakdown point of 50%) and the median absolute deviation from the median (MAD) scale estimate, respectively. The median of a data sequence is obtained as follows [11]:

1. The observations are ranked according to their magnitude.

2. If N is odd, the median is taken as the value of the [(N+1)/2]th ranked observation; otherwise, if N is even, the median is taken as the mean of the (N/2)th and [(N/2)+1]th ranked observations.

The MAD scale estimate is defined as:

$$\gamma = \mathrm{MAD}_{se} = 1.4826 \times \operatorname{median}\left\{\left|\xi_k - \xi_{median}\right|\right\} \qquad (8)$$

where in (8), ‘the factor 1.4826 was chosen so that the expected value of γ is equal to the standard deviation for normally distributed data’ [4].

The strategy, although quite often very effective in practice [4], is stymied by the fact that if more than 50% of the observations are of the same value, then the scale estimate is equal to 0, i.e. every data value greater than the median would then be considered as an outlier.

In this context, it should be mentioned that the mean can also be replaced by the median and the standard deviation by the inter-quartile deviation, giving rise to the so called standard boxplot outlier detection strategy [12].
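For comparison, a sketch of the Hampel identifier based strategy, which only swaps the estimators for the median and the MAD scale estimate of equation (8) (again, the names and the default α are our own choices):

```python
import numpy as np

def hampel_outliers(data, alpha=3.0):
    """Hampel identifier: reference = median, scatter = MAD scale estimate
    (eq. 8, factor 1.4826); returns a Boolean outlier mask."""
    data = np.asarray(data, dtype=np.float64)
    med = np.median(data)
    mad_scale = 1.4826 * np.median(np.abs(data - med))   # eq. (8)
    return np.abs(data - med) > alpha * mad_scale
```

If more than 50% of the values are identical, mad_scale evaluates to 0 and every value away from the median is flagged, which is the degenerate behaviour noted above.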

4. METHODOLOGY

Four indoor video sequences, (a), (b), (c) and (d), as shown in Fig. 2, were chosen to test the efficiency and robustness of the moving shadow detection methods. In each case, the scene was illuminated by a fixed incandescent light source.

For the pixel-wise shadow search method, N frames of the background are taken and averaged to get the expected background frame. This is done since, due to camera sensor noise, the RGB colour value of any given pixel, υ_i, does not remain constant over the N background frames. Any one of the current frames is then considered and a pixel-by-pixel search undertaken to mark the shadow pixels, υ_i^s, based on the following criterion:

$$\text{if } \upsilon_i : \left\{\left(\rho_i > \beta\right) \,\&\, \left(\theta_i < \varsigma\right)\right\} \Rightarrow \upsilon_i^{\,s}, \qquad \forall\, i = 1, 2, \ldots, N \times M \qquad (9)$$

where in equation (9), N × M denotes the frame size and β and ς are the threshold values.
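Given the distortion arrays of Section 2, criterion (9) is a single vectorised comparison; the sketch below is illustrative, and the threshold values β and ς are scene-dependent placeholders rather than values taken from the paper.

```python
import numpy as np

def pixelwise_shadow_mask(rho, theta, beta, varsigma):
    """Criterion (9): a pixel is marked as a moving-shadow pixel when its
    brightness distortion exceeds beta AND its chromaticity distortion is
    below varsigma. rho and theta are H x W arrays of distortion factors."""
    return (rho > beta) & (theta < varsigma)
```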

For the modified method, the current frame is first subtracted from the expected background frame to obtain the difference frame [1]. Then, one of the outlier detection strategies (the ‘3σ-edit’ rule, the rule based on the use of the Hampel Identifier, or a rule based on an ad hoc selection of the threshold) is deployed to generate the binary mask. A mask-region based shadow search is then used to mark/eliminate the shaded region. Note that the shadow search criterion remains the same [cf. equation (9)]:

Fig. 2. For the sample indoor video sequences (a) Jar, (b) Cup, (c) Ball and (d) Mannequin: (i) shows the expected background frame, and (ii) one of the object frames.

$$\text{if } \upsilon_k : \left\{\left(\rho_k > \beta'\right) \,\&\, \left(\theta_k < \varsigma'\right)\right\} \Rightarrow \upsilon_k^{\,s}, \qquad \forall\, k \text{ such that the binary-mask value} = 1 \qquad (10)$$

where in (10), β′ and ς′ are the corresponding values of the relaxed thresholds.

It should be mentioned here that ρ_i (or ρ_k) is to be calculated as the signed difference (|E_i| − OP_i) so as to suppress highlights; the algorithm thus works well for near-Lambertian surfaces. A standard cleaning process is utilized to remove noise and to close the gaps in between the detected regions.
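A corresponding sketch of the mask-region based search of equation (10), with the relaxed thresholds applied only where the foreground binary mask equals 1; the morphological opening/closing shown here is one plausible choice for the ‘standard cleaning process’ mentioned above, not necessarily the one the authors used.

```python
import numpy as np
from scipy import ndimage

def mask_based_shadow_mask(rho, theta, fg_mask, beta_rel, varsigma_rel):
    """Criterion (10): apply the relaxed thresholds only on pixels where the
    foreground binary mask is 1, then clean the result (remove isolated
    noise and close small gaps between detected regions)."""
    shadow = (rho > beta_rel) & (theta < varsigma_rel) & fg_mask.astype(bool)
    shadow = ndimage.binary_opening(shadow, structure=np.ones((3, 3)))  # remove noise
    shadow = ndimage.binary_closing(shadow, structure=np.ones((5, 5)))  # close gaps
    return shadow
```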

5. PERFORMANCE METRICS

Investigations were also carried out as part of the work to compare the modified method with the original one. In this regard, it should be borne in mind that the efficiency of a given method can be quantitatively evaluated, firstly, by assessing its effectiveness as a good shadow point detector, i.e. by determining the probability of misclassifying an actual foreground shadow point as a non-shadow point, and, secondly, by determining its discrimination potential, i.e. by finding the probability of classifying a non-foreground-shadow point as an actual foreground shadow point [13]. Values for both parameters were determined for all the video sequences for the two methods using metrics similar to those proposed in [14]. One of the two metrics is termed the shadow detection rate, κ, and is defined as follows:

$$\kappa = \frac{\eta_{dfs}}{\eta_{fs}} \qquad (11)$$

where in (11), η_dfs is the total number of actual foreground shadow pixels detected by the scheme and η_fs is the total number of foreground shadow pixels.

From the definition it is evident that the range of κ is between 0 and 1. In the ideal case the total number of detected foreground shadow pixels should be equal to the total number of actual foreground shadow pixels and, then, κ = 1. In the practical case, the more efficient the shadow detection scheme, the closer to 1 will be the value attained by κ.

Another metric is the false detection rate, λ, which quantifies the discriminatory potential of the utilised method. It is defined as follows:

$$\lambda = \frac{\eta_{ds} - \eta_{dfs}}{\eta_{ds}} \qquad (12)$$

where in (12), η_ds is the total number of pixels detected as shadow points by the scheme.

The range of λ also lies between 0 and 1. Here, in the ideal case, the difference (η_ds − η_dfs) should be 0 and then λ will be equal to 0. However, in reality a value close to 0 indicates the efficient performance of the method. The values of the metrics were found for the four different video sequences, considering both the strong and soft portions of the shadow in each case, and the results have been tabulated in Table 1.
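Given Boolean masks for the ground-truth foreground shadow and for the pixels a scheme detects as shadow, the two metrics of equations (11) and (12) reduce to simple pixel counts; a sketch with hypothetical argument names:

```python
import numpy as np

def shadow_metrics(detected_shadow, true_fg_shadow):
    """Return (kappa, lambda) of equations (11) and (12) from Boolean masks:
    kappa  = detected true foreground-shadow pixels / all true foreground-shadow pixels,
    lambda = falsely detected shadow pixels / all detected shadow pixels."""
    detected_shadow = detected_shadow.astype(bool)
    true_fg_shadow = true_fg_shadow.astype(bool)
    n_dfs = np.sum(detected_shadow & true_fg_shadow)   # eta_dfs
    n_fs = np.sum(true_fg_shadow)                      # eta_fs
    n_ds = np.sum(detected_shadow)                     # eta_ds
    kappa = n_dfs / n_fs if n_fs else 0.0
    lam = (n_ds - n_dfs) / n_ds if n_ds else 0.0
    return kappa, lam
```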

We also calculated the total times taken by the two methods (Table 2) to mark the moving shaded region: τ_pws is the total time taken by the pixel-wise shadow search method, and τ_bms is the total time taken by the modified method. For the modified method, the time taken by the segmentation/(binary-mask generation) process, τ_seg, and the time taken by the core shadow search process, τ_csd, have been determined separately. Note that:

$$\tau_{bms} = \tau_{seg} + \tau_{csd} \qquad (13)$$
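The decomposition in equation (13) simply requires timing the two stages separately; a small helper along the following lines (our own sketch, not part of the original MATLAB code) suffices:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and return (result, elapsed seconds).
    Timing the mask-generation stage gives tau_seg, timing the core
    shadow search gives tau_csd; their sum is tau_bms, equation (13)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start
```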

6. RESULTS AND CONCLUSION

Fig. 3 (i)-(iv) shows the final results after background retrieval based on the detected moving shaded region using a method that employs the ‘3σ-edit’ rule; it becomes quite evident that the method completely fails for all the sequences. This is because the binary mask generated using the rule only partly covered the dynamic region in the scene. In turn, this suggests that the underlying data distribution was not Gaussian.

Fig. 3. Results after background retrieval based on the detected shaded region in video sequences (a)-(d); the shaded regions were detected through the use of the binary-mask (generated through the deployment of the ‘3σ-edit’ rule) based shadow search method.

Figs. 4-7 depict the outcome after background retrieval based on the detected moving shaded region mask; the moving shaded region mask was obtained through the application of the pixel-wise shadow search method and of the binary-mask based ones. Note that the binary masks were generated through the use of thresholds chosen either using the Hampel Identifier rule or on an ad hoc basis.

It becomes quite clear that the deployment of the binary-mask based methods helps to cover both the strong and soft portions of the moving shadows, unlike the pixel-wise method, which, through the use of strict thresholds, is able to encompass only the strong portions.

Fig. 4. Results after background retrieval based on the detected shaded region mask in video (a) using (i) the pixel-wise moving shadow search process; (ii) the binary-mask (generated through the use of a threshold chosen on an ad hoc basis) based moving shadow search process; (iii) the binary-mask (generated through the use of the Hampel Identifier) based moving shadow search process.


Fig. 5. Results after background retrieval based on the detected shaded region mask in video (b) using (i) the pixel-wise moving shadow search process; (ii) the binary-mask (generated through the use of a threshold chosen on an ad hoc basis) based moving shadow search process; (iii) the binary-mask (generated through the use of the Hampel Identifier) based moving shadow search process.

Fig. 6. Results after background retrieval based on the detected shaded region mask in video (c) using (i) the pixel-wise moving shadow search process; (ii) the binary-mask (generated through the use of a threshold chosen on an ad hoc basis) based moving shadow search process; (iii) the binary-mask (generated through the use of the Hampel Identifier) based moving shadow search process.

Fig. 7. Results after background retrieval based on the detected shaded region mask in video (d) using (i) the pixel-wise moving shadow search process; (ii) the binary-mask (generated through the use of a threshold chosen on an ad hoc basis) based moving shadow search process; (iii) the binary-mask (generated through the use of the Hampel Identifier) based moving shadow search process.


Table 1: Values of the two performance evaluation metrics, shadow detection rate (κ) and false detection rate (λ), for the four video sequences after application of the pixel-wise moving shadow search method and the binary-mask based methods.

Video      Method 1:                Method 2: Binary-mask based moving shadow search processes
sequence   Pixel-wise moving        Binary mask from an               Binary mask from the
           shadow search            ad hoc threshold                  Hampel Identifier
           κ        λ               κ        λ                        κ        λ
(a)        0.77     0.06            0.90     0.08                     0.96     0.13
(b)        0.78     0.15            0.91     0.20                     0.93     0.21
(c)        0.75     0.09            0.89     0.08                     0.91     0.12
(d)        0.79     0.22            0.86     0.25                     0.88     0.26

Table 2: The relative times taken by the segmentation/(binary-mask generation) processes and the core moving shadow search processes with respect to the total times taken by the binary-mask based methods, and the relative times taken by the binary-mask based methods with respect to the total time taken by the pixel-wise shadow search method. Times were measured with the codes executed in MATLAB; τ_bms/τ_pws = (τ_seg + τ_csd)/τ_pws.

Video      τ_pws (s)   Binary mask from an ad hoc threshold           Binary mask from the Hampel Identifier
sequence               τ_seg/τ_bms   τ_csd/τ_bms   τ_bms/τ_pws        τ_seg/τ_bms   τ_csd/τ_bms   τ_bms/τ_pws
(a)        0.30        0.56          0.44          2.13               0.61          0.39          2.16
(b)        0.39        0.52          0.48          1.79               0.55          0.45          1.83
(c)        0.31        0.55          0.45          2.15               0.58          0.42          2.21
(d)        0.37        0.51          0.49          1.88               0.54          0.46          1.68

Table 1 lists the efficiency of each of the applied methods [shadow detection rate (κ)] and the discrimination potential [false detection rate (λ)] evinced by each method. The occasional increase of the false detection rate where the binary-mask based methods were deployed results from the fact that these methods use relaxed threshold values to cover the entire shadow area. The relative times taken by the binary-mask based methods with respect to the pixel-wise moving shadow search method are listed in Table 2. Table 2 also shows the relative (foreground object segmentation)/(binary-mask generation) times and the core shadow search process times with respect to the total times in the case of the modified methods. It is quite obvious that the binary-mask generation takes more than 50% of the overall time and makes such methods slower than the pixel-wise moving shadow region search method.


Note that the binary-mask based methods outperform the pixel-wise moving shadow search method efficiency-wise. Moreover, it should be noted that although the modified methods take more time than the pixel-wise search method, they could still be applied in real-time applications. The Hampel Identifier based method shows good results efficiency-wise and should be used to make the overall process (inherently) almost automatic.

REFERENCES

[1] Mitra, B. K., Young, R. C. D., Chatwin, C. R., "On shadow elimination after moving region segmentation based on different threshold selection strategies," Optics and Lasers in Engineering, 45(11), 1088-1093 (2007).
[2] Mitra, B. K., Birch, P., Kypraios, I., Young, R. C. D., Chatwin, C. R., "On a Method to Eliminate Moving Shadows in Video Sequences," Proc. SPIE, 7000, 700012-1:9 (2008).
[3] Nadimi, S., Bhanu, B., "Physical Models for Moving Shadow and Object Detection in Video," IEEE Transactions on Pattern Analysis and Machine Intelligence, 1(1), 65-76 (1999).
[4] Pearson, R. K., "Outlier in process modelling and identification," IEEE Transactions on Control Systems Technology, 10(1), 55-63 (2002).
[5] Liu, H., Shah, S., Jiang, W., "On-line outlier detection and data cleaning," Journal of Computers & Chemical Engineering, 28, 1635-1647 (2004).
[6] Martin, R. D., Thompson, D. J., "Robust-resistant spectrum estimation," Proc. IEEE, 70, 1097-1114 (1992).
[7] Huber, P. J., [Robust Statistics], Wiley, New York (1981).
[8] Belsley, D. A., Kuh, E., Welsch, R. L., [Regression Diagnostics], Wiley, New York (1980).
[9] Horprasert, T., Harwood, D., Davis, L. S., "A Statistical Approach for Real-Time Robust Background Subtraction and Shadow Detection," Proc. IEEE International Conference on Computer Vision ('99 FRAME-RATE Workshop) (1999).
[10] Rosin, P. L., Ellis, T., "Image difference threshold strategies and shadow detection," Proc. Sixth British Machine Vision Conference, 347-356 (1995).
[11] Hahn, G. J., Shapiro, S. S., [Statistical Models in Engineering], John Wiley & Sons Inc, USA (1967).
[12] Pearson, R. K., [Mining Imperfect Data: Dealing with Contamination and Incomplete Records], SIAM, Philadelphia (2005).
[13] Prati, A., Mikic, I., Trivedi, M. M., Cucchiara, R., "Detecting Moving Shadows: Algorithms and Evaluation," IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(7), 918-923 (2003).
[14] Onuguchi, K., "Shadow Elimination Method for Moving Object Detection," Proc. International Conference on Pattern Recognition, 1, 583-587 (1998).
