OR-PCA with Dynamic Feature Selection for Robust Background Subtraction


Sajid Javed
School of Computer Science and Engineering
Kyungpook National University
80 Daehak-ro, Buk-gu, Daegu, 702-701, Republic of Korea
[email protected]

Andrews Sobral
Laboratoire L3I
Université de La Rochelle
17000, France
[email protected]

Thierry Bouwmans
Laboratoire MIA
Université de La Rochelle
17000, France
[email protected]

Soon Ki Jung∗
School of Computer Science and Engineering
Kyungpook National University
80 Daehak-ro, Buk-gu, Daegu, 702-701, Republic of Korea
[email protected]

ABSTRACT
Background modeling and foreground object detection is the first step in a visual surveillance system. The task becomes more difficult when the background scene contains significant variations, such as water surfaces, waving trees and sudden illumination changes. Recently, subspace learning models such as Robust Principal Component Analysis (RPCA) have provided an attractive framework for separating moving objects from stationary scenes. However, due to their batch optimization process, traditional RPCA-based approaches must process high-dimensional data all at once, which incurs huge computational complexity and memory problems. In contrast, Online Robust PCA (OR-PCA) can process such high-dimensional data in a stochastic manner: it processes one frame per time instance and updates the subspace basis accordingly when a new frame arrives. However, due to the lack of features, the sparse component of OR-PCA is not always robust enough to handle the various background modeling challenges, and the resulting performance is too weak for real applications. To handle these challenges, this paper presents a multi-feature based OR-PCA scheme. A multi-feature model is able to build a robust low-rank background model of the scene. In addition, a feature selection process is designed to dynamically select a useful set of features frame by frame, according to the weighted sum of all features. Experimental results on challenging datasets such as Wallflower, I2R and BMC 2012 show that the proposed scheme outperforms state-of-the-art approaches for the background subtraction task.

∗Prof. Jung is the corresponding author.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
SAC'15 April 13-17, 2015, Salamanca, Spain.
Copyright 2015 ACM 978-1-4503-3196-8/15/04...$15.00.
http://dx.doi.org/10.1145/2695664.2695863


Categories and Subject Descriptors
I.4.9 [Image Processing and Computer Vision]: Applications.

General Terms
System, Algorithm

Keywords
Multiple features, Online Robust-PCA, Feature selection, Foreground detection, Background modeling

1. INTRODUCTION
Separating moving objects from a video sequence is the first step in many computer vision and image processing applications. This pre-processing step isolates the moving objects, called "foreground", from the static scene, called "background". However, it becomes a genuinely hard task when the scene undergoes sudden illumination changes or geometric changes such as waving trees, water surfaces, etc. [6]

Many algorithms have been developed to tackle the challenging problems in background subtraction (also known as foreground detection) [6], [5]. Among them, Robust Principal Component Analysis (RPCA) based approaches offer an elegant framework for separating foreground objects from highly dynamic background scenes. Excellent surveys on background subtraction via RPCA can be found in [1].

Although RPCA-based approaches for background subtraction attract a lot of attention, they currently face some limitations. First, the algorithms rely on batch optimization: in order to decompose an input image A into a low-rank matrix L and a sparse component S, a chunk of samples must be stored in memory, which leads to huge memory usage and high computational cost. Second, no existing RPCA-based approach uses features instead of pixel intensities for background modeling, because doing so would require even more memory. Therefore, RPCA-based approaches are not suitable for practical background subtraction systems.

In contrast, Online Robust Principal Component Analysis (OR-PCA) [4] processes one frame per time instance via stochastic optimization and thus provides an appealing alternative to batch RPCA schemes. In [6], OR-PCA is adapted for background/foreground separation using image decomposition with an initialization scheme. However, only intensity features are considered in that work, and because of its parameter settings the system is not applicable to visual surveillance.

In this paper, we present a multi-feature based OR-PCA scheme for robust background subtraction. We briefly summarize our methodology here. First, a multiple feature extraction process is performed on a sliding block of N video frames, and the feature model is updated whenever a new sample arrives. Second, OR-PCA is applied to the multiple features of every incoming video frame. Third, similarity measures are computed between the background feature model and the extracted low-dimensional subspace model for each feature. In addition, a weighted sum of the similarity measures over all features is computed, and a dynamic feature selection scheme is applied according to the background statistics. Finally, foreground detection is performed on the results of OR-PCA. Integrating multiple features into OR-PCA improves the quality of the foreground and increases the quantitative performance compared to other RPCA via PCP based methods [1] and single-feature OR-PCA [6].

The rest of this paper is organized as follows. In Section 2, the related work is reviewed. Section 3 describes the proposed framework based on OR-PCA with Dynamic Feature Selection (ORPCA-DFS). Experimental results are discussed in Section 4. Finally, Section 5 concludes our work.

2. RELATED WORK
Over the past few years, excellent methods have been proposed for background subtraction using subspace learning models [1]. Among them, Oliver et al. [8] were the first to model the background using Principal Component Analysis (PCA). Foreground detection is then achieved by thresholding the difference between the reconstructed background and the input image. PCA provides a tractable subspace learning model, but it is not robust when the data is corrupted and outliers appear in the new subspace basis. In contrast, recent RPCA-based approaches [1] can tackle this problem of traditional PCA.

A remarkable amount of work has built on RPCA for background modeling; excellent surveys can be found in [1]. Candès et al. [2] proposed a robust convex optimization technique to address the shortcomings of PCA. Many RPCA approaches, such as the Augmented Lagrangian Multiplier (ALM) method, Singular Value Thresholding (SVT) and the Linearized Alternating Direction Method with an Adaptive Penalty (LADMAP), discussed in [1], solve a sub-optimization problem to separate the low-rank matrix and the sparse error in each iteration under a defined convergence criterion. These RPCA methods work in a batch optimization manner, and as a result suffer from huge memory usage and high time complexity.

Feng and Xu [4] recently proposed the Online Robust-PCA (OR-PCA) algorithm, which processes one sample per time instance using stochastic approximations (no batch optimization is needed). The nuclear-norm objective function is reformulated in this approach, so that all samples are decoupled in the optimization process for sparse error separation; however, no results are reported for the background subtraction application in their work. Therefore, Javed et al. [5] modified OR-PCA for background/foreground separation. Only intensity information, via image decomposition with a Markov Random Field (MRF), is utilized to enhance the sparse component for dynamic background subtraction. A number of encouraging results are shown in [5], but tedious parameter tuning is the main drawback of their approach.

All these RPCA-based schemes work in either online or batch optimization manners. In addition, only intensity or a single color channel is used for sparse error separation. As a result, the foreground detection is not always robust, since pixel values alone are insufficient across different background scenes. To deal with this situation, we present a multiple feature scheme that is integrated into OR-PCA with dynamic feature selection for handling different background dynamics.

3. METHODOLOGY
In this section, we discuss our multi-feature based ORPCA-DFS scheme for robust background subtraction in detail. Our scheme consists of several steps: multiple feature extraction, feature background model, feature model update, OR-PCA, dynamic feature selection and foreground detection, shown as a system diagram in Fig. 1.

To proceed, the features are first extracted and the feature background model is created using a video block of N frames. The model is then updated continuously to adapt to changes of the background scene. The modified OR-PCA methodology is applied to each feature model for every incoming video frame. Then, similarity measures are computed between the low-rank feature model and the features of the input frame. Moreover, a frame-by-frame dynamic feature selection scheme is designed according to the weighted sum of all features, and finally the foreground detection is performed. In the following sections, we describe each module in detail; a compact sketch of the overall loop is given below.
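To make the flow concrete, here is a minimal Python sketch of the loop just described. It is our reading of the system diagram, not the authors' released code (their implementation is in Matlab); all function names (extract_features, orpca_update, dynamic_feature_selection, foreground_mask) are hypothetical placeholders for the blocks of Fig. 1 and are sketched in the following sections.

```python
# Hypothetical skeleton of the ORPCA-DFS loop (Fig. 1); the helper
# functions are placeholders elaborated in Sections 3.1-3.4.
def orpca_dfs(frames, N=10):
    block = [extract_features(f) for f in frames[:N]]       # (a)-(c) build model
    masks = []
    for frame in frames[N:]:
        feats = extract_features(frame)                     # (b) multi-feature extraction
        block.pop(0); block.append(feats)                   # (d) sliding-window update
        low_rank, sparse = orpca_update(feats)              # (e) stochastic OR-PCA
        category, W = dynamic_feature_selection(feats, low_rank, block)  # (f)
        masks.append(foreground_mask(W, sparse, category))  # (g) thresholding, Eqn. (7)
    return masks
```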

3.1 Build Feature Model
First, the feature extraction process is described. Our work differs from previous background modeling schemes, where the model is created directly from the grayscale image or color information. In this paper, the background model is created using a multiple feature extraction process. A sliding block is created to store the last N frames in a data matrix $A_t \in \mathbb{R}^{m \times n \times N}$, where $A_t$ represents the input data matrix A at time t. The width and height of the frame are denoted by m and n, whereas N denotes the number of frames stored in the sliding block, e.g., ten in our experiments.

The matrix $A_t$ is transformed into another matrix $D_t \in \mathbb{R}^{d_1 \times N \times d_2}$ after the feature extraction process, where $d_1$ is the number of pixels (i.e., $m \times n$) and $d_2$ is the number of features.

In our experiments, nine different features are used: three color channels (red, green and blue), intensity, local binary pattern, spatial gradients in the horizontal and vertical directions, and the spatial gradient magnitude. In addition, Histograms of Oriented Gradients (HOG) [3], a well-known feature originally developed for human detection, is also used in this work.

Figure 1: Overview of the multiple feature based ORPCA-DFS framework, with blocks (a) sliding block of frames, (b) multiple features extraction, (c) feature background model, (d) update feature model, (e) Online Robust-PCA, (f) dynamic feature selection, and (g) foreground detection.

The image resolution differs across datasets. For example, 120×160 frames (19,200 pixels) are used for the Wallflower dataset [10], so the dimension of our feature model is $D_t \in \mathbb{R}^{19200 \times 10 \times 9}$. A sketch of the per-frame feature extraction is given below.
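A per-frame extraction of these features could look like the following NumPy sketch. It covers eight of the nine channels; HOG is omitted for brevity and could be supplied by an external implementation such as skimage.feature.hog (our assumption, not prescribed by the paper). The simple 8-neighbour LBP variant below is also our choice, since the paper does not specify which LBP formulation it uses.

```python
import numpy as np

def extract_features(rgb):
    """Stack the per-pixel features of Section 3.1 into a (d1 x d2) matrix.
    rgb: float array of shape (m, n, 3). HOG is omitted here (see lead-in)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = 0.299 * r + 0.587 * g + 0.114 * b
    gy, gx = np.gradient(intensity)            # vertical / horizontal gradients
    mag = np.hypot(gx, gy)                     # spatial gradient magnitude
    lbp = local_binary_pattern(intensity)      # simple 8-neighbour LBP below
    feats = [r, g, b, intensity, lbp, gx, gy, mag]
    return np.stack([f.ravel() for f in feats], axis=1)   # (m*n) x 8

def local_binary_pattern(img):
    """Basic 8-neighbour LBP (one common variant among several)."""
    p = np.pad(img, 1, mode='edge')
    center = p[1:-1, 1:-1]
    code = np.zeros_like(img)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neigh = p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
        code += (neigh >= center) * (1 << bit)
    return code
```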

Once the model is created from the sliding block of N frames, it is updated continuously as new frames arrive. Every time a new video frame arrives, the sliding block appends the new frame and removes the oldest one, exactly as in a sliding window. The steps described above are shown in Fig. 1 (a), (b), (c) and (d), respectively; a minimal sketch of this update follows.
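The sliding-block update itself reduces to a bounded deque; a minimal sketch, assuming each entry is the (d1 × d2) feature matrix of one frame:

```python
from collections import deque
import numpy as np

N = 10                                   # frames kept in the sliding block
block = deque(maxlen=N)                  # appending past N drops the oldest

def update_feature_model(block, frame_feats):
    """Append the newest (d1 x d2) feature matrix; deque(maxlen=N) removes
    the oldest automatically, mirroring the sliding-window update above."""
    block.append(frame_feats)
    # Current feature model D_t, of shape (d1, len(block), d2)
    return np.stack(list(block), axis=1)
```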

3.2 OR-PCA via Stochastic Optimization
Online Robust PCA [4] is applied to each frame, which carries multiple features. OR-PCA replaces the nuclear norm in the objective function of traditional PCP algorithms with an explicit product of two low-rank matrices, i.e., a basis and a coefficient matrix. Thus, OR-PCA can be formulated as

$$\min_{L \in \mathbb{R}^{d \times r},\; R \in \mathbb{R}^{n \times r},\; E} \;\frac{1}{2}\|D - LR^{T} - E\|_{F}^{2} + \frac{\lambda_{1}}{2}\left(\|L\|_{F}^{2} + \|R\|_{F}^{2}\right) + \lambda_{2}\|E\|_{1}, \qquad (1)$$

where D is the input data (one sample at a time), d is the number of pixels times the number of features, i.e., $d_1 \times d_2$, L is the basis, R is the coefficient matrix and E is the sparse error. $\lambda_1$ controls the basis and coefficients of the low-rank matrix, whereas $\lambda_2$ controls the sparsity pattern; both can be tuned according to the video analysis. In addition, the basis and coefficients depend on the value of the rank. In our experiments, we used fixed values of $\lambda_1 = 0.01$, $\lambda_2 = 0.04$ and $r = 5$ to initialize a low-dimensional subspace basis for OR-PCA. As a result, OR-PCA converges faster than the original formulation [4].

In particular, the OR-PCA optimization consists of two iterative components. First, every incoming video frame from the sliding block is projected onto the current basis and the sparse noise component, which contains the outlier contamination, is separated. Then, the basis is updated with the new input video frame. More details can be found in [4]; a simplified sketch of this per-frame step is given below.

The low-rank model $L_t$ of each feature of each frame is then obtained as the product of the basis L and its coefficients R, whereas the sparse component E is computed for the intensity feature only, and it constitutes the foreground objects. This step is shown in Fig. 1 (e).

3.3 Dynamic Feature Selection
In this section, our dynamic feature selection scheme is described in detail. As mentioned above, the low-rank model $L_t$ is obtained using OR-PCA. Then, for each new frame, a weighted sum of similarity measures is computed. This consists of two main steps: first the similarity measures are computed, and then the weighting factor of each feature component is computed dynamically.

Let F be the set of features extracted from the input frame $I_t$ and F′ be the set of low-rank features extracted from the reconstructed low-rank model $L_t$. Then the similarity function S for the kth feature at pixel (i, j) is computed as follows:

$$S_k(i,j) = \begin{cases} \dfrac{F_k(i,j)}{F'_k(i,j)}, & \text{if } F_k(i,j) < F'_k(i,j),\\[4pt] 1, & \text{if } F_k(i,j) = F'_k(i,j),\\[4pt] \dfrac{F'_k(i,j)}{F_k(i,j)}, & \text{if } F_k(i,j) > F'_k(i,j), \end{cases} \qquad (2)$$

where $F_k(i,j)$ and $F'_k(i,j)$ are the feature values at pixel (i, j) for the kth feature, and $S_k(i,j)$ lies between 0 and 1. In addition, since HOG features are histogram distributions, their distance can be measured with the well-known Bhattacharyya coefficient. Let $h^{t_0}$ and $h^{t}$ be two normalized histograms computed from the input and reference images. Then the Bhattacharyya distance d is computed as:

$$d = \sum_{l=1}^{L} \sqrt{h^{t_0}_l\, h^{t}_l}, \qquad (3)$$

where L is the number of bins in each histogram and d lies in the range 0 to 1. A sketch of both measures is given below.
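Both measures reduce to a few lines of NumPy. Note that Eqn. (2) is simply the ratio of the smaller to the larger value, assuming non-negative features; the epsilon guard for all-zero pixels is our addition:

```python
import numpy as np

def similarity_map(F_k, Fp_k, eps=1e-8):
    """Eqn. (2): min/max ratio per pixel; 1 where the frame feature and the
    low-rank model agree, approaching 0 as they diverge (non-negative inputs)."""
    lo = np.minimum(F_k, Fp_k)
    hi = np.maximum(F_k, Fp_k)
    return np.where(hi > eps, lo / np.maximum(hi, eps), 1.0)

def bhattacharyya(h0, h1):
    """Eqn. (3): Bhattacharyya coefficient of two normalized histograms, in [0, 1]."""
    return float(np.sum(np.sqrt(h0 * h1)))
```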

Next, a weighted combination of the similarity measures is computed as follows:

$$W(i,j) = \sum_{k=1}^{K} w_k S_k(i,j), \qquad (4)$$

where K is the total number of features and $w_k$ is the weighting factor of the kth feature.

In previous approaches, once features are extracted and matched, the weighting factor $w_k$ of each component is chosen empirically to maximize the true pixels and minimize the false pixels in the foreground detection, which is very tedious for large-scale video analysis. After analyzing the weighting factors, we observed a tendency: a static background requires a smaller value, whereas a dynamic background needs a higher value to adapt to changes of the scene.

In this work, our scheme selects the weight $w_k$ of each component dynamically. The weight $w_k$ is computed frame by frame as the sum of the ratio of the mean µ to the variance σ of each feature. It is observed experimentally that the mean µ and variance σ of dynamic backgrounds are always greater than those of static backgrounds. Therefore, the total weighted sum of all features $W_{sum}$ is smaller for dynamic backgrounds than for static backgrounds, as shown in Fig. 3. The weight $w_k$ of each feature is computed as follows:

$$w_k = \sum_{i} \sum_{j} \frac{\mu_k(i,j)}{\sigma_k(i,j)}, \qquad W_{sum} = \sum_{k=1}^{K} w_k, \qquad (5)$$

where K is the number of features, and $\sigma_k$ and $\mu_k$ are the variance and mean of the kth feature, respectively. A sketch of this computation is given below.
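Under a direct reading of Eqn. (5), the per-pixel means and variances come from the sliding-block feature model $D_t$ of Section 3.1; the epsilon guard against zero variance is our addition. Note that Fig. 3 shows $W_{sum}$ below 0.6, which suggests an additional normalization not spelled out in the text, so treat this as a sketch of the formula rather than of the authors' exact scaling:

```python
import numpy as np

def feature_weights(D_t, eps=1e-8):
    """Eqn. (5): D_t has shape (d1, N, d2); mean/variance are taken over the
    N frames of the sliding block, per pixel and per feature."""
    mu = D_t.mean(axis=1)                 # (d1, d2) per-pixel means
    var = D_t.var(axis=1) + eps           # (d1, d2) per-pixel variances
    w = (mu / var).sum(axis=0)            # one weight w_k per feature channel
    return w, float(w.sum())              # (w_1..w_K, W_sum)

def weighted_similarity(S_maps, w):
    """Eqn. (4): W(i, j) as the weighted sum of the K similarity maps."""
    return sum(wk * Sk for wk, Sk in zip(w, S_maps))
```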

We have experimentally observed that HOG features are robust for clearly visible humans, as shown in Fig. 2 (b). Color information, local binary patterns and gradient features are robust for highly dynamic backgrounds, but fail in static cases where background and foreground objects have similar features, as in Fig. 2 (c). Moreover, the sparse features E in Eqn. (1) of OR-PCA are very robust for static background scenes, but fail in highly dynamic backgrounds, as shown in Fig. 2 (d) and [5].

Therefore, the selected useful features participate frame by frame, according to the weighted sum of all features $W_{sum}$, in three types of background scene: static background scenes, where pixel values show no variation, as in the 1st and 4th rows of Fig. 2; little dynamic background scenes, where some part of the background contains pixel variations, as in the 2nd row of Fig. 2; and highly dynamic backgrounds, where most of the background pixels vary strongly, as in the 3rd row of Fig. 2.

The features participate frame by frame according to the background dynamics, as follows:

Figure 2: An example of multi-feature selection. From left to right: (a) input, (b) HOG features, (c) color, local binary pattern and gradient features, and (d) sparse features.

$$\text{Background dynamics} = \begin{cases} \text{Highly dynamic}, & \text{if } W_{sum} \le 0.2,\\ \text{Small dynamic}, & \text{if } 0.2 < W_{sum} < 0.3,\\ \text{Static}, & \text{if } W_{sum} \ge 0.3. \end{cases} \qquad (6)$$

HOG features participate in every situation when the foreground object is a human; otherwise only a minor participation is observed. For static background scenes, only the sparse features E participate, and all other features can be ignored. Color, local binary pattern and gradient features participate only in highly dynamic background scenes, and the remaining features can be rejected. Similarly, all features participate together in the case of a small dynamic scene. The results of these individual features are shown in Fig. 2 for different scenarios. These different background scenes do not occur independently but together within the same sequence, as shown in Fig. 3. A sketch of this selection rule follows.
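Eqn. (6) and the participation rules above translate into a small dispatcher. The channel labels and the human_present flag are our illustrative assumptions; the paper does not specify how human presence is detected:

```python
def classify_dynamics(W_sum):
    """Eqn. (6): map the total feature weight to a background category."""
    if W_sum <= 0.2:
        return "highly_dynamic"
    return "small_dynamic" if W_sum < 0.3 else "static"

def select_features(category, human_present=False):
    """Participation rules from the text; names label the channels of Sec. 3.1."""
    if category == "static":
        active = {"sparse"}                     # only the OR-PCA sparse term
    elif category == "highly_dynamic":
        active = {"red", "green", "blue", "lbp", "gx", "gy", "grad_mag"}
    else:                                       # small dynamic: everything joins
        active = {"red", "green", "blue", "intensity", "lbp",
                  "gx", "gy", "grad_mag", "sparse"}
    if human_present:                           # HOG joins for human subjects
        active.add("hog")
    return active
```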

Figure 3: $W_{sum}$ over 200 frames for highly dynamic, static and little dynamic background scenes, classified according to Eqn. (6); the vertical axis spans 0 to 0.6.

3.4 Foreground Detection
The foreground mask is obtained after the feature selection process according to $W_{sum}$. When the OR-PCA sparse feature participates, it is thresholded to obtain the binary foreground mask. For all other features, the foreground mask is obtained by applying the following threshold function:

$$fg(i,j) = \begin{cases} 1, & \text{if } W(i,j) < t,\\ 0, & \text{otherwise}, \end{cases} \qquad (7)$$

where t is a threshold whose value varies from 0.2 to 0.4. In our experiments, 0.2 is used for static backgrounds, and 0.3 and 0.4 are used for little and highly dynamic backgrounds, respectively. A minimal sketch of this step follows.
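A minimal sketch of Eqn. (7), with the per-category threshold values quoted above (the mapping of threshold to category is our reading of the text):

```python
import numpy as np

THRESH = {"static": 0.2, "small_dynamic": 0.3, "highly_dynamic": 0.4}

def foreground_mask(W, category):
    """Eqn. (7): pixels whose weighted similarity W(i, j) falls below the
    scene-dependent threshold t are labeled foreground (1)."""
    return (W < THRESH[category]).astype(np.uint8)
```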

4. EXPERIMENTAL RESULTS
In this section, both quantitative and qualitative results are presented in detail. We have tested our algorithm on three challenging datasets, namely the Wallflower [10], I2R [7] and BMC 2012 [11] datasets.

Due to space limitations, qualitative results are presented on some specific sequences from each dataset. Our algorithm is compared with three methods, namely Mixture of Gaussians (MOG) [9], Semi-Soft GoDec (SGD) [12] and DECOLOR (DEC) [13]. In addition, we also compare this work with the ORPCA-MRF (OR-MRF) method for dynamic background subtraction of Javed et al. [5]. The algorithm is implemented in Matlab R2013a on a 3.40 GHz Intel Core i5 processor with 4 GB RAM. Additionally, 5 × 5 median filtering is applied to the binary mask as a post-processing step.

From the Wallflower dataset [10], five of the seven sequences are presented, namely Waving Trees (WT), Camouflage (CF), Foreground Aperture (FA), Light Switch (LS) and Time of Day (TOD). Four sequences, namely Moving Curtain (MC), Water Surface (WS), Lobby and Shopping Mall (SM), are taken from the I2R dataset [7] for visual results. Each sequence in both datasets has a frame size of 120×160. Figs. 4 and 5 show the qualitative results on these datasets. Synthetic sequences, such as rotary and street, are tested from the Evaluation category of the BMC 2012 dataset [11]. Each category contains five videos of size 480×640. Fig. 6 shows the visual results on the rotary sequences.

ORPCA-DFS is also evaluated quantitatively against the other methods. The well-known F-measure score is computed for all sequences by comparing our results with the available corresponding ground-truth data. The Wallflower and I2R datasets are evaluated quantitatively according to this criterion, while BMC 2012 is evaluated with its provided evaluation tool¹, which computes the average score for each video. Tables 1, 2 and 3 show the performance achieved on the three datasets. The (−) entries in Table 1 indicate that the DEC method cannot process large numbers of frames, so no F-measure score is available. In each case our algorithm outperforms the other state-of-the-art methods, with average F-measure scores of 82.38%, 84.99%, 85.99% and 86.50%, respectively.

The computational time was also investigated during our experiments. Since multiple features participate, the computational complexity is observed frame by frame. It takes almost 0.7 seconds to process each 120 × 160 video frame without HOG features, and almost 1.5 seconds with HOG features.

¹ http://bmc.univ-bpclermont.fr/?q=node/7

Figure 4: Wallflower dataset. From left to right: (a) input, (b) ground truth, (c) Semi-Soft GoDec, (d) MOG, (e) DECOLOR, (f) ORPCA-MRF, and (g) ORPCA-DFS.

Figure 5: I2R dataset. From left to right: (a) input, (b) ground truth, (c) Semi-Soft GoDec, (d) MOG, (e) DECOLOR, (f) ORPCA-MRF, and (g) ORPCA-DFS.

Figure 6: BMC dataset, rotary sequence. From left to right: (a) input, (b) ground truth, (c) Semi-Soft GoDec, (d) MOG, (e) DECOLOR, and (f) ORPCA-DFS.

These favorable statistical evaluations show that OR-PCA with the dynamic feature selection scheme has strong potential for background/foreground separation.

Table 1: Wallflower Dataset: Comparison of F-measure score (%).

Sequence   MOG     SGD     DEC     OR-MRF   Ours
WT         66.39   18.29   88.45   86.89    88.60
LS         16.86   26.71   -       85.17    80.36
FA         32.91   24.51   -       69.10    80.17
CF         74.21   66.31   38.56   91.18    92.38
TOD        -       13.73   -       70.63    70.40
Average    38.07   29.91   25.40   80.59    82.38

Table 2: I2R Dataset: Comparison of F-measure score (%).

Sequence   MOG     SGD     DEC     OR-MRF   Ours
MC         77.09   43.44   87.00   89.20    90.39
WS         86.23   44.73   90.22   91.60    88.26
Lobby      58.98   36.20   64.60   80.81    83.13
SM         67.78   65.54   68.22   73.60    78.18
Average    72.52   47.22   77.51   83.80    84.99

Table 3: BMC Rotary/Street Sequences: Comparison of F-measure score (%); each entry is rotary / street, and the rotary sequences correspond one-to-one with Fig. 6.

Methods   1st             2nd             3rd             4th             5th             Avg
MOG       88.01 / 83.60   88.00 / 84.33   62.45 / 63.94   61.84 / 61.46   77.80 / 66.06   75.62 / 71.87
SGD       87.03 / 86.56   86.87 / 86.55   85.66 / 0       81.05 / 66.80   74.47 / 65.60   83.01 / 78.27
DEC       88.65 / 87.45   89.25 / 86.78   85.55 / 86.10   81.69 / 78.52   77.61 / 74.13   84.55 / 82.59
Ours      89.66 / 88.00   89.80 / 88.10   85.40 / 84.80   83.60 / 84.80   81.50 / 84.60   85.99 / 86.50

5. CONCLUSION

In this paper, OR-PCA with a dynamic feature selection scheme is presented for challenging background scenes. The scheme provides a principled way to select multiple features frame by frame, according to the weighted sum of all feature components. First, the feature background model is initialized; then OR-PCA is applied to the feature model to separate the low-rank model. Experimental evaluations and comparisons with other state-of-the-art methods show the effectiveness and robustness of our proposed technique. However, the similarity measuring process takes a significant amount of time when features are extracted from every incoming video block. Therefore, our future work will focus on improving the computational complexity, as well as on fuzzy and tensor versions of OR-PCA for robust foreground detection.

6. ACKNOWLEDGEMENT
This work is supported by the World Class 300 project, Development of HD video/network-based video surveillance system (10040370), funded by the Ministry of Trade, Industry, and Energy (MOTIE, Korea).

7. REFERENCES
[1] T. Bouwmans and E. H. Zahzah. Robust PCA via Principal Component Pursuit: A review for a comparative evaluation in video surveillance. Computer Vision and Image Understanding, pages 22–34, 2014.
[2] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust Principal Component Analysis? Journal of the ACM (JACM), 58(3):11, 2011.
[3] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), volume 1, pages 886–893, 2005.
[4] J. Feng, H. Xu, and S. Yan. Online Robust PCA via stochastic optimization. In Advances in Neural Information Processing Systems, pages 404–412, 2013.
[5] S. Javed, S. H. Oh, A. Sobral, T. Bouwmans, and S. K. Jung. OR-PCA with MRF for robust foreground detection in highly dynamic backgrounds. In 12th Asian Conference on Computer Vision (ACCV), 2014.
[6] S. Javed, S. H. Oh, J. Heo, and S. K. Jung. Robust background subtraction via Online Robust PCA using image decomposition. In Proceedings of the 2014 Research in Adaptive and Convergent Systems, pages 90–96, 2014.
[7] L. Li, W. Huang, I.-H. Gu, and Q. Tian. Statistical modeling of complex backgrounds for foreground object detection. IEEE Transactions on Image Processing, 13(11):1459–1472, 2004.
[8] N. M. Oliver, B. Rosario, and A. P. Pentland. A Bayesian computer vision system for modeling human interactions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):831–843, 2000.
[9] C. Stauffer and W. E. L. Grimson. Adaptive background mixture models for real-time tracking. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, 1999.
[10] K. Toyama, J. Krumm, B. Brumitt, and B. Meyers. Wallflower: Principles and practice of background maintenance. In Proceedings of the Seventh IEEE International Conference on Computer Vision, volume 1, pages 255–261, 1999.
[11] A. Vacavant, T. Chateau, A. Wilhelm, and L. Lequievre. A benchmark dataset for outdoor foreground/background extraction. In Computer Vision - ACCV 2012 Workshops, pages 291–300. Springer, 2013.
[12] T. Zhou and D. Tao. GoDec: Randomized low-rank & sparse matrix decomposition in noisy case. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 33–40, 2011.
[13] X. Zhou, C. Yang, and W. Yu. Moving object detection by detecting contiguous outliers in the low-rank representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(3):597–610, 2013.