Distinguishing Fall Activities using Human Shape Characteristics



Homa Foroughi1, Mohamad Alishahi2, Hamidreza Pourreza1, Maryam Shahinfar3

1Department of Computer Engineering, Ferdowsi University of Mashhad, Iran 2Islamic Azad University, Mashhad Branch, Iran 3Islamic Azad University, Ramsar Branch, Iran

[email protected], [email protected], [email protected], [email protected]

Abstract- Video surveillance is an omnipresent topic when it comes to enhancing security and safety in intelligent home environments. In this paper we propose a novel method to detect various posture-based events in a typical elderly monitoring application in a home surveillance scenario. These events include normal daily life activities, abnormal behaviors and unusual events. Because falling and its physical and psychological consequences in the elderly are a major health hazard, we monitor human activities with particular interest in the problem of fall detection. The combination of a best-fit approximated ellipse around the human body, the horizontal and vertical velocities of movement, and the temporal changes of the centroid point provides a useful cue for the detection of different behaviors. The extracted feature vectors are finally fed to a fuzzy multi-class support vector machine for precise classification of motions and determination of a fall event. The reliable recognition rate obtained in our experiments underlines the satisfactory performance of our system.

I. INTRODUCTION

Intelligent video surveillance systems are receiving a great deal of interest, especially in the fields of personal security and assistance. The fundamental capabilities of a visual surveillance system are to recognize and track the trajectories of the moving objects in the scene, and to use them for further processing. In the past few decades, vision-based surveillance has been extensively applied to industrial inspection, traffic control, security systems, and medical and scientific research. One of the application areas of video surveillance is monitoring the safety of the elderly in home environments.

Human society has been experiencing tremendous demographic changes in aging since the turn of the 20th century. In most countries, elderly people represent the fastest growing segment of the population, and this trend will intensify over the coming years. According to a report of the U.S. Census Bureau, there will be a 210% increase in the population aged 65 and over within the next 50 years [1]. In the case of elderly people living on their own, there is a particular need to monitor their behavior, such as a fall or a long period of inactivity. A fall incident not only causes disabling fractures but also has dramatic psychological consequences that reduce the independence of the person. It is shown in [11] that 28-34% of elderly people in the

community experience at least one fall every year, and 40-60% of the falls lead to injury. Thus, with the population growing older and an increasing number of people living alone, supportive home environments able to automatically monitor human activities are likely to become widespread due to their promising ability to help elderly people. Nowadays, the usual solution to detect falls is wearable sensors. These autonomous sensors are usually attached under the armpit, around the wrist, behind the ear's lobe or at the waist. These devices integrate accelerometer and/or inclinometer sensors and can monitor velocity, acceleration, and the transition from a vertical to a lying posture. However, the problem with such detectors is that older people often forget to wear them; their efficiency relies on the person's ability and willingness to wear them. Moreover, non-contact sensors often provide fairly crude data that is difficult to interpret. Computer vision systems track the resident using cameras installed at vantage locations and attempt to detect a fall event based on image-processing algorithms that are designed to identify unusual inactivity [3].

In this paper we present a novel video-analysis-based approach for monitoring human activities, with particular interest in the problem of fall detection. The remainder of the paper is organized as follows: in section II we briefly review some existing vision-based fall detection systems; in section III our proposed system is introduced and technical details are described. Experimental results are presented in section IV. Section V draws conclusions and outlines future work directions.

II. RELATED WORK

Recently, some research has been done on detecting falls using image processing techniques. A simple method was used in [5], [6] based on analyzing the aspect ratio of the moving object's bounding box. This method can be inaccurate, depending on the relative position of the person, the camera, and possible occluding objects. The works in [2], [7] used the normalized vertical and horizontal projections of the segmented object as feature vectors. To overcome the occlusion problem, some

M. Iskander et al. (eds.), Technological Developments in Education and Automation, DOI 10.1007/978-90-481-3656-8_95, © Springer Science+Business Media B.V. 2010


researchers have mounted the camera on the ceiling: Lee [8] detected a fall by analyzing the shape and 2D velocity of the person. Nait-Charif [9] tracked the person using an ellipse and inferred a fall incident when the target person was detected as inactive outside normal zones of inactivity such as chairs or sofas. Rougier [4] used wall-mounted cameras to cover large areas, and falls were detected using the motion history image and human shape variation. Other systems used audio information or the 3D trajectory and speed of the head to infer events [11]. These mechanisms tend to be more complex and incur additional cost. Despite the considerable achievements that have been accomplished in this field in recent years, there are still some clear challenges to overcome. To address these limitations, this paper presents a novel approach which aims to detect and record not only fall events but also other postures in a home environment with a considerable recognition rate. The main contributions of this paper are as follows: • One of the major challenges of vision-based systems is

the perceived intrusion of privacy because of the way that image data are transmitted and analyzed. To circumvent this problem, the captured images in the proposed method are immediately filtered at the first level into motion features, which encapsulate only the motion vectors of the subject. Visual images are not stored or transmitted at any stage of the process, so it is impossible to reconstruct the original image from the abstracted information.

• Visual fall detection is inherently prone to high levels of false positives, as what appears to be a fall might not be a fall but a deliberate movement towards the ground. In other words, most current systems [6], [7], [9], [10] are unable to discriminate between a real fall incident and an event in which the person simply lies down or sits down abruptly. This paper presents an approach that is not inherently prone to false positives, because it incorporates an efficient classifier in assessing the presence of fall events.

• To simulate real-life situations, we considered comprehensive and varied movement scenarios consisting of normal daily life activities such as walking, running, bending down, sitting down and lying down, some abnormal behaviors like limping or stumbling, and also unusual events like falling.

III. PROPOSED SYSTEM

This paper proposes a novel method for monitoring human posture-based events in a home environment, with a focus on detecting three types of falls. Our approach exploits computer vision techniques to detect and track people inside a single room with a single camera, classifying different postures and detecting falls. Since

we are interested in analyzing the motion occurring in a given window of time, we first need to obtain a segmentation of the moving objects. Therefore, a background estimation procedure is performed to separate motion from background. After the silhouettes are acquired, the next step involves extracting features and tracking their patterns over time. Because the classification results depend on an efficient representation of the images by the feature vectors, this procedure is a determining factor in performance. Since changes in the human shape can discriminate whether the detected motion is normal (e.g. the person walks or sits) or abnormal (e.g. the person falls), we analyze the shape changes of the extracted silhouettes in the video sequence. The combination of the approximated ellipse around the human body, the horizontal and vertical velocities, and the temporal changes of the centroid of the human body provides a useful cue for detecting different behaviors. The extracted features are then fed to the fuzzy multi-class support vector machine for precise classification of motions.

A. Foreground Segmentation

Background subtraction is a particularly popular method for motion segmentation; it attempts to detect moving regions by differencing the current image and a reference background image in a pixel-by-pixel fashion. To segment the human body and extract the moving objects, we use the background subtraction method described in [15], which is fairly robust and gives appropriate results on image sequences with shadows and highlights.
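To illustrate the general idea of pixel-wise background subtraction (not the fuzzy cellular-automata method of [15] itself), the sketch below maintains a simple running-average background model and thresholds the absolute difference. The function names, `alpha`, and `threshold` are illustrative assumptions, not parameters from [15]:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model (illustrative; not the
    cellular-automata method of [15])."""
    return (1.0 - alpha) * background + alpha * frame

def segment_foreground(background, frame, threshold=30.0):
    """Mark pixels whose absolute difference from the background model
    exceeds a fixed threshold as foreground (the silhouette)."""
    diff = np.abs(frame.astype(np.float64) - background)
    return diff > threshold

# Toy example: empty background, then a bright 100x30-pixel "person".
background = np.zeros((240, 320))
frame = background.copy()
frame[100:200, 150:180] = 255.0
mask = segment_foreground(background, frame)
background = update_background(background, frame)  # slowly absorb the scene
print(int(mask.sum()))  # -> 3000 foreground pixels
```

A real implementation would additionally suppress shadows and highlights, which is precisely what motivates the more robust method of [15].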

B. Feature Extraction

One of the issues of major importance in a recognition system is feature extraction, i.e., the transition from the initial data space to a feature space that will make the recognition problem more tractable. Intuitively, the human shape appears to be a good feature to look at as it captures the motion of most of the body parts. So we analyze the shape changes of the extracted silhouettes in the video sequence. To this aim, three main features that retain the motion information of the actions and can be easily obtained are selected to form the feature vector.

1) Best-fit Ellipse

When a motion is detected, an analysis of the moving object is performed to detect changes in the human shape, more precisely in its orientation and proportions. The person is then approximated by an ellipse using moments [4]. An ellipse is defined by its center (x̄, ȳ), its orientation θ, and the lengths a and b of its major and minor semi-axes [13]. The authors believe that this representation is rich enough to support recognition of different events. For a continuous image f(x, y), the moments are given by:


m_{pq} = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} x^p y^q f(x, y)\, dx\, dy, \qquad p, q = 0, 1, 2, \ldots \qquad (1)

The center of the ellipse is obtained by computing the coordinates of the center of mass from the zero- and first-order spatial moments. The centroid (x̄, ȳ) is then used to compute the central moments as follows:

\mu_{pq} = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} (x - \bar{x})^p (y - \bar{y})^q f(x, y)\, dx\, dy \qquad (2)

The angle between the major axis of the person and the horizontal axis x gives the orientation of the ellipse, and can be computed with the central moments of second order:

\theta = \frac{1}{2} \arctan\!\left( \frac{2\mu_{11}}{\mu_{20} - \mu_{02}} \right) \qquad (3)

The values Imin and Imax (the least and greatest moments of inertia) are computed by evaluating the eigenvalues of the covariance matrix [10]:

J = \begin{bmatrix} \mu_{20} & \mu_{11} \\ \mu_{11} & \mu_{02} \end{bmatrix} \qquad (4)

Then the major semi-axis a and the minor semi-axis b of the best fitting ellipse are given by [6]:

a = \left( \frac{4}{\pi} \right)^{1/4} \left[ \frac{(I_{\max})^3}{I_{\min}} \right]^{1/8}, \qquad b = \left( \frac{4}{\pi} \right)^{1/4} \left[ \frac{(I_{\min})^3}{I_{\max}} \right]^{1/8} \qquad (5)
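Equations (1)-(5) can be sketched for a discrete binary silhouette by replacing the integrals with sums over foreground pixels. The function below is an illustrative implementation; `ellipse_from_silhouette` and the toy mask are our own names, not code from the paper:

```python
import numpy as np

def ellipse_from_silhouette(mask):
    """Best-fit ellipse parameters of a binary silhouette, following
    Eqs. (1)-(5) with discrete sums in place of the integrals."""
    ys, xs = np.nonzero(mask)
    xc, yc = xs.mean(), ys.mean()              # centroid from Eq. (1) moments
    dx, dy = xs - xc, ys - yc
    mu20 = (dx ** 2).sum()                     # central moments, Eq. (2)
    mu02 = (dy ** 2).sum()
    mu11 = (dx * dy).sum()
    # Orientation, Eq. (3); arctan2 also handles the mu20 == mu02 case.
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    J = np.array([[mu20, mu11], [mu11, mu02]])  # covariance matrix, Eq. (4)
    i_min, i_max = np.linalg.eigvalsh(J)        # least/greatest moments of inertia
    a = (4.0 / np.pi) ** 0.25 * (i_max ** 3 / i_min) ** 0.125  # Eq. (5)
    b = (4.0 / np.pi) ** 0.25 * (i_min ** 3 / i_max) ** 0.125
    return theta, a, b, a / b

# Toy upright silhouette: taller than wide, so the major axis is vertical.
mask = np.zeros((240, 320), dtype=bool)
mask[60:180, 150:170] = True
theta, a, b, rho = ellipse_from_silhouette(mask)
print(round(abs(float(np.degrees(theta))), 1), bool(rho > 1.0))  # -> 90.0 True
```

For the upright rectangle, the orientation comes out at 90 degrees and the axis ratio exceeds 1, as expected for a standing posture.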

With a and b, we can also define the ratio of the ellipse ρ = a/b (related to its eccentricity). Fig. 1 shows examples of the approximated ellipse of the person in different situations. In order to detect the changes in the human shape that discriminate abnormal behavior from other normal activities, two values are computed:
• The orientation standard deviation σθ of the ellipse: if a person falls perpendicularly to the camera's optical axis, the orientation changes significantly and σθ will be high. If the person just walks, σθ will be low.
• The ratio standard deviation σρ of the ellipse: if a person falls parallel to the camera's optical axis, the ratio changes significantly and σρ will be high. If the person just walks, σρ will be low.
Note that σθ and σρ are computed over a 2-second duration (approximately 60 frames). As Fig. 1 demonstrates, the orientation standard deviation and the ratio standard deviation of the ellipse are clearly distinctive for the different postures.
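The two windowed deviations σθ and σρ can be sketched as follows; the synthetic "walking" and "falling" orientation/ratio sequences are illustrative, not data from the paper:

```python
import numpy as np

WINDOW = 60  # ~2 seconds at 30 frames per second

def shape_deviation(thetas, ratios):
    """Standard deviations of ellipse orientation and ratio over the
    most recent 2-second window (sigma_theta, sigma_rho)."""
    return float(np.std(thetas[-WINDOW:])), float(np.std(ratios[-WINDOW:]))

# Synthetic sequences (illustrative):
walk_theta = 90.0 + 0.5 * np.sin(np.linspace(0.0, 6.0, WINDOW))  # near-constant
walk_rho = 2.0 + 0.02 * np.cos(np.linspace(0.0, 6.0, WINDOW))
fall_theta = np.linspace(90.0, 0.0, WINDOW)   # upright -> lying
fall_rho = np.linspace(2.0, 0.8, WINDOW)

s_walk, _ = shape_deviation(walk_theta, walk_rho)
s_fall, _ = shape_deviation(fall_theta, fall_rho)
print(s_walk < s_fall)  # -> True: a fall produces a far larger sigma_theta
```

The same comparison applies to σρ for falls parallel to the optical axis, where the orientation barely changes but the axis ratio collapses.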

2) Vertical/Horizontal Velocity

The velocities in the vertical direction (Vv) and in the horizontal plane (Vh) are calculated, and two parameters are then determined: the peak velocities, and the lead time of fall activities, defined as the time between both velocities exceeding the peak velocity of normal activity (e.g., 1 m/s) and the time when the vertical position of the body reaches either the height of the mattress or its minimum.

Fig. 2. Absolute Peak Velocity in Horizontal Direction

There are two main observations for all the normal activities. First, the peak Vh and Vv are limited to within the 1 m/s range, as shown in Figs. 2 and 3. Second, these peak velocities did not occur at the same time. However, for all fall activities, in contrast to the normal

Fig. 1. Approximated Ellipses of Different Human Postures: Walk (σθ = 1.16, σρ = 0.18), Run (σθ = 2.53, σρ = 0.24), Stumble (σθ = 5.14, σρ = 0.56), Limp (σθ = 8.32, σρ = 0.53), Bend down (σθ = 19.32, σρ = 0.47), Sit down (σθ = 16.17, σρ = 0.62), Forward Fall (σθ = 9.10, σρ = 0.72), Backward Fall (σθ = 13.56, σρ = 0.81), Sideway Fall (σθ = 11.87, σρ = 0.79), Lie down (σθ = 12.13, σρ = 0.70)


activities, the peak Vh and Vv tended to occur simultaneously. Moreover, both Vh and Vv were found to exceed the 1 m/s range about 300-400 ms before the fall.

Fig. 3. Absolute Peak Velocity in Vertical Direction

3) Temporal Changes of Centroid

The centroid is an effective way to estimate object motion in space. We choose to track the centroid because it is a potentially powerful and intuitive cue when it can be obtained accurately, and it undergoes a large movement during a fall. Our method estimates the person's centroid in every frame of a video sequence and infers his/her behavior from it.

To localize the centroid of the person, the silhouette is first enclosed by a minimum bounding rectangle, and then the centroid point (xc, yc) of the bounding box is marked in each frame:

x_c = \frac{1}{N} \sum_{i=1}^{N} x_i, \qquad y_c = \frac{1}{N} \sum_{i=1}^{N} y_i \qquad (6)

The sequence of centroid points over successive frames forms this part of our feature vector. The curves obtained from the sequence of successive centroid positions are shown in Fig. 4. Note that the movement of the centroid is computed over a 2-second duration. We can infer from Fig. 4 that the trajectory of the centroid is completely different in the ten scenarios. For example, when a fall starts, a major change occurs in the vertical direction and the centroid point shows meaningful fluctuations, whereas while walking or running the movement of the centroid is far less pronounced.
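The centroid localization step can be sketched as follows; the function names and the toy silhouette are ours, not code from the paper:

```python
import numpy as np

def bbox_centroid(mask):
    """Centroid (xc, yc) of the minimum bounding rectangle of a silhouette."""
    ys, xs = np.nonzero(mask)
    return (float(xs.min() + xs.max()) / 2.0, float(ys.min() + ys.max()) / 2.0)

def centroid_track(masks):
    """Centroid point per frame; the sequence over a 2-second window
    forms this part of the feature vector."""
    return np.array([bbox_centroid(m) for m in masks])

mask = np.zeros((240, 320), dtype=bool)
mask[60:180, 150:170] = True   # rows 60..179, columns 150..169
print(bbox_centroid(mask))     # -> (159.5, 119.5)
```

The bounding-box center is cheap and robust to small silhouette holes, at the cost of ignoring the mass distribution that the pixel-average centroid would capture.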

C. Posture Classification

The Support Vector Machine (SVM) is a promising classification technique. Basically, there are two types of approaches for multi-class SVM: one considers all data in one optimization formulation; the other constructs and combines a series of binary SVMs, as in One-Against-All (OVA), One-Against-One (OVO) and DAGSVM. OVO is constructed by training binary SVMs between pairwise classes; thus, an OVO model consists of k(k-1)/2 binary SVMs for a k-class problem. Each of the k(k-1)/2 SVMs casts one vote for its favored class, and finally the class with the most votes wins. Furthermore, to resolve unclassifiable regions and improve the recognition rate, we introduce a fuzzy version of the OVO SVM.

1) Conventional OVO SVM

Although pairwise SVMs reduce the unclassifiable regions, such regions still exist [16]. In pairwise SVMs, we determine the decision functions for all combinations of class pairs. In determining the decision function for a class pair, we use the training data of the corresponding two classes. Let the decision function for class i against class j, with the maximum margin, be:

D_{ij}(x) = W_{ij}^T g(x) + b_{ij} \qquad (7)

where W_{ij} is the weight vector, g(x) is a mapping function that maps x into the feature space, b_{ij} is the bias term, and D_{ji}(x) = -D_{ij}(x). The regions R_i = \{ x \mid D_{ij}(x) > 0,\ j = 1, 2, \ldots, n,\ j \neq i \} do not overlap, and if x is in R_i, we classify x into class i. If x is not in any R_i, x is classified by voting. Namely, for the input vector x, D_i(x) is calculated as follows:

D_i(x) = \sum_{j \neq i,\ j=1}^{n} \mathrm{sign}(D_{ij}(x)) \qquad (8)

and x is classified into the class

\arg\max_{i=1,\ldots,n} D_i(x) \qquad (9)

If x \in R_i, then D_i(x) = n - 1 and D_k(x) < n - 1 for k \neq i, so x is classified into class i. But if none of the D_i(x) equals n - 1, equation (9) may be satisfied for multiple values of i, and x is unclassifiable.

2) Fuzzy OVO SVM
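The pairwise voting of Eqs. (8)-(9) can be sketched with a small decision matrix; the numeric values are illustrative:

```python
import numpy as np

def ovo_vote(D):
    """Conventional pairwise (OVO) voting, Eqs. (8)-(9).
    D is an (n, n) matrix with D[i, j] = D_ij(x) and D[j, i] = -D[i, j]."""
    n = D.shape[0]
    votes = np.array([int(sum(np.sign(D[i, j]) for j in range(n) if j != i))
                      for i in range(n)])
    return int(votes.argmax()), votes

# Three classes; class 1 wins both of its pairwise decisions (n - 1 = 2 votes).
D = np.array([[ 0.0, -0.8,  0.5],
              [ 0.8,  0.0,  1.2],
              [-0.5, -1.2,  0.0]])
winner, votes = ovo_vote(D)
print(winner, votes.tolist())  # -> 1 [0, 2, -2]
```

When no class reaches n - 1 votes, `argmax` silently breaks the tie, which is exactly the unclassifiable-region problem the fuzzy variant addresses.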

To resolve the unclassifiable regions we introduce a membership function. To this aim, for the optimal


Fig. 4. Centroid Displacement: (a) Unusual Behavior (Fall), (b) Abnormal Gait, (c) Normal Daily Activity


separating hyperplane D_{ij}(x) = 0\ (i \neq j), one-dimensional membership functions m_{ij}(x) are defined in the directions orthogonal to D_{ij}(x) = 0 as follows:

m_{ij}(x) = \begin{cases} 1 & \text{if } D_{ij}(x) \geq 1 \\ D_{ij}(x) & \text{otherwise} \end{cases} \qquad (10)

The class-i membership function of x is defined by the minimum or average operation on m_{ij}(x)\ (j \neq i,\ j = 1, 2, \ldots, n):

m_i(x) = \min_{j=1,\ldots,n,\ j \neq i} m_{ij}(x) \qquad (11)

m_i(x) = \frac{1}{n-1} \sum_{j \neq i,\ j=1}^{n} m_{ij}(x) \qquad (12)

Now an unknown datum x is classified into the class

\arg\max_{i=1,\ldots,n} m_i(x) \qquad (13)

Equation (11) is equivalent to

m_i(x) = \min\!\left( 1,\ \min_{j=1,\ldots,n,\ j \neq i} D_{ij}(x) \right) \qquad (14)

Because m_i(x) = 1 holds for only one class, classification with the minimum operator is equivalent to classifying x into the class

\arg\max_{i=1,\ldots,n}\ \min_{j=1,\ldots,n,\ j \neq i} D_{ij}(x) \qquad (15)

So the unclassifiable region is resolved for the fuzzy SVM with the minimum operator.
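The fuzzy minimum-operator rule of Eqs. (14)-(15) can be sketched as follows. The example point receives zero net votes for every class under plain pairwise voting, yet the fuzzy rule classifies it; the matrix values are illustrative:

```python
import numpy as np

def fuzzy_ovo_classify(D):
    """Fuzzy OVO classification with the minimum operator, Eqs. (14)-(15):
    m_i(x) = min(1, min_{j != i} D_ij(x)), then pick argmax_i m_i(x)."""
    n = D.shape[0]
    m = np.array([min(1.0, min(D[i, j] for j in range(n) if j != i))
                  for i in range(n)])
    return int(m.argmax()), m

# A point that is unclassifiable by plain voting: every class wins exactly
# one of its two pairwise decisions, so all vote totals tie at zero.
D = np.array([[ 0.0,  0.3, -0.2],
              [-0.3,  0.0,  0.4],
              [ 0.2, -0.4,  0.0]])
winner, m = fuzzy_ovo_classify(D)
print(winner)  # -> 0: class 0 has the least negative worst-case decision value
```

Intuitively, each class is scored by its worst pairwise decision value, so the point is assigned to the class it violates least, and ties of the voting scheme disappear.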

IV. EXPERIMENTAL RESULTS AND DISCUSSION

A. Data Acquisition

In order to validate the overall system performance, we applied the proposed approach to a set of videos recorded in our lab. The dataset was collected over two weeks, under different lighting and weather conditions. 48 subjects of different heights, weights and genders, aged 20 to 30, were asked to participate in the project. Each of the 10 kinds of activities was repeated 5 times in the experimental space, and finally 50 video clips were captured and recorded in AVI format at a resolution of 320×240 pixels and 30 frames per second. Figure 4 illustrates examples of each motion. The dataset for our experiment is composed of video sequences representing three main categories:

a) Normal Daily Activity: Five different normal daily

activities have been considered: walking at normal speed, running, bending down to catch something on the floor and then rising up, sitting down on the floor and then standing up, and lying down on the floor.

b) Unusual Behavior (Fall): Although falling scenarios are quite varied, we can categorize them into three classes. Detection of the various types of falls is valuable in

clinical studies to determine fall incidence and the costs associated with the treatment of fall injuries. As most falls occur during intentional movements initiated by the person, they happen mainly forward or backward: stumbling over an obstacle while walking, or slipping backward on wet ground. In some cases the fall occurs sideways, for instance during a badly controlled sit-to-stand transfer; in this case, the person frequently tries to grip the wall [1].

c) Abnormal Gait: Subjects were asked to walk in two special unusual ways: stumbling and limping. In the first case, subjects were asked to walk as if they were suffering a balance deficiency such as dizziness. In these situations the person is unable to keep his balance or the symmetry and synchrony of his movement, and the body movements suggest the person is in a dubious condition. Limping, in contrast, may be caused by unequal leg lengths, pain when walking, muscle weakness, disorders of proprioception, or stiffness of joints. Someone taking a step with a limp appears to begin to kneel and then quickly rise up on the other leg; to bystanders the process may appear arduous and painful.

B. Performance Evaluation

The experimental results show that the system has a robust recognition rate in detecting the occurrence of the considered events. Table I presents the experimental results: Na refers to the number of actions, Nc is the number of correctly detected events, Nf is the number of falsely detected events, and R is the recognition rate (%).

TABLE I
RECOGNITION RATE OF DIFFERENT BEHAVIORS

Events          Na    Nc    Nf    R
Walk            120   108   12    90.00
Run             120   104   16    86.66
Stumble         120   100   20    83.33
Limp            120   101   19    84.16
Forward Fall    120   109   11    90.83
Backward Fall   120   112    8    93.33
Sideway Fall    120   104   16    86.66
Bend down       120   102   18    85.00
Sit down        120   103   17    85.83
Lie down        120   114    6    95.00

Also, to better understand the misclassifications, the confusion matrix of the classifier output is illustrated in Table II. Note that E1-E10 respectively represent the events Walk, Run, Stumble, Limp, Forward Fall, Backward Fall, Sideway Fall, Bend down, Sit down and Lie down.


TABLE II
CONFUSION MATRIX

V. CONCLUSIONS AND FUTURE WORK

In this paper, a novel and efficient approach for activity recognition, principally dedicated to fall detection, is proposed. The combination of motion and change in the human shape gives crucial information on human activities. Our experiments indicate that the fuzzy multi-class SVM method is more suitable for human motion recognition than the other methods. Our fall detection system has proven its robustness on realistic image sequences of ten different normal, abnormal and unusual human movement patterns. The reliable average recognition rate of 88.08% obtained in our experiments underlines the satisfactory performance and efficiency of our system. Moreover, while existing fall detection systems are only able to detect the occurrence of a fall, the proposed system is able to detect the type of fall incident (forward, backward or sideways).

Future work will include multi-person monitoring, i.e., the ability to monitor more than one person in the scene and to handle occlusion. Using additional features is also a subject to be explored: although the human shape has thus far proven to be a useful basis for feature extraction, additional features may be necessary or useful.

REFERENCES

[1] N. Noury, A. Fleury, P. Rumeau, A. Bourke, G. Laighin, V. Rialle, J. Lundy, "Fall detection - principles and methods," Conf. Proc. IEEE Eng. Med. Biol. Soc., 2007, pp. 1663-1666.

[2] Arie Hans Nasution and S. Emmanuel; Intelligent Video Surveillance for Monitoring Elderly in Home Environments, International Workshop on Multimedia Signal Processing (MMSP), Greece, October 2007.

[3] Sixsmith A, Johnson N. “Smart sensor to detect the falls of the elderly”, IEEE Pervasive Computing, vol. 3, no. 2, pp. 42–47, April-June 2004.

[4] Caroline Rougier, Jean Meunier, Alain St-Arnaud, Jacqueline Rousseau, “Fall Detection from Human Shape and Motion History Using Video Surveillance” Advanced Information Networking and Applications Workshops, 2007, AINAW '07. 21st International Conference on, Vol. 2 (2007), pp. 875-880.

[5] B. Töreyin, Y. Dedeoglu, and A. Çetin, "HMM based falling person detection using both audio and video," IEEE International Workshop on Human-Computer Interaction, Beijing, China, 2005.

[6] S.-G.Miaou, P.-H. Sung, and C.-Y. Huang, “A Customized Human Fall Detection System Using Omni-Camera Images and Personal Information” Proc of Distributed Diagnosis and Home Healthcare(D2H2) Conference, 2006

[7] R. Cucchiara, A. Prati, and R. Vezzani, "An intelligent surveillance system for dangerous situation detection in home environments," Intelligenza Artificiale, vol. 1, no. 1, pp. 11-15, 2004.

[8] T. Lee and A. Mihailidis. An intelligent emergency response system: preliminary development and testing of automated fall detection. Journal of telemedicine and telecare, 11(4):194–198, 2005.

[9] Nait CH, McKenna SJ. “Activity summarisation and fall detection in a supportive home environment”, Int. Conf. on Pattern Recognition (ICPR), 2004.

[10] Tao, J., M.Turjo, M.-F. Wong, M.Wang and Y.-P Tan, “Fall Incidents Detection for Intelligent Video Surveillance” , ICICS2005

[11] C.Rougier, J.Meunier, A. St-Arnaud, J.Rousseau, “Monocular 3D Head Tracking to Detect Falls of Elderly People”, International Conference of IEEE Engineering in Medicine and Biology Society, Sept 2006

[12] Lin,C.-W.,et al.,"Compressed-Domain Fall Incident Detection for Intelligent Home Surveillance”. Proceedings of IEEE International Symposium on Circuits and Systems,ISCAS 2005,2005:p.2781-3784.

[13] L. Wang, From blob metrics to posture classification to activity profiling, Proceedings of the 18th International Conference on Pattern Recognition (ICPR, Poster), IV: 736-739, 2006

[14] A. Madabhushi and J.K. Aggarwal, "A Bayesian approach to human activity recognition," Proc. Second IEEE Workshop on Visual Surveillance, pp.25–32, June 1999.

[15] M.Shakeri, H.Deldari, H.Foroughi, "A Novel Fuzzy Background Subtraction Method Based on Cellular Automata for Urban Traffic Applications", 9th International IEEE Conference on Signal Processing, ICSP'08

[16] S.Abe, S.Singh, Support Vector Machine for Pattern Classification, Springer-Verlag London Limited 2005

                          Classified as
        E1   E2   E3   E4   E5   E6   E7   E8   E9   E10
E1     108    6    4    2    0    0    0    0    0    0
E2       9  104    4    3    0    0    0    0    0    0
E3      10    6  100    4    0    0    0    0    0    0
E4       8    5    6  101    0    0    0    0    0    0
E5       0    0    0    0  109    0    6    4    1    0
E6       0    0    0    0    0  112    4    0    0    4
E7       0    0    0    0    8    4  104    4    0    0
E8       0    0    0    0    2    4    6  102    6    0
E9       0    0    0    0    5    0    4    6  103    2
E10      0    0    0    0    0    4    2    0    0  114
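As a consistency check on Table I, the per-class recognition rates follow directly from the confusion matrix: each row sums to the 120 trials per event, the diagonal entries are Nc, and the mean of the resulting rates reproduces the 88.08% average reported in the conclusions. A sketch in Python:

```python
import numpy as np

# Rows: true events E1..E10; columns: classified as E1..E10 (Table II).
confusion = np.array([
    [108,   6,   4,   2,   0,   0,   0,   0,   0,   0],
    [  9, 104,   4,   3,   0,   0,   0,   0,   0,   0],
    [ 10,   6, 100,   4,   0,   0,   0,   0,   0,   0],
    [  8,   5,   6, 101,   0,   0,   0,   0,   0,   0],
    [  0,   0,   0,   0, 109,   0,   6,   4,   1,   0],
    [  0,   0,   0,   0,   0, 112,   4,   0,   0,   4],
    [  0,   0,   0,   0,   8,   4, 104,   4,   0,   0],
    [  0,   0,   0,   0,   2,   4,   6, 102,   6,   0],
    [  0,   0,   0,   0,   5,   0,   4,   6, 103,   2],
    [  0,   0,   0,   0,   0,   4,   2,   0,   0, 114],
])
totals = confusion.sum(axis=1)                 # 120 trials per event
rates = 100.0 * np.diag(confusion) / totals    # per-class recognition rate R
print(round(float(rates.mean()), 2))           # -> 88.08
```

The block structure of the matrix is also visible: gait events (E1-E4) are only ever confused with other gait events, and falls (E5-E7) only with other fall-like or floor-level events.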
