CommonSens: Personalisation of Complex Event Processing in Automated Home Care∗

Jarle Søberg, Vera Goebel, and Thomas Plagemann

Department of Informatics

University of Oslo, Norway

{jarleso, goebel, plageman}@ifi.uio.no

June 2010

Abstract

Automated home care is an emerging application domain for data management for sensors. We present a complex event processing (CEP) system and query language that provide the abstractions of capabilities and locations of interest to facilitate the task of the application programmer through reuse, easier personalisation, and system supported sensor selection. To achieve these goals we have developed three independent models that represent the concepts an application programmer can use: (1) event model, (2) environment model, and (3) sensor model. The sensor capabilities and locations of interest allow us to decouple the event specification from a particular instance. The system investigates the particular environment and chooses sensors that provide the correct capabilities and cover the locations of interest. The personalisation is achieved by updating the values in the conditions, the locations of interest and the time specifications in the queries. We demonstrate with a use-case how easy it is to personalise a query in two different environments, and use our proof-of-concept implementation to analyse the performance of our system.

1 Introduction

Recent developments in sensor technology, video and audio analysis provide a good basis for application domains like smart homes and automated home care. These application domains involve reading sensor data from a great number of sensors that are placed in the home. We use established concepts from complex event processing (CEP) to process and analyse readings from these sensors.

∗Technical Report #396, ISBN 82-7368-357-5, Department of Informatics, University of Oslo, June 2010


However, there are still many unsolved challenges that hinder successful broad introduction of home care applications based on CEP systems. In this paper we address three challenges.

First, there are many different kinds of sensors with many different capabilities. A capability identifies the real world phenomena the sensor can measure. To complicate things, different types of sensors might have the same capability and some sensors might have several capabilities. For instance, motion can be detected by cameras, motion detectors, or by aggregating information from several other types of sensors, e.g. by comparing signal strength from a radio worn by the monitored person with a fixed set of base stations. On the other hand, cameras can provide additional capabilities like object and face recognition. Therefore, it is important to decouple the sensor from its capabilities in the queries. During instantiation, the system should automatically relate the capabilities addressed in the queries to the particular instance.

Second, any kind of environment has an impact on the coverage area of sensors, e.g. walls reduce the signal strength of radio waves and block light. This makes sensor placement a hard problem, both with respect to the initial sensor placement in the home and to checking whether a particular application instance has a proper sensor installation. The locations of interest, i.e., the places where events are expected to happen, also differ among the instances.

Third, to describe behavioural patterns of a monitored person a lot of domain knowledge is necessary. Storf et al. [4] report that they needed 92 rules to describe toilet usage, personal hygiene and preparation of meals. However, the activities of interest are relevant for many people; for instance, fall detection and medication usage are in general the most important activities to recognise [6]. Reuse of queries that describe these activities, as well as the ones investigated by Storf et al., would simplify the work of the application programmer considerably. Therefore, we propose utilising general queries, which address capabilities, locations of interest and temporal properties, and which only need minor adjustments to match the current instance.

In order to meet these challenges we define three independent models: (1) an event model to identify states and state transitions in the real world that are of interest, (2) an environment model to describe the physical dimensions of the environment and the impact it can have on various signal types, and (3) a sensor model to describe the capabilities of sensors, their coverage and the signal types they are using. Our system implements the models. The sensor model separates the sensors from their capabilities. We combine the sensor model and the environment model to address locations of interest and placement of sensors. In addition, the core contributions of this work include an event language that uses the models and supports reuse of the model instances. The system is extensible so that any kind of emerging sensor and multimedia analysis algorithm can be integrated.


Figure 1: The relation of the core elements in our conceptual model of the real world.

The remainder of this paper is organised as follows: In Section 2 we describe our three models and the event language. To support our claims, Section 3 presents a use-case evaluation and a performance evaluation of our proof-of-concept implementation. We discuss related work, conclude and address future work in Section 4.

2 Models

We define three models to identify and describe the concepts and semantics of events, the environment where events happen, and the sensors that are used to obtain information about the environment.

2.1 Event Model

In our conceptual model of the real world, everything that happens can be modelled through states and state transitions.

Definition 2.1 An event e is a state or state transition in which someone has declared interest. A state is a set of typed variables and data values. A state transition occurs when one or more of the data values in a state change so that they match another state.

Which type of variables to use depends on the instantiation of the system. Not all states and state transitions in the real world are of interest. Therefore, we view events as subsets of these states and state transitions. Figure 1 relates the core elements of the approach to our conceptual model of the real world. The application programmer uses declarative queries to describe events.

Our event definition works well with existing event processing paradigms. For example, in publish/subscribe systems, events can be seen as publications that someone has subscribed to.

To identify when and where an event occurs and can be detected, it is important to specify temporal and spatial properties. For addressing temporal properties we use timestamps, which can be seen as discrete points in the continuous time domain, and time intervals.


Definition 2.2 A timestamp t is an element in the continuous time domain T: t ∈ T. A time interval τ ⊂ T is the time span between two timestamps tb (begin) and te (end). τ has a duration δ = te − tb.

To distinguish events from other states and state transitions, it is important to have knowledge about the spatial properties of the events, i.e., where in the environment they happen. These spatial properties are specified through locations of interest.

Definition 2.3 A location of interest loi is a set of coordinates describing the boundaries of an interesting location in the environment.

In order to simplify application development it is a common approach to specify high level events. Higher level events, in turn, are composed out of lower level events [3]. These two event types are called atomic events and complex events. An event that cannot be further divided into lower level events is called an atomic event.

Definition 2.4 An atomic event eA is an event with time and a location of interest: eA = (e, loi, tb, te). For the attributes, (tb, te ∈ T ∨ ∅) and (|loi| = 1 ∨ loi = ∅). If used, the timestamps are ordered so that tb ≤ te.

Atomic events can be related concurrently or consecutively.

Definition 2.5 Two atomic events eAi and eAj are concurrent iff

∃tu, (tu ≥ eAi.tb) ∧ (tu ≥ eAj.tb) ∧ (tu ≤ eAi.te) ∧ (tu ≤ eAj.te).

For two atomic events to be concurrent there is a point in time where there is an overlap between both atomic events. Two events are consecutive when they do not overlap.

Definition 2.6 Two atomic events eAi and eAj are consecutive iff eAi.te < eAj.tb.

A set of atomic events can be part of a complex event.

Definition 2.7 A complex event eC is a set of N atomic events: eC = {eA0, . . . , eAN−1}.
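To make Definitions 2.4 to 2.7 concrete, the following sketch encodes atomic and complex events and the concurrency and consecutiveness checks in Java. The class and field names are our own and the location of interest is simplified to a set of location names; this is an illustration, not code from the prototype.

import java.util.List;
import java.util.Set;

// Illustrative encoding of Definitions 2.4-2.7; names are ours, loi simplified to location names.
public final class EventModel {

    /** Atomic event e_A = (e, loi, tb, te) as in Definition 2.4. */
    public record AtomicEvent(String state, Set<String> loi, double tb, double te) {}

    /** Complex event e_C = {e_A0, ..., e_AN-1} as in Definition 2.7. */
    public record ComplexEvent(List<AtomicEvent> atomicEvents) {}

    /** Definition 2.5: there exists a time t_u contained in both intervals. */
    public static boolean concurrent(AtomicEvent a, AtomicEvent b) {
        return a.tb() <= b.te() && b.tb() <= a.te();
    }

    /** Definition 2.6: a ends strictly before b begins. */
    public static boolean consecutive(AtomicEvent a, AtomicEvent b) {
        return a.te() < b.tb();
    }

    public static void main(String[] args) {
        AtomicEvent fall = new AtomicEvent("FallDetected", Set.of("LivingRoom"), 10.0, 12.0);
        AtomicEvent alarm = new AtomicEvent("AlarmRaised", Set.of("LivingRoom"), 11.0, 15.0);
        System.out.println(concurrent(fall, alarm));   // true: the intervals overlap between t = 11 and t = 12
        System.out.println(consecutive(fall, alarm));  // false: fall does not end before alarm begins
    }
}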

2.2 Environment Model

The environment is described by a configuration of objects. These objects could be walls, furniture, etc. Once an object is defined, it can be reused in any other instantiation. Objects have two core properties: their shape, and how they impact signals, i.e., permeability.


Definition 2.8 A shape s is a set of coordinates:

s = {(x, y, z)0, . . . , (x, y, z)N−1}

The triplets in s describe the convex hull (boundary) of the shape. All triplet values are relative to (x, y, z)0, which is referred to as base.

While each object has one shape, it can have different permeability values. For instance, a wall stops light signals while a radio signal might only be partially reduced by the wall. Therefore, it is important to identify how permeable objects are regarding different types of signals.

Definition 2.9 Permeability p is a tuple: p = (val, γ). val ∈ [−1, 1] is the value of the permeability. γ denotes which signal type this permeability value is valid for.

The lower the value p.val (the value val in p) is, the lower the permeability. If the permeability value is 0, the signal does not pass, and if p.val is less than 0, the signal is reflected. The object is defined as follows.

Definition 2.10 An object ξ is a tuple: ξ = (P, s). P = {p0, . . . , pN−1} is a set of permeability tuples. s is the shape.

We use ξ.P to support that an object can have permeability values for many different signal types. Finally, the environment is defined as an instance of a set of related objects.

Definition 2.11 An environment α is a set of objects: α = {ξ0, . . . , ξN−1}. Every ξi ∈ α \ {ξ0} is relative to ξ0.

In the definition of the shape s we state that all the triplet values in the shape ξi.s of an object are relative to ξi.s.(x, y, z)0. In an environment α where ξi is located, ξi.s.(x, y, z)0 is relative to ξ0.s.(x, y, z)0, which is set to (0, 0, 0).
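A minimal sketch of how Definitions 2.8 to 2.11 could be represented in code follows; the class and field names are ours, not taken from the prototype, and the wall in the example reuses the permeability values from the use-case study in Section 3.1.

import java.util.List;
import java.util.Map;

// Illustrative encoding of Definitions 2.8-2.11; names are ours, not the prototype's.
public final class EnvironmentModel {

    /** A coordinate triplet relative to the shape's base (x, y, z)_0 (Definition 2.8). */
    public record Point(double x, double y, double z) {}

    /** Shape s: the first point is the base, the rest describe the convex hull relative to it. */
    public record Shape(List<Point> hull) {}

    /** Permeability p = (val, gamma): val in [-1, 1], gamma is the signal type (Definition 2.9). */
    public record Permeability(double val, String signalType) {}

    /** Object xi = (P, s): permeability values per signal type plus a shape (Definition 2.10). */
    public record EnvObject(Map<String, Permeability> permeabilities, Shape shape) {}

    /** Environment alpha: object 0 is the reference; all other bases are relative to it (Definition 2.11). */
    public record Environment(List<EnvObject> objects) {}

    public static void main(String[] args) {
        // A wall that blocks light completely and lets 1% of a radio signal through,
        // matching the permeability values used in the use-case study (Section 3.1).
        EnvObject wall = new EnvObject(
            Map.of("light", new Permeability(0.0, "light"),
                   "radio", new Permeability(0.01, "radio")),
            new Shape(List.of(new Point(0, 0, 0), new Point(4, 0, 0),
                              new Point(4, 0, 3), new Point(0, 0, 3))));
        Environment alpha = new Environment(List.of(wall));
        System.out.println(alpha.objects().size() + " object(s) in the environment");
    }
}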

2.3 Sensor Model

Sensors read analogue signals from the environment and convert these into data tuples. Hence, a data tuple is the information that a sensor has obtained about a state in the real world. In addition to this, the sensor model should achieve two objectives. First, when we have described the events of interest, the system has to determine which type of sensors to use. Second, events might happen over time, so we want the sensors to utilise historical and stored data together with recent data tuples. Since a sensor has to produce data tuples, we use the terms sensor and tuple source interchangeably.

In order to meet the first objective, each sensor should provide a set of capabilities.


Definition 2.12 A capability c is the type of state variables a sensor can observe. This is given by a textual description: c = (description).

Capabilities like temperature reading or heart frequency reading return values of type integer. However, capabilities might be much more complex, like face recognition or fall detection. To capture all these possibilities in our model we use a string to describe sensors, such that the description in a particular implementation can be anything from simple data types, such as integers, to XML and database schemas. The application programmer should not address particular sensors, but rather sensor capabilities. To enable the system to bind capabilities to the correct sensors it is necessary to describe capabilities based on a well defined vocabulary (or even on an ontology).

In order to meet the second objective, we define three distinct types of sensors. The physical sensor is responsible for converting the analogue signals to data tuples. The external source only provides stored data, and the logical sensor processes data tuples from all the three sensor types.

Definition 2.13 A physical sensor φP is a tuple: φP = (cov, γ, f, C). cov denotes the coverage of the physical sensor. γ is the type of signal this tuple source sends or receives. f is the maximal sampling frequency, i.e., how many data tuples the physical sensor can produce every second. C = {c0, . . . , cN−1} is the set of capabilities that the sensor provides.

The physical sensor is limited to observing single states in the home. The coverage of a physical sensor can either address specific objects or an area. The latter is denoted coverage area. Usually, the producer of the physical sensor defines the coverage area as it is in an environment without obstacles. When the physical sensor is placed in the environment, the coverage area might be reduced due to objects in the environment that have permeability tuples that match the signal type of the current physical sensor.
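As an illustration of how such coverage reduction could be computed, the sketch below attenuates a nominal coverage range by the permeability value of each obstacle between the sensor and a point. This is our simplification for illustration; the report does not specify the actual algorithm used by the system.

// Illustrative coverage attenuation: our simplification, not the report's actual algorithm.
// Each obstacle between the sensor and the target scales the remaining signal by its
// permeability value for the sensor's signal type (Definition 2.9); 0 blocks the signal entirely.
public final class CoverageSketch {

    /** Returns the fraction of the nominal signal that survives the given obstacles. */
    static double remainingSignal(double[] permeabilitiesOnPath) {
        double remaining = 1.0;
        for (double p : permeabilitiesOnPath) {
            if (p <= 0.0) {          // blocked (0) or reflected (negative): nothing passes
                return 0.0;
            }
            remaining *= p;          // partially permeable obstacle reduces the signal
        }
        return remaining;
    }

    public static void main(String[] args) {
        double nominalRangeMetres = 10.0;
        // A radio signal passing one wall with permeability 0.01 (the use-case value).
        double effective = nominalRangeMetres * remainingSignal(new double[] {0.01});
        System.out.printf("Effective range behind the wall: %.2f m%n", effective);
    }
}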

The external source φE includes data that is persistently stored. The main purpose of an external source is to return data tuples from the storage.

Definition 2.14 The external source φE is an attribute: φE = (C). C is the set of the capabilities it provides.

In contrast to the physical sensor, the external source does not obtain readings directly from the environment. Thus, we do not include attributes like coverage. For instance, we allow DBMSs and file systems to act as external sources, as long as they provide data tuples that can be used by our system. The logical sensor performs computations on data tuples from other tuple sources.

Definition 2.15 The logical sensor is a tuple: φL = (Cd, aggL, Cp, f). Cd is the set of all the capabilities it depends on and aggL is a user defined function that aggregates the data tuples from Cd. Cp is the set of capabilities it provides. f is defined as for physical sensors but depends on aggL and Cd.

Figure 2: Examples of capability hierarchies for detecting falls and taking medication (PS: physical sensor, ES: external source, LS: logical sensor).

When the logical sensor receives a request to produce a data tuple, it requests its tuple sources, processes the values it receives and creates a new data tuple that contains the results. The logical sensor does not depend on specific tuple sources, only the capabilities, i.e., any type of tuple source can be part of φL.Cd as long as it provides the correct capabilities.

We illustrate the sensor model and capabilities with an example taken from automated home care. Figure 2 shows capabilities for detecting falls and that medication is taken. FallDetected is provided by a logical sensor that either depends on the accelerometer capabilities together with the information about the monitored person (personID), or the combination of the Camera capability and external sources that provide capabilities for face recognition and fall detection. TakingMedication is provided by a physical sensor, which e.g. uses RFID to detect that the monitored person holds the medication, or a logical sensor that depends on the capabilities Camera and MedicationTaken.
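The following sketch shows one way the left branch of Figure 2 could look in code: a logical sensor that provides FallDetected by aggregating tuples from an Accelerometer source and a User source. The interfaces, class names, String-keyed tuples and the 9.0 threshold are our own and are not taken from the prototype's API.

import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

// Illustrative sketch of the tuple-source hierarchy (Definitions 2.13-2.15); names are ours.
public final class SensorModelSketch {

    /** Any tuple source: a physical sensor, an external source or a logical sensor. */
    interface TupleSource {
        Set<String> capabilities();
        Map<String, Object> produceTuple();
    }

    /** Logical sensor phi_L: aggregates tuples from sources matched by capability, not identity. */
    record LogicalSensor(Set<String> provides, List<TupleSource> dependencies,
                         Function<List<Map<String, Object>>, Map<String, Object>> agg)
            implements TupleSource {
        public Set<String> capabilities() { return provides; }
        public Map<String, Object> produceTuple() {
            List<Map<String, Object>> inputs = dependencies.stream()
                    .map(TupleSource::produceTuple).toList();
            return agg.apply(inputs);          // user defined aggregation function agg_L
        }
    }

    public static void main(String[] args) {
        // A physical accelerometer and an external source with the monitored person's id.
        TupleSource accelerometer = source(Set.of("Accelerometer"), Map.of("value", 9.4));
        TupleSource user = source(Set.of("User"), Map.of("personID", "Alice"));

        // FallDetected composed from Accelerometer + User, as in the left branch of Figure 2;
        // the 9.0 threshold is an arbitrary placeholder.
        LogicalSensor fallDetected = new LogicalSensor(Set.of("FallDetected"),
                List.of(accelerometer, user),
                inputs -> Map.of("personID", inputs.get(1).get("personID"),
                                 "fall", (double) inputs.get(0).get("value") > 9.0));
        System.out.println(fallDetected.produceTuple());
    }

    static TupleSource source(Set<String> caps, Map<String, Object> tuple) {
        return new TupleSource() {
            public Set<String> capabilities() { return caps; }
            public Map<String, Object> produceTuple() { return tuple; }
        };
    }
}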

2.4 Query Based Event Language

In this section we describe our CEP query language, which uses capabilities from the sensor model, and locations of interest and timestamps from the event model to support reuse. In order to detect an event, the application programmer assigns conditions to the capabilities. The query that addresses one atomic event is called an atomic query.

Definition 2.16 An atomic query qA is described by a tuple: qA = (cond, loi, tb, te). cond is a triplet (c, op, val), where c is the capability, op ∈ {=, ≠, <, ≤, >, ≥} is the operator, and val is the expected value of the capability. loi, tb, te specify the spatial and temporal properties.

When it is needed to describe complex events, two or more atomic queries have to be used. These are called complex queries. Complex queries describe complex events. In the following, the term query, denoted q, covers both atomic and complex queries.

Definition 2.17 A complex query qC is an ordered set of atomic queries, and logical operators and relations ρi between them: qC = {qA0 ρ0 . . . ρN−2 qAN−1}. If the complex query only consists of one atomic query, ρ0 = ∅.

We use one temporal relation to describe that two atomic events are consecutive, i.e., the followed by query relation (→).

Definition 2.18 The followed by relation →: {qz → qx | qz.te < qx.tb}.

The logical operators in the query language are ∧, ∨ and ¬. If any other logical operator is needed, these three operators can be used to define it.

With respect to the temporal properties of the events, the application programmer should be able to write two types of queries. The first type should explicitly denote when an event should happen, e.g. between 16:00h and 18:00h. The other type should denote how long an event lasts once it is detected. These two types of queries are called timed queries and δ-timed queries. A timed query has concrete tb and te values, which means that the two timestamps denote exactly the duration of the event. For a δ-timed query, δ tells how long the event should last once it starts; this is denoted by overloading the semantics and setting only tb. A timed or δ-timed query can be percent-registered (P-registered) or not. A query that is not P-registered requires that all the readings from the relevant tuple sources match the state during the time interval. Depending on the query, P can indicate that within the interval, the state should be satisfied for a minimum or a maximum percentage of the time window. A complex query can also be timed or δ-timed, and P-registered.
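To illustrate how a P-registered condition could be checked, the sketch below counts how many readings in a time window satisfy the condition and compares the fraction against the threshold. This is our reading of the semantics, not code from the prototype, and the method and parameter names are invented.

import java.util.List;
import java.util.function.Predicate;

// Our reading of P-registration (not the prototype's code): within the query's time window,
// the fraction of readings that satisfy the condition is compared against the threshold P.
public final class PRegistration {

    /** True if at least (or at most) the given fraction of readings matches the condition. */
    static boolean pRegistered(List<Double> readings, Predicate<Double> condition,
                               double fractionP, boolean atLeast) {
        long matching = readings.stream().filter(condition).count();
        double fraction = readings.isEmpty() ? 0.0 : (double) matching / readings.size();
        return atLeast ? fraction >= fractionP : fraction <= fractionP;
    }

    public static void main(String[] args) {
        // Readings of a hypothetical DetectPerson capability over one window: 1.0 = detected.
        List<Double> window = List.of(1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0);
        // A "min 80%" style query: the person must be detected at least 80% of the window.
        System.out.println(pRegistered(window, v -> v == 1.0, 0.80, true)); // true (8 of 10 readings)
    }
}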

In addition, the application programmer can use the five concurrency operators to describe temporal dependency between queries.

Definition 2.19 The concurrency operators:

{Equals(qz, qx) | qz.tb = qx.tb ∧ qz.te = qx.te}
{Starts(qz, qx) | qz.tb = qx.tb ∧ qz.te < qx.te}
{Finishes(qz, qx) | qz.tb < qx.tb ∧ qz.te = qx.te}
{During(qz, qx) | qz.tb > qx.tb ∧ qz.te < qx.te}
{Overlaps(qz, qx) | qz.tb > qx.tb ∧ qz.te > qx.te}
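The five operators translate directly into interval predicates, as in the following short sketch; the Interval helper and the method names are ours, and the example intervals mirror the use-case time steps from Section 3.2.

// Straightforward encoding of the five concurrency operators from Definition 2.19, written
// over plain (tb, te) intervals; the Interval record and method names are our own helpers.
public final class ConcurrencyOperators {

    record Interval(double tb, double te) {}

    static boolean equalsOp(Interval z, Interval x)  { return z.tb() == x.tb() && z.te() == x.te(); }
    static boolean starts(Interval z, Interval x)    { return z.tb() == x.tb() && z.te() <  x.te(); }
    static boolean finishes(Interval z, Interval x)  { return z.tb() <  x.tb() && z.te() == x.te(); }
    static boolean during(Interval z, Interval x)    { return z.tb() >  x.tb() && z.te() <  x.te(); }
    static boolean overlaps(Interval z, Interval x)  { return z.tb() >  x.tb() && z.te() >  x.te(); }

    public static void main(String[] args) {
        // Section 3.2: the medication is taken During the stay at the medicine cupboard.
        Interval takingMedication = new Interval(4, 7);
        Interval atCupboard = new Interval(3, 10);
        System.out.println(during(takingMedication, atCupboard)); // true
    }
}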


3 Evaluation

We describe a use-case study to demonstrate the low effort required from the application programmer to personalise the queries. Furthermore, we report on a first performance evaluation of our proof-of-concept implementation. For the sensor model and the environment model we have implemented a GUI that allows the application programmer to move and place sensors so that they cover locations of interest and objects in the environment. The event processor is a state machine that is generated from the query. The system relates the capabilities in the implementation with the sensors in the current environment. Depending on the data tuples the state machine receives from the sensors, it either performs a state transition or remains in the current state. The implementation is written in Java and runs on a dual core 1.2 GHz machine with 2 GB memory.
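The report does not show the generated state machines, but the sketch below indicates one possible shape of such a machine for a followed-by style query: each batch of incoming tuples either triggers a transition to the next state or leaves the machine where it is. All names are ours and the example query is hypothetical.

import java.util.Map;
import java.util.function.Predicate;

// One possible shape of a query-generated state machine (names are ours): each incoming batch
// of data tuples either triggers a transition to the next state or leaves the machine unchanged.
public final class QueryStateMachine {

    private final Predicate<Map<String, Object>>[] states;
    private int current = 0;

    @SafeVarargs
    QueryStateMachine(Predicate<Map<String, Object>>... states) { this.states = states; }

    /** Feed one time step's tuples; returns true once the whole query has been satisfied. */
    boolean onTuples(Map<String, Object> tuples) {
        if (current < states.length && states[current].test(tuples)) {
            current++;                          // state transition
        }
        return current == states.length;        // complex event detected
    }

    public static void main(String[] args) {
        // (DetectPerson = Alice) followed by (TakingMedication = Med1_0_0), as in Section 3.1.
        QueryStateMachine machine = new QueryStateMachine(
                t -> "Alice".equals(t.get("DetectPerson")),
                t -> "Med1_0_0".equals(t.get("TakingMedication")));
        System.out.println(machine.onTuples(Map.of("DetectPerson", "Alice")));          // false
        System.out.println(machine.onTuples(Map.of("TakingMedication", "Med1_0_0")));   // true
    }
}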

3.1 Use-case Study

In order to show that personalisation can be performed by simply changing a few parameters, we focus on queries related to detecting the activities falling and taking medication (see Figure 2). These activities should be detected in two different instances with minimal rewriting of the queries. The instances are excerpts from two environments taken from related work: the WSU smart home project [1] and MIT's PlaceLab apartment [2]. The instances are shown in Figure 3, are realised through our proof-of-concept implementation and are equipped with sensors from our sensor model. The walls are all objects with permeability value 0 for light and 0.01 for radio signals.

For the fall detection, timing is not relevant because a fall can happen at any time of the day. Hence, the timing should not be specified. The fall can also happen anywhere in the home and the system has to constantly poll the sensors that provide FallDetected. The query that detects the fall is then very simple: (FallDetected = personID). The value personID identifies the person, and must be updated for the particular instance. The personalisation process is done when the system investigates the available sensor configurations that provide the capability FallDetected and checks whether the current instance provides these sensors. If the sensors are provided, the system instantiates the query and starts reading the data tuples from the relevant sensors. If not, the system informs the application programmer that the query cannot be instantiated and shows the list of sensors that need to be in the environment.
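The kind of check the system performs at instantiation can be sketched as follows: for each capability the query addresses, look for an installed sensor that provides it, and otherwise report what is missing. The API and sensor names are invented for illustration.

import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative capability-to-sensor binding (invented API): for every capability a query
// addresses, find a sensor in the current environment that provides it, and report the
// capabilities that no installed sensor can cover.
public final class CapabilityBinding {

    record Sensor(String name, Set<String> capabilities) {}

    /** Maps each requested capability to a providing sensor, or reports the missing ones. */
    static Map<String, String> bind(Set<String> requested, List<Sensor> installed) {
        Map<String, String> binding = new HashMap<>();
        Set<String> missing = new HashSet<>();
        for (String capability : requested) {
            installed.stream()
                     .filter(s -> s.capabilities().contains(capability))
                     .findFirst()
                     .ifPresentOrElse(s -> binding.put(capability, s.name()),
                                      () -> missing.add(capability));
        }
        if (!missing.isEmpty()) {
            // The query cannot be instantiated; tell the programmer what is missing.
            throw new IllegalStateException("No sensor provides: " + missing);
        }
        return binding;
    }

    public static void main(String[] args) {
        List<Sensor> installed = List.of(
                new Sensor("wrist accelerometer", Set.of("Accelerometer", "FallDetected")),
                new Sensor("kitchen camera", Set.of("Camera", "DetectPerson")));
        System.out.println(bind(Set.of("FallDetected"), installed));
    }
}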

In order to detect that the monitored person takes medications, it is sufficient to have sensors that provide the capability TakingMedication, and which return the type of medication that has been taken. If the monitored person should take several medications, it is sufficient to use the ∧ operator between each of the medication types.


Figure 3: Two environments with different setups.

The During concurrency operator can be used to describe the temporal relation. The first part of the query identifies that the medications are taken while the second part of the query identifies that the monitored person is within the location of interest related to the medication cupboard. This location of interest is called MedCupboard and is defined with different coordinates for the two environments. The query that the application programmer has to write is based on this template:

during(((TakingMedication = Med1_0_0, timestamps) &&
        ... && (TakingMedication = MedN_0_0, timestamps)),
       (DetectPerson = personID, MedCupboard, timestamps))

In order to show that two different types of sensors can provide the same capabilities, we have placed two cameras in the kitchen (Figure 3 a)) and RFID tags in the bathroom (Figure 3 b)). The camera in the medicine cupboard covers the location of interest MedCupboard. The coverage area of the two cameras has an angle of 90°, and the coverage area of the camera inside the cupboard is reduced by the panels. In the bathroom, MedCupboard is covered by three active RFID tags named Med1_0_0, Med2_0_0 and Med3_0_0. The three tags are also attached to the medication and provide the capabilities TakingMedication and DetectPerson. Affected by the walls in the medication cupboard, the coverage areas of the tags are shown as small irregular circles. This is automatically calculated by our system based on the signal type and the permeability values of the walls. The wrist-worn RFID reader of the monitored person returns the correct readings when it is within the coverage areas of the tags.

By using the query above, the application programmer only needs to personalise the types of medications, the timestamps and the coordinates of the locations of interest.


For instance, the monitored person Alice should take all her medication between 08:00h and 09:00h every day. This process should take a maximum of six minutes, and taking each medication should take one minute. The last part of the query can then be rewritten as (DetectPerson = Alice, MedCupboard, 08:00h, 09:00h, min 6%). The timestamps in each of the queries addressing TakingMedication are rewritten to, for instance, (TakingMedication = Med1_0_0, 1m). For the monitored person Bob, each of the medications should be taken at different times during the day. The application programmer needs to write one query for each of the medications, hence only one medication is supposed to be taken for each of the queries. For each of the queries the timestamps and coordinates of the locations of interest are simply updated to match the required temporal behaviour.

3.2 Performance Evaluation

It is important that the system detects the events in near real-time, i.e., that the system detects events when they happen. In this section we evaluate the processing time and scalability of our system with respect to real-time detection of events. We want to answer two questions through our experiments:

1. How does the number of sensors to be evaluated influence the processing time? In some applications there might be a need for a considerable number of sensors.

2. How does the complexity of queries affect the processing time?

To answer the first question, we need to increase the number of sensors. To ensure that the evaluated number of sensors increases at every evaluation step, we add ten additional sensors per original sensor in each experiment. Thus, in the first experiment we start with the two sensors that provide DetectPerson. We perform four additional experiments with 22, 42, 62, and 82 sensors in total. The new sensors inherit the shapes and capabilities from the sensors that are already there.

The second question is answered by increasing the number of concurrent queries that are processed. We do this by increasing the number of medications that the monitored person has to take, from 1 to 51 medications. Even though it is not realistic that the monitored person should take 51 medications, we show that our system manages to handle that number of concurrent atomic queries. In this experiment we keep the number of sensors fixed and use the original installation with two sensors.

We use synthetic sensors and workloads, which are implemented through predetermined data value sequences stored in files. This enables us to repeat experiments and to know the ground truth, i.e., the sensor readings that reveal the behavioural patterns of the monitored person.



Figure 4: Processing time with 2, 22, 42, 62 and 82 sensors in the environment.

Since our implementation uses synthetic sensors, we cannot include the time it would take to gather data tuples from the sensors over a network. Hence, we measure the event processing time as the duration from the point in time when all the relevant data tuples are collected until the system has evaluated the tuples. We call this a discrete time step.
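For reference, the processing time per discrete time step can be measured in Java roughly as follows; evaluateTuples() stands in for the prototype's actual evaluation step and is hypothetical.

// Rough sketch of how the per-time-step processing time could be measured once all
// relevant data tuples are available; evaluateTuples() is a hypothetical placeholder.
public final class TimeStepMeasurement {

    static void evaluateTuples() {
        // Placeholder for feeding the collected tuples through the query state machine.
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        evaluateTuples();
        double millis = (System.nanoTime() - start) / 1_000_000.0;
        System.out.printf("Processing time for this time step: %.3f ms%n", millis);
    }
}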

In order to use a relevant environment we use the kitchen from Figure 3 a) with a query based on the one that detects that the monitored person takes medication. The first part of the query, taking medication, should take 4 time steps. The monitored person should stand by the medicine cupboard for 8 time steps to satisfy the second part of the query. At time step 3 the monitored person moves into the coverage area of the camera that covers MedCupboard. This causes the second part of the query to start. At time step 4 the monitored person takes the medication. This starts the first part of the query. At time step 7 the first part of the query finishes successfully and at time step 10 the second part finishes successfully.

The average processing times for the first experiment are shown in Figure 4. The x-axis shows the time steps of the simulation and the y-axis shows the processing time in milliseconds (ms). The plot shows that the processing time increases with the total number of sensors. There are two peaks, at time step 7 and time step 10, which are related to successfully finishing the two parts of the query. In total, the maximum processing time is 12.8 ms for the scenario handling the readings from 82 sensors.

Figure 5 shows the processing time and it is clear from the plot where the processing starts. The four time steps used by the first part of the query clearly show a linear increase in the processing time between time steps 4 and 7 when the number of concurrent queries increases. There is also a slight peak in the plot when the second part of the query starts at time step 3 and stops at time step 10.



Figure 5: Processing time with an increasing number of concurrent queries.

The average processing time for 51 concurrent queries is only 55.1 ms.

We have given the system workloads of up to 82 sensors that report similar readings in the first experiment, and 51 concurrent queries in the second experiment. The experiments show that even for the highest workload the event processing part of our system handles real-time processing of data tuples very well.

4 Conclusion

In this paper we define three models and a language that simplify the task of application programmers to match specific instances, i.e., homes, sensors and monitored persons. We support our claims by showing how the same query applies to two different environments with two different sensor configurations.

Challenges related to personalisation of home care systems are addressed by Wang and Turner [5], who use a policy-based system with event-condition-action (ECA) rules where certain variables can be changed for each instance. They also provide the possibility of addressing sensor types instead of sensors directly, supported by a configuration manager that finds the correct sensors. However, they do not provide separate models for the events, sensors and the environment to show how one capability can be provided by several different types of sensors in different environments.

Our event model is inspired by the semantics of atomic and complex events as well as abstractions to higher level queries from Luckham [3]. Our event language extends Luckham's contributions by explicitly using capabilities and locations of interest in the atomic events.


To the best of our knowledge, there exists no system that combines all these aspects and addresses them in a structured way related to complex event processing from sensor data.

Event processing needs to be done in real-time and we can show with our prototype that it scales well with respect to the number of sensors and concurrent atomic queries. The event processor in our system only uses 12.8 milliseconds to evaluate 82 sensors, and only 55.1 milliseconds to evaluate 51 concurrent queries.

Future work is to extend our system with statistical models for personalisation of timing. With respect to covering locations of interest, we will investigate algorithms for optimising sensor placement. In addition we are currently instrumenting our labs and offices with cameras and radio-based sensors to create a real world test-bed for our system.

References

[1] D. J. Cook and M. Schmitter-Edgecombe. Assessing the quality of activities in a smart environment. Methods of Information in Medicine, 2009.

[2] S. S. Intille, K. Larson, J. S. Beaudin, J. Nawyn, E. M. Tapia, and P. Kaushik. A living laboratory for the design and evaluation of ubiquitous computing technologies. In CHI '05: CHI '05 Extended Abstracts on Human Factors in Computing Systems, pages 1941–1944, New York, NY, USA, 2005. ACM.

[3] D. C. Luckham. The Power of Events: An Introduction to Complex Event Processing in Distributed Enterprise Systems. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 2001.

[4] H. Storf, M. Becker, and M. Riedl. Rule-based activity recognition framework: Challenges, technique and learning. In Pervasive Computing Technologies for Healthcare (PervasiveHealth 2009), 3rd International Conference on, pages 1–7, 2009.

[5] F. Wang and K. J. Turner. Towards personalised home care systems. In PETRA '08: Proceedings of the 1st International Conference on PErvasive Technologies Related to Assistive Environments, pages 1–7, New York, NY, USA, 2008. ACM.

[6] D. H. Wilson. Assistive Intelligent Environments for Automatic In-Home Health Monitoring. PhD thesis, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, September 2005.
