
Advanced Robotics 0 (2009) 1–18 www.brill.nl/ar

Full paper

Improving Search and Rescue Using Contextual Information

D. Calisi*, L. Iocchi, D. Nardi, G. Randelli and V. A. Ziparo

Dipartimento di Informatica e Sistemistica, 'Sapienza' University of Rome, Via Ariosto 25, 00185 Rome, Italy

Received 14 November 2008; accepted 11 March 2009

Abstract
Search and rescue (SAR) is a challenging application for autonomous robotics research. The requirements of this kind of application are very demanding and are still far from being met. One of the most compelling requirements is the capability of robots to adapt their functionalities to harsh and heterogeneous environments. In order to meet this requirement, it is common to embed contextual knowledge into robotic modules. We have previously developed a context-based architecture that decouples contextual knowledge, and its use, from typical robotic functionalities. In this paper, we show how it is possible to use this approach to enhance the performance of a robotic system involved in SAR missions. In particular, we provide a case study on exploration and victim detection tasks, specifically tailored to a given SAR mission. Moreover, we extend our contextual knowledge formalism in order to manage complex rules that deal with spatial and temporal aspects that are needed to model mission requirements. The approach has been validated through several experiments that show the effectiveness of the presented methodology for SAR.
© Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2009

Keywords
Context, search and rescue robots, exploration, victim detection

1. Introduction

Modern robotic applications for search and rescue (SAR) provide new challenges for research in autonomous robotics. Indeed, one of the most compelling requirements for rescue robots is the ability to adapt to the many different situations they encounter during a rescue mission. Several approaches try to meet this requirement by embedding contextual knowledge into the single functionalities of the robot. For example, it is possible to improve the map construction process by knowing that the robot is currently moving in the corridor of an office building. The 'contextual approach' has been shown to be an effective solution, although limited in its current form.

* To whom correspondence should be addressed. E-mail: [email protected]

© Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2009. DOI: 10.1163/156855309X452539


In fact, most systems hardcode the contextual knowledge into each module responsible for a specific robot functionality. This has two main drawbacks: (i) the replication of knowledge introduces the risk of inconsistency and (ii) the implicit representation of knowledge does not allow reasoning about it and, thus, fully exploiting the advantages achievable through it. A notable exception is the work by Turner [1], which specifically addresses contextual knowledge in robotic applications. In particular, Turner provides a definition and a taxonomy for contextual knowledge that characterizes context in terms of environmental, mission-related and agent-related features, but its use is limited to plan selection.

In our previous work [2], we presented a contextual architecture that overcomes some of the limitations of current context-based systems by explicitly representing contextual knowledge and by decoupling it from the robotic functionalities. This allows us to design the system in such a way that some of the processes required on the robot can be adapted based on knowledge that we call 'contextual'. The use of contextual knowledge leads to a cognition-driven design of the robotic system, which improves its performance and scope of applicability.

In this work, we apply the methodology presented in Ref. [2] to a case study on SAR. SAR operations involve the localization, extrication and initial medical stabilization of victims trapped in confined spaces, using mixed teams of human operators and mobile robots. Even if it is still unrealistic to define a robotic system able to autonomously accomplish such a task, contextual knowledge can improve the performance. Indeed, robots designed for SAR operations can significantly benefit from this type of knowledge, because of the difficulties of the task and because of the environmental complexity.

While our previous work focused on low-level tasks (such as mapping and navigation) and environmental context, in this paper we focus on high-level activities (such as exploration and victim detection) and mission-related knowledge. We show how a rescue robot can adapt its behavior in harsh and heterogeneous environments, based on a priori knowledge that can be provided before the mission. In order to achieve this goal, we enrich our representation to deal with complex spatial and temporal properties. In particular, we include functions and spatio-temporal variables that allow for the definition of complex rules in a compact way.

The experimental validation of our approach has been performed in harsh scenarios, which typically cause several kinds of robot failures. For example, robots can get lost or move out of communication range with the base station. We develop our experiments in the robotic simulator USARSim [3]. The experiments have been performed on very large and unstructured scenarios used for the RoboCup Rescue competition. The maps and the mission-related knowledge were provided by the National Institute of Standards and Technology (NIST). The experimental results show that the use of contextual knowledge allows for a significant improvement in performance with respect to non-context-based systems.


The remainder of the paper is structured as follows. In Section 2, we present the state of the art of context-based robotic systems following Turner's taxonomy, focusing on those that are more related to SAR robots. Then, after introducing our context-based architecture in Section 3, we describe our context-based system for SAR in Section 4. Finally, we present the experimental results in Section 5 and conclude with a discussion in Section 6.

2. Related Work

Contextual knowledge has been exploited by several robotic systems for different tasks and in different environments. In the following, we summarize several uses of contextual knowledge that have been proposed in the literature, with a particular focus on SAR applications. We classify approaches based on the type of contextual knowledge the robot exploits, i.e., mission-related, environmental and agent-related.

2.1. Mission-Related Contextual Systems

Context-driven choices are useful in robotic scenarios for adapting the robot behaviors to the different situations that they may encounter during execution. This is typically addressed through hierarchical approaches to planning [4], meta-rules [5] and plan selection (RAPs [6], PRS [7], ESL [8]). Such approaches provide very general planning frameworks and can thus be used to manage mission-related knowledge (actually, some of these works also use environmental and introspective knowledge). However, the use of contextual knowledge is embedded in the planning process, and it does not address the relationship between the symbolic representation and the underlying numerical data.

Another task where mission-related knowledge can improve the robot's performance is multi-objective search and exploration; in this case, contextual knowledge can change the relative importance of one kind of subgoal with respect to the other subgoals. For example, Calisi et al. [9] highlight that search and exploration requires a choice among often conflicting subgoals, such as the exploration of unknown areas and the search for features in known areas.

2.2. Environmental Contextual Systems

Environmental contextual knowledge can be used in robot mapping to describe the abstract structure of the environment. On the one hand, extending metric maps with contextual knowledge (like rooms, corridors, surfaces) allows first-responders to interact with a rescue robot in an easy way. On the other hand, such knowledge can be used to improve autonomous systems (e.g., a robot aware of moving in a certain kind of environment could go faster). Nüchter et al. [10] employ environmental knowledge by using geometric knowledge to establish correspondences with data, thus making the association process more reliable and faster. The work by Rottmann et al. [11] proposes an approach to classify places within indoor environments using a hidden Markov model.


The resulting semantic labels are useful for path-planning, localization and human–robot interaction. These works address the extraction of semantic information concerning structured environments, such as rooms or corridors, which does not fully capture the features of the unstructured environments typical of SAR activities.

A SAR-related robot task is the design of effective behaviors on rough terrain, which has been pursued by exploiting terrain classification (see, e.g., Ref. [12]). In particular, Aboshosha and Zell [13] propose an approach for adapting robot behavior for victim search in disaster scenarios. The main idea is that the robot should adapt its behavior based on an obstacle distribution histogram, to be used as a context for two different adaptive controllers: one based on stochastic control theory and another based on a fuzzy inference system. Usually, in these cases, ad hoc representations, such as the behavior maps by Dornhege and Kleiner [14], are used for representing features like the presence of ramps or open stairs. This type of contextual knowledge can clearly be viewed as environmental knowledge and can be used to select or tune behaviors in the more general setting of a contextual architecture.

2.3. Agent-Related Contextual Systems

Newman et al. [15] exploit introspective (as well as environmental) knowledge by using two different algorithms for incremental mapping and loop closure: efficient incremental three-dimensional (3-D) scan matching is used when mapping open-loop situations, while a vision-based system detects possible loop closures. The design of basic behaviors can also benefit from the use of an agent-related context. In particular, it can be used for behavior specialization, i.e., to fine-tune the parameters. The use of contextual knowledge for behavior specialization is suggested by Beetz et al. [16], where introspective knowledge is used to obtain smooth transitions between behaviors by applying sampling-based inference methods. Grisetti et al. [17] define three different phases in robot mapping algorithms, i.e., exploration, localization and loop closure. They propose to use introspective knowledge by detecting those phases and by tuning the computation accordingly. While the above works provide interesting examples of specific introspective capabilities, the exploitation of introspection for self-diagnosis is one of the key features that we try to address systematically through a context-based architecture.

3. A Context-Based Architecture

A context-based architecture [2] resembles a feedback controller (Fig. 1). It can be formally defined as a quadruple 〈S, Td/c, R, Tc/d〉, where:

• S is the context-free system, which represents any conventional robotic system composed of modules such as a motion controller, mapper, exploration module, localization module, etc.


Figure 1. Context-based architecture as a feedback controller.

• Td/c is a finite set of data/context transduction modules, which process numerical output from S to extract contextual knowledge, represented in a symbolic language.

• R is a block of contextual reasoning modules, used to infer new knowledge useful for the tasks of the system.

• Tc/d is a set of context/data transducers, which transform any symbolic representation into numerical data in order to control modules in S (closing the loop).

Intuitively, based on the information extracted from the output of S, R can infer contextual knowledge which can be used to control the modules in S. The Tc/d and Td/c modules are required to interface S and R by transforming data into symbols and symbols into data, respectively. The context system loop is executed at a much lower frequency than the context-free loop; thus, it does not affect the system reactivity. The definition of R does not specify how the contextual reasoning module should be implemented. Indeed, this is a design choice that requires a trade-off between expressive power and computational complexity. In the following, we use a rule-based system (i.e., first-order Horn clauses) that contains a set of facts concerning the environment and a set of rules. A rule is composed of a condition which, if verified, causes an effect:

IF α THEN PARAMETER = value,

where α is a formula representing the condition of the rule.
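Since our reasoner is a PROLOG rule engine (Section 4.1), a rule of this form maps naturally onto a Horn clause. The following is a minimal sketch of this encoding; the predicate parameter/2 and the condition facts are illustrative names, with the facts assumed to be asserted by the Td/c transducers:

    % Sketch: "IF alpha THEN PARAMETER = value" as a Horn clause.
    % The head binds a symbolic value to a parameter name; the body
    % is the condition alpha, here a conjunction of contextual facts.
    parameter(motion_planner, finePlanner) :-
        robotStalled.
    parameter(map_enabled, false) :-
        bigRamp.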


While a detailed discussion of the proposed architecture is provided in Ref. [18], here we focus on those aspects that have been specifically introduced to cope with SAR missions. In order to suitably model mission requirements, we need to represent spatial and temporal variables. Consequently, we allow a rule to include variables and function symbols. For example, for a given value of the spatial variable Pos, the function mobility(Pos) returns the difficulty of navigating in Pos. In our previous work, each acquired fact was limited to a localized perception, both from a spatial and a temporal point of view; e.g., the fact 'robotStalled' could only describe a situation where the robot stalled in the current position and at the current time. Through spatial and temporal variables, on the other hand, it is possible to express conditions and events that hold in a certain place and at a certain time. Consider, for example, using contextual knowledge to assess the mobility hardness in frontier-based exploration. Frontiers [19] are regions on the boundary between open space and unexplored space. Given a frontier F located at PosF, the function mobility(PosF) retrieves the frontier's mobility level. As for temporal variables: if we had a function batteryLevel(T), which returns the battery level at a given time T, we could set a battery threshold beyond which the robot is forced to come back to its home position. It is also possible to imagine more complex situations where, using spatial or temporal variables, the robot behaves in different ways in different places or time intervals. Consider the following example that involves spatial variables: a robot is moving towards a target point in the environment, following a trajectory defined as a set of intermediate points. At a given time t, the robot is localized in the environment at CurrentPose and it adapts its speed considering the mobility hardness of the next intermediate point to reach. If the next intermediate point has an easy mobility level, then the robot sets a high speed; otherwise it slows down, as described in the following rules:

IF robotPose(CurrentPose, t) ∧ plan(CurrentPose, NextPose) ∧ mobility(NextPose) == hardMobility
THEN MAX_SPEED = lowSpeed

IF robotPose(CurrentPose, t) ∧ plan(CurrentPose, NextPose) ∧ mobility(NextPose) == easyMobility
THEN MAX_SPEED = highSpeed,

where plan(CurrentPose, NextPose) tells whether the navigation path between CurrentPose and NextPose is part of the current plan, given that at the current time t the robot is in position CurrentPose. Similarly, it is possible to define rules with temporal variables. Consider a function robotStalled(T), indicating a robot stall condition at time T − Δ(T). If at time T, which is the current time, the robot is not stalled anymore, knowing that it just escaped from a stall, we can moderately increase the speed in order not to stall again:

IF robotStalled(T − Δ(T)) ∧ ¬robotStalled(T) ∧ now(T)
THEN MAX_SPEED = mediumSpeed.
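To make the formalism concrete, these rules (plus the battery threshold mentioned above) could be written for a PROLOG reasoner roughly as follows. This is a sketch under the assumption that robotPose/2, plan/2, mobility/2, robotStalled/1, batteryLevel/2, delta/2 and now/1 are facts and helpers maintained by the Td/c transducers, with functions rendered as relations; the threshold value is illustrative:

    % Spatial rules: bound the speed by the mobility of the next waypoint.
    parameter(max_speed, lowSpeed) :-
        now(T), robotPose(Current, T),
        plan(Current, Next),
        mobility(Next, hardMobility).
    parameter(max_speed, highSpeed) :-
        now(T), robotPose(Current, T),
        plan(Current, Next),
        mobility(Next, easyMobility).

    % Temporal rule: just out of a stall, resume at a moderate speed.
    parameter(max_speed, mediumSpeed) :-
        now(T), delta(T, D), Prev is T - D,
        robotStalled(Prev),
        \+ robotStalled(T).

    % Temporal rule: force a return home when the battery runs low.
    parameter(goal, homePosition) :-
        now(T), batteryLevel(T, Level),
        Level < 0.15.                  % illustrative threshold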

Our contextual architecture is an example of a heterogeneous layered architecture.


Thus, on the one hand, uncertainty is managed at a numerical level by S modules, based on specific methods to deal with uncertainty (e.g., simultaneous localization and mapping techniques for localization and mapping uncertainty). On the other hand, information management and decision making are handled at a symbolic level through R modules. Concerning the robustness of the overall system, some considerations are in order. First of all, as already mentioned, the system reactivity is not affected because the contextual loop is performed at a low frequency. Moreover, in certain circumstances, the use of contextual knowledge can improve reactivity by allowing fast detection of environment changes (e.g., an opening door), exploiting information about the dynamics of objects. Furthermore, it should be noted that planning is done for a few steps at a time, thus avoiding long, failure-prone plans. Finally, symbolic representations are more robust than numerical representations, as their discrete characterization partially absorbs the uncertainty and noise of the numerical data.

4. A Context-Based System for Rescue

Rescue operations require several tasks, such as navigation, exploration, mapping, localization, victim detection, etc. In our previous work, we showed the effectiveness of a contextual architecture with respect to navigation and mapping tasks; in this work, we extend the representation used through contextual rules dealing with spatial and temporal variables, and we focus on the improvement of the exploration and victim detection tasks, given some contextual knowledge.

The aim of the RoboCup Rescue Virtual Robots competition is to challenge the capabilities of autonomous robotic teams, providing competitors with an initial assessment of the environment. Hence, context-based systems can exploit this knowledge to improve their performance in SAR activities. In RoboCup Rescue, contextual knowledge, which is known before the mission, is coded into a geo-referenced map, using a format standardized by NIST (http://www.nist.gov) [20]. This 'a priori' map contains coarse-grained knowledge about the difficulty levels concerning mobility and victim detection. A similar map could also be assumed in realistic scenarios. In fact, it could be obtained from aerial views, possibly acquired by an unmanned aerial vehicle flying over the disaster area, from a previous map of the environment, or from well-known information about the disaster area (e.g., population density, presence of public areas). This map reflects the real initial assessment that first-responder teams send to the control center after a disaster. For example, a hard mobility region can contain slopes, ruined ground, holes, stairs, cluttered zones, etc. A hard victim detection area can contain victims occluded by objects, moving victims or situations where the victim detection subsystem can detect many false positives (nothing is stated about the probability of finding victims). As we will see in the following, other contextual knowledge is inferred from S modules (through Td/c modules) during the mission. The main problem of exploration in unstructured environments is that it requires the ability to avoid difficult areas, where the robot could stall.


Typical stall conditions include objects blocking the robot's motion, lack of reachable frontiers, rollovers, etc. Even if those places could contain victims, a robot blocked in a hole cannot notify any human operator about the presence of a victim. Exploiting contextual knowledge allows us to reduce risks and to implement smarter heuristics to detect victims (e.g., taking snapshots of hard zones, instead of moving the robot inside them).

Procedures for sensing victims based on artificial vision are prone to false positives, which need further analysis by human operators in order to distinguish real victims from false alarms. This is accomplished by taking pictures of the areas where the victim sensor detects something interesting to be reported to human operators. Moreover, in hard victim detection areas, it can be difficult even for a human operator to assess the presence of a victim, because of occluding objects or dust covering everything; in such cases, one photo may not be enough. When the robot is in such a situation, it is possible to take photos from different points of view or to take a panoramic photo from a distance. All these problems, as well as some heuristics to solve them, can be represented in a context-based system such as the one proposed here, by defining the type of knowledge that is necessary to acquire and the rules to deal with it, in order to select the best parameters for S modules to improve exploration and victim detection.

4.1. Contextual Knowledge: R Module

In this work, the reasoner R is a rule-based engine implemented in PROLOG (other formalisms could be used, e.g., Refs [1, 6, 7]). PROLOG adopts a 'first applicable' strategy, which means that the rule order is relevant. Table 1 reports the predicates and the functions used to assert facts extracted from S modules and turned into symbolic knowledge by Td/c, additionally specifying whether the type of information is spatial or temporal. The facts acquired by S modules (reported in the third column of Table 1) characterize the self-diagnosis capability of our system, which is the feedback loop of the context-based architecture.

Table 1. Functions used for contextual reasoning, with their spatial and temporal relevance.

Contextual function    Meaning                                                    Detected by         Spatial/temporal information
smallRamp              robot encounters a small ramp                              elevation mapper    none
bigRamp                robot encounters a big ramp                                elevation mapper    none
robotStalled           robot stalled                                              motion planner      none
mobility(Pos)          location Pos has a certain mobility difficulty             a priori map        spatial
victims(Pos)           location Pos has a certain victim detection difficulty     a priori map        spatial
batteryLevel(T)        robot's battery level at time T                            battery controller  temporal
detectedVictim(Pos)    victim sensor detected a possible victim at position Pos   victim sensor       spatial
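For concreteness, the entries of Table 1 could surface in the reasoner as plain PROLOG facts asserted by the Td/c transducers, along these lines (positions, times and values are illustrative, not taken from the actual system):

    % Localized perceptions (no spatial/temporal argument).
    smallRamp.
    % Spatial facts from the a priori map.
    mobility(pos(12.0, 4.5), hardMobility).
    victims(pos(12.0, 4.5), hardIdVictims).
    % Temporal fact from the battery controller (time in seconds).
    batteryLevel(3600, 0.42).
    % Spatial fact from the victim sensor.
    detectedVictim(pos(7.5, 2.0)).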


Table 2. Parameters produced by the R modules (they will be transduced by the Tc/d modules into numeric parameters for the S modules).

Module          Parameter                 Values
Navigator       MOTION_PLANNER            {finePlanner, coarsePlanner}
                MAX_SPEED                 {lowSpeed, mediumSpeed, highSpeed}
                MAX_JOG                   {lowJog, mediumJog, highJog}
Mapper          MAP_ENABLED               {true, false}
                SCAN_MATCH                {on, off}
                ELEVATION_MAPPER          {on, off}
Exploration     EXPL_ENABLED              {true, false}
                MOB_WEIGHT                {lowMobWeight, highMobWeight}
                VICT_WEIGHT               {highVictWeight, medVictWeight, lowVictWeight, nullVictWeight}
                DIST_WEIGHT               {lowDistWeight, highDistWeight}
                INVALIDATE_FRONTIER       {true, false}
Victim manager  MULTI_PHOTO               {on, off}
                TAKE_SNAPSHOT             {true, false}
                INCREASE_SNAP_DISTANCE    {true, false}

Thus, the elevation mapper (which builds a representation of the ground surface topography using two differently tilted lasers) detects the presence of small or big ramps, which are represented by the symbolic formulas smallRamp and bigRamp. The victim sensor detects possible victims and the motion planner notifies any robot stall condition.

The contextual output of the R subsystem (as reported in Table 2) is then transduced into numeric parameters (through Tc/d) for the modules of S.
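A Tc/d transducer can be as simple as a lookup from symbolic values to numeric module settings. A hypothetical mapping is sketched below; all numbers are illustrative and not taken from the actual system:

    % Sketch of a T_c/d lookup: symbolic parameter value -> numeric setting.
    transduce(max_speed, lowSpeed,    0.2).   % m/s
    transduce(max_speed, mediumSpeed, 0.5).   % m/s
    transduce(max_speed, highSpeed,   1.0).   % m/s
    transduce(max_jog,   lowJog,      0.3).   % rad/s
    transduce(max_jog,   mediumJog,   0.7).   % rad/s
    transduce(max_jog,   highJog,     1.2).   % rad/s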

4.2. Robot Functionalities: S Modules

The parameters MOB_WEIGHT, VICT_WEIGHT and DIST_WEIGHT are the weights that the reasoning modules R estimate using the robot's position and the a priori map of the environment. The exploration module in S then uses them to select the 'best' unvisited frontier within a set of candidates. The criteria used to determine the best frontier are the distance from the robot's estimated position, the mobility hardness of the frontier location and the victim detection hardness. These three parameters are related to each other, depending on the contextual knowledge about the robot's position, which determines the weights. For example, if the robot is located in an easy mobility area, then it is reasonable to give more importance to those areas where mobility remains easy, but where victims would be easily detectable, because there is no mobility problem in the present circumstances. On the other hand, if the robot is in a hard mobility zone, then it is very important to leave the area as soon as possible; hence, mobility will have a greater weight.
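One plausible realization of this selection is a weighted sum over the candidate frontiers, sketched below. Here frontier/1, distance/2, mobilityCost/2, victimCost/2 and weight/2 are hypothetical predicates, with the weights set by R according to Table 3 and lower scores assumed preferable; the paper does not prescribe this particular combination scheme:

    % Score each candidate frontier and pick the one with the lowest score.
    frontierScore(F, Score) :-
        distance(F, D), mobilityCost(F, M), victimCost(F, V),
        weight(dist, Wd), weight(mob, Wm), weight(vict, Wv),
        Score is Wd*D + Wm*M + Wv*V.
    bestFrontier(Best) :-
        findall(Score-F, (frontier(F), frontierScore(F, Score)), Scored),
        keysort(Scored, [_-Best|_]).   % keysort orders by ascending score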


Table 3. MOB_WEIGHT and VICT_WEIGHT parameters for the exploration module.

Mobility          Victims
                  easyIdVictims                     mediumIdVictims                    hardIdVictims
easyMobility      (lowMobWeight, highVictWeight)    (highMobWeight, highVictWeight)    (highMobWeight, medVictWeight)
mediumMobility    (lowMobWeight, medVictWeight)     (highMobWeight, medVictWeight)     (highMobWeight, lowVictWeight)
hardMobility      (highMobWeight, lowVictWeight)    (highMobWeight, nullVictWeight)    (highMobWeight, nullVictWeight)
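Table 3 lends itself to a direct encoding as PROLOG facts that R can query with the mobility and victim detection context of a candidate location; the predicate name weights/4 is illustrative, while the values are exactly those of the table:

    % weights(MobilityContext, VictimContext, MobWeight, VictWeight).
    weights(easyMobility,   easyIdVictims,   lowMobWeight,  highVictWeight).
    weights(easyMobility,   mediumIdVictims, highMobWeight, highVictWeight).
    weights(easyMobility,   hardIdVictims,   highMobWeight, medVictWeight).
    weights(mediumMobility, easyIdVictims,   lowMobWeight,  medVictWeight).
    weights(mediumMobility, mediumIdVictims, highMobWeight, medVictWeight).
    weights(mediumMobility, hardIdVictims,   highMobWeight, lowVictWeight).
    weights(hardMobility,   easyIdVictims,   highMobWeight, lowVictWeight).
    weights(hardMobility,   mediumIdVictims, highMobWeight, nullVictWeight).
    weights(hardMobility,   hardIdVictims,   highMobWeight, nullVictWeight).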

The weights described for the mobility and victim parameters are reported in Table 3. If the contextual knowledge identifies some zones (e.g., holes or stairs) as critical, then setting INVALIDATE_FRONTIER to true tells the exploration module to discard a given frontier. If the victim sensor measures a possible victim in a zone that the reasoning module recognizes as hard for detection, then the MULTI_PHOTO parameter is switched on to take photos from different points of view in order to avoid possible occluding objects. Furthermore, the parameter INCREASE_SNAP_DISTANCE is also enabled to take a panoramic snapshot from a distance in the case of elevated victims. Finally, if the robot encounters a hard mobility zone while moving towards a possible victim, then it stops and takes a snapshot from where it is (enabling the TAKE_SNAPSHOT parameter), without undertaking a failure-prone exploration of a difficult area. Some of the rules of the reasoning module that activate the heuristics introduced above are reported in Table 4. On the one hand, the context-based architecture allows for reasoning about contextual information. On the other hand, complex behaviors, which typically require a high degree of interaction among several robotic modules, are controllable without the need to hardcode either the interaction or the reasoning process. For example, the victim manager communicates the presence of a new potential victim and the reasoner asserts that the victim is localized in a hard mobility area (the last row of Table 4). It then commands the navigator module to move towards the victim, meanwhile acting on the victim manager to increase the snapshot radius to avoid the hard mobility zone. Suppose the navigator then communicates a robot stall condition, expressed by the function robotStalled. Again, the reasoner activates a more precise navigator to let the robot unstall, concurrently stopping the exploration module. The robot manages to unstall, but its battery level (batteryLevel(t)) is low, so the reasoner module commands the navigator to go to the closest communication point to send the acquired information. This example shows both how the reasoning system concurrently controls more than one robotic module, and how it manages the interaction and the information exchange among them.


Table 4. Some rules in the reasoning module R, with their associated modules and parameters.

IF mobility(RobotPos) == easyMobility AND victims(RobotPos) == easyIdVictims THEN
  navigator:       MAX_SPEED = highSpeed, MAX_JOG = highJog, MOTION_PLANNER = coarsePlanner
  mapper:          SCAN_MATCH = on, MAP_ENABLED = on, ELEVATION_MAPPER = on
  exploration:     DIST_WEIGHT = highDistWeight, MOB_WEIGHT = lowMobWeight, VICT_WEIGHT = highVictWeight
  victim manager:  MULTI_PHOTO = off

IF mobility(RobotPos) == hardMobility AND victims(RobotPos) == hardIdVictims THEN
  navigator:       MAX_SPEED = lowSpeed, MAX_JOG = lowJog, MOTION_PLANNER = finePlanner
  mapper:          SCAN_MATCH = off, MAP_ENABLED = off, ELEVATION_MAPPER = on
  exploration:     DIST_WEIGHT = lowDistWeight, MOB_WEIGHT = highMobWeight, VICT_WEIGHT = nullVictWeight
  victim manager:  MULTI_PHOTO = on

IF robotStalled THEN
  navigator:       MOTION_PLANNER = finePlanner
  exploration:     EXPL_ENABLED = false

IF currentVictim(Victim) AND detectedVictimPos(Victim, VictimPos) AND mobility(RobotPos) != hardMobility AND mobility(VictimPos) == hardMobility THEN
  victim manager:  INCREASE_SNAP_DISTANCE = true

The last rule of Table 4 reports a typical example of how a spatial function, such as mobility, is used on two different locations (both the robot's and the possible victim's positions), and applied to select a particular heuristic (in this case, to avoid entering the hard mobility area and to take the photo from a distance).
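As a PROLOG sketch, this last rule could read as follows; the predicate names follow Table 1 and Table 4, with robotPos/1 standing for the robot's current position and otherwise illustrative:

    % Take the snapshot from farther away when the victim, but not the
    % robot, lies in a hard mobility area.
    parameter(increase_snap_distance, true) :-
        currentVictim(Victim),
        detectedVictimPos(Victim, VictimPos),
        robotPos(RobotPos),
        \+ mobility(RobotPos, hardMobility),
        mobility(VictimPos, hardMobility).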

5. Experiments

5.1. Experiment Design

The proposed system has been tested with the USARSim simulator; the chosen environment is an indoor map used during the RoboCup German Open 2008, which is reported in Fig. 2.


Figure 2. German Open 2008 map.

The contextual architecture (i.e., the inclusion of the Td/c, R and Tc/d modules) has been added to a pre-existing system (hence taken as the S modules). Both the pre-existing S modules and the new R, Td/c and Tc/d modules have been implemented using the OpenRDK framework (http://openrdk.sf.net), developed at the SIED Laboratory (http://sied.dis.uniroma1.it).

The experiments consist of 20 runs, 10 with the context-based architecture and 10 without, each 15 min long. The task is to explore the environment and look for possible victims. The robot used in the experiments is a differential drive mobile base, equipped with a horizontal laser scanner (used to build the 2-D map of the environment and to localize the robot) and another laser scanner, which is inclined on the sagittal plane (this is used to build an elevation map in order to detect objects not lying on the horizontal laser plane). Moreover, the 'victim sensor' simulates an image processing module, whose goal is to find human victims. Both the context-based system and the non-context-based system run without any human control; thus, in the case of a robot stall, the systems must solve the problem autonomously. Both systems are provided with the a priori map of the environment. It must be noted that the R module is totally decoupled from the S modules. The interaction between the reasoning and the robotic layer (R and S, respectively) is only based on the feedback provided by the S subsystem and the parameters set by the reasoning module, according to our definition of context-based architecture (see Section 3). Concerning the context-free system, its exploration exploits a priori information, but the mobility and victim weights are fixed (they are set to highMobWeight and medVictWeight, respectively, as these values turned out to be a good trade-off after experimental evaluation).


In fact, the robot does not consider any type of contextual knowledge to weight the reachable frontiers in different ways. Moreover, the parameter configuration used in the context-free system adopts a single-snapshot policy, because the multi-photo approach consists of taking photos from different points of view, which could be difficult in hard mobility areas and requires more time to be accomplished.

5.2. Experimental Results

The results of the experiments are shown in Table 5, where C denotes the context-based system, while NC denotes the context-free one. It can be noticed that, after 15 min, the area explored with the context-based system is 20% greater than the area covered with the context-free system; because of the low variance, this supports a systematic tendency rather than an occasional result influenced by particular runs. Furthermore, it must be noted that the environment is so large that, in each run, the robot explored different areas, some of which were more difficult than others. Even in these cases, the robot using the context-based system managed to explore a larger section of the environment. An example is the third run, in which the robot entered a very difficult room where, after it entered, an obstacle fell down blocking the exit and thus blocking access to a large part of the environment.

Table 5. Explored area (m²) and number of detected victims.

Run      Check1 (5 min)       Check2 (10 min)       Check3 (15 min)      Victims
C1       7146.27              10164.5               10276.7              3
C2       7613.97              8538.87               8541.6               1
C3       6918.8               7589 (stalled)        7589 (stalled)       2
C4       4502.93              5686.27               5947.27              1
C5       5657.4               8160.17               8766.03              1
C6       6753.07              8109.8                8109.8               3
C7       6663.03              9232.87               11071.57             3
C8       6287.63              6943.8                7520.93              2
C9       3908.43              4627.73               8094.13              1
C10      7602.77              10395.73              11389.77             2
Average  6305.43 (±1316.27)   7944.87 (±1386.54)    8730.68 (±973.06)    1.9 (±0.72)

NC1      5692.4               10030.2               10723.23 (stalled)   1
NC2      6562.4               6641.47 (stalled)     6641.47 (stalled)    1
NC3      3134.07              3721.5                4426.9 (stalled)     1
NC4      7055.73              7342.63 (stalled)     7342.63 (stalled)    1
NC5      6437.57              6853.93               9341.5               1
NC6      7960.6               7968.27 (stalled)     7968.27 (stalled)    1
NC7      5908.93              6103.77 (stalled)     6103.77 (stalled)    1
NC8      4934.7               4935.9 (stalled)      4935.9 (stalled)     1
NC9      6730.97              8267.7 (stalled)      8267.7 (stalled)     1
NC10     6084.87              6088.57 (stalled)     6088.57 (stalled)    1
Average  6050.22 (±906.16)    6795.39 (±1297.15)    7183.99 (±1544.67)   1


In order to explain these results, it is important to consider the evolution of the exploration, both with and without context. Without any contextual knowledge, the system selects frontiers using fixed weights and the robot can easily enter hard mobility areas, thus falling into a hole or rolling over. Conversely, the context-based system dynamically adapts to the environment near the robot. If it is in a hard mobility area, the robot moves towards easier zones (giving more weight to the mobility factor than to the victim detection and distance factors); as a consequence, this approach results in exploring safer areas first and deferring the hard parts of the environment, thus favoring robustness and a greater chance to detect victims. In SAR activities, it is very important for a robot to communicate to the base station what it has found and, therefore, it is better to skip some zones and return within communication range of the base station. If a robot stalled in a section without any communication link, the collected data would be lost. Avoiding any type of stall condition is also essential to increase the amount of explored area. In fact, if we consider larger time intervals (e.g., 25 min instead of 15 min), the robot controlled by the context-based system could continue to explore, because there has been no stall, as reported in the same table. On the other hand, without any contextual knowledge the robot does not recognize a stall condition and does not react to solve it, so its explored area would not increase. This can be quantified, as reported in Table 6, through the mean time to live (out of a total of 15 min), which is significantly greater for the context-based robot than for the context-free robot.

Table 5 also reports the number of correctly detected victims. It should be noted that in SAR activities and, even more so, in RoboCup competitions, the only accurate method to assess whether a possible victim is a false positive is to take a photo. In the 10 runs without context, the photo correctly captured the victim 6 times. In all the other cases, the photo did not allow any detection, because the victim was occluded or elevated above the ground. Using the context-based system and taking into consideration the hardness of victim detection in some areas, it has been possible to implement multi-photo heuristics from different points of view, thus improving victim detection. Furthermore, since the exploration tends to avoid hard mobility areas, if the victim sensor detects something in a critical part, the robot can move without entering it and take a photo in the direction of the possible victim.

Table 6. Mean time to live for a robot before an unsolvable stall condition.

Context-based system    Context-free system
14 min 20 s             8 min 12 s


In order to assess the statistical significance of the collected data, we used the Lilliefors test, which is well suited to small data samples and tests whether they come from a normally distributed population with unknown mean and variance. The test showed that both the context-based data samples (P-value = 0.3692) and the context-free data samples (P-value = 0.5) can be assumed to follow a normal distribution. (The P-value is the probability of obtaining a result at least as extreme as the one actually observed, given that the hypothesis is true. In this case, the hypothesis was that the data samples belonged to a normally distributed population. If the P-value is greater than a given significance level (usually set to 0.05, which corresponds to a 95% confidence level), then it is not possible to reject the hypothesis; hence, we cannot exclude that the data samples belong to a normal distribution.)

The two data sets were further reduced in size by applying Grubbs' test for outliers: C4 (P-value = 0.4192, G = 1.6288, U = 0.6725) and NC1 (P-value = 0.2384, G = 1.8126, U = 0.5944) were the two samples identified as outliers. (G is the difference between the outlier and the mean divided by the standard deviation, while U is the ratio of the sample variances with and without the suspected outlier.) Finally, we applied Welch's t-test to the two resulting data sets. The result is that we can state with at least 95% confidence that the two sets belong to two different normal distributions (P-value = 0.007); hence, they are statistically different (see Fig. 3b). Roughly speaking, this allows us to consider the difference in performance between the context-based system and the context-free one as statistically significant, and not due to chance.

6. Conclusions

In this paper, we presented an approach based on contextual knowledge for designing robotic systems that autonomously accomplish SAR missions. The architecture decouples the contextual reasoner from the other components of the system; this has the advantage of centralizing the collection and use of contextual knowledge.

Specifically, the context-based architecture was applied to design and implement a SAR robotic system focused mainly on exploration and search tasks. The reasoner, in this case, makes use of a set of rules written in a compact formalism, which can describe spatial and temporal events. We performed several experiments that show how an effective representation and use of knowledge about the context can lead to significant improvements of the system performance under changing operational conditions. Such generality and robustness of performance is supported by the proposed architecture, which decouples the typical robotic functionalities from the symbolic representation of, and inference about, contextual knowledge, without hand-coding it within the robotic modules. Our architecture takes advantage of contextual information, which is typically acquired from environmental perceptions. However, this knowledge acquisition process is not always possible and can be quite complex to achieve.



Figure 3. Context-based and context-free data sample statistics and their relative normal distributions. (a) Context-based and context-free data samples. Red lines represent the medians of the sets, while red crosses are outliers. (b) Data samples (without the two erased outliers) and their corresponding normal distributions. Circles are context-free samples, while squares are context-based samples. The two sets are sufficiently separable to conclude that they belong to two different distributions.

References

1. R. M. Turner, Context-mediated behavior for intelligent agents, Int. J. Human–Comp. Studies 48, 307–330 (1998).

2. D. Calisi, L. Iocchi, D. Nardi, C. M. Scalzo and V. A. Ziparo, Context-based design of robotic systems, Robotics Autonomous Syst. (Special Issue on Semantic Knowledge in Robotics) 56, 992–1003 (2008).

3. S. Balakirsky, C. Scrapper, S. Carpin and M. Lewis, USARSim: providing a framework for multi-robot performance evaluation, in: Proc. Int. Workshop on Performance Metrics for Intelligent Systems, Gaithersburg, MD, pp. 00–00 (2006).

4. R. Simmons and D. Apfelbaum, A task description language for robot control, in: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Victoria, BC, Vol. 3, pp. 1931–1937 (1998).


5. A. Saffiotti, K. Konolige and E. Ruspini, A multivalued logic approach to integrating planning and control, Artif. Intell. 76, 481–526 (1995).

6. R. J. Firby, Adaptive execution in complex dynamic worlds, PhD Thesis, Yale University (1989).

7. M. P. Georgeff and A. L. Lansky, Procedural knowledge, Proc. IEEE (Special Issue on Knowledge Representation) 74, 1383–1398 (1986).

8. E. Gat, ESL: a language for supporting robust plan execution in embedded autonomous agents, in: Proc. IEEE Aerospace Conf., Aspen, CO, Vol. 1, pp. 319–324 (1997).

9. D. Calisi, A. Farinelli, L. Iocchi and D. Nardi, Multi-objective exploration and search for autonomous rescue robots, J. Field Robotics (Special Issue on Quantitative Performance Evaluation of Robotic and Intelligent Systems) 24, 763–777 (2007).

10. A. Nüchter, O. Wulf, K. Lingemann, J. Hertzberg, B. Wagner and H. Surmann, 3D mapping with semantic knowledge, in: RoboCup 2005: Robot Soccer World Cup IX (2005).

11. A. Rottmann, O. Martínez Mozos, C. Stachniss and W. Burgard, Place classification of indoor environments with mobile robots using boosting, in: Proc. Natl Conf. on Artificial Intelligence, Pittsburgh, PA, pp. 1306–1311 (2005).

12. R. Triebel, P. Pfaff and W. Burgard, Multi-level surface maps for outdoor terrain mapping and loop closing, in: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Beijing, pp. 00–00 (2006).

13. A. Aboshosha and A. Zell, Adaptation of rescue robot behaviour in unknown terrains based on stochastic and fuzzy logic approaches, in: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Las Vegas, NV, Vol. 3, pp. 2859–2864 (2003).

14. C. Dornhege and A. Kleiner, Behavior maps for online planning of obstacle negotiation and climbing on rough terrain, Technical Report 233, University of Freiburg (2007).

15. P. Newman, D. Cole and K. Ho, Outdoor SLAM using visual appearance and laser ranging, in: Proc. IEEE Int. Conf. on Robotics and Automation, Orlando, FL, pp. 1180–1187 (2006).

16. M. Beetz, T. Arbuckle, M. Bennewitz, W. Burgard, A. Cremers, D. Fox, H. Grosskreutz, D. Hahnel and D. Schulz, Integrated plan-based control of autonomous service robots in human environments, IEEE Intell. Syst. 16, 56–65 (2001).

17. G. Grisetti, G. D. Tipaldi, C. Stachniss, W. Burgard and D. Nardi, Speeding up Rao-Blackwellized SLAM, in: Proc. IEEE Int. Conf. on Robotics and Automation, Orlando, FL, pp. 442–447 (2006).

18. D. Calisi, L. Iocchi, D. Nardi, C. M. Scalzo and V. A. Ziparo, Contextual navigation and mapping for rescue robots, in: Proc. IEEE Int. Workshop on Safety, Security & Rescue Robotics, Sendai, pp. 19–24 (2008).

19. B. Yamauchi, A frontier-based approach for autonomous exploration, in: Proc. IEEE Int. Symp. on Computational Intelligence in Robotics and Automation, LOCATION?, pp. 00–00 (1997).

20. A. Jacoff, E. Messina and J. Evans, A standard test course for urban search and rescue robots, in: Proc. Performance Metrics for Intelligent Systems Workshop, Gaithersburg, MD, pp. 499–503 (2000).


About the Authors

Daniele Calisi has been with the SIED Laboratory in Rome since 2004, and since 2006 he has also been a PhD Candidate at the Dipartimento di Informatica e Sistemistica of 'Sapienza' University of Rome. He has worked on many projects regarding robotics, in particular rescue robots, in collaboration with the Italian Ministry of Research and University, the Italian National Research Council and the Italian Ministry of Foreign Affairs. He has been a Visiting Scholar in the Tadokoro Laboratory of Tohoku University, Sendai, Japan, and in the Neuroinformatics Group of the University of Osnabrueck, Germany. His main research topics are robot motion planning, obstacle avoidance, machine learning and software frameworks for robotics.

Luca Iocchi received his Master (Laurea) degree in 1995 and his PhD in 1999 from 'Sapienza' University of Rome. He is currently an Assistant Professor at the Department of Computer and System Science, 'Sapienza' University of Rome, Italy. His main research interests are in the areas of cognitive robotics, action planning, multi-robot coordination, robot perception, robot learning, stereo vision and vision-based applications. He is the author of more than 100 refereed papers in international journals and conferences.

D. Nardi is Full Professor at the Facoltà di Ingegneria, 'Sapienza' University of Rome, Dipartimento di Informatica e Sistemistica. Education: Laurea in Electronic Engineering, Politecnico di Torino, 1981; Master in Computer and System Engineering, 'Sapienza' University of Rome, 1984. His research interests are artificial intelligence, cognitive robotics, and search and rescue robotics. He is a recipient of the 'IJCAI-91 Publisher's Prize' and the 'Intelligenza Artificiale 1993' prize, a Trustee of the RoboCup Federation, and Head of the research laboratory 'Cognitive Robot Teams'.

Gabriele Randelli is a PhD student at the Dipartimento di Informatica e Sistemistica, 'Sapienza' University of Rome, under the supervision of Professor Daniele Nardi, and is a member of the RoCoCo Lab. He obtained his MSc (Magna cum Laude) in May 2007 at Tor Vergata University of Rome. His main research areas are perceptual anchoring problems, context-based systems and human–robot interaction.

Vittorio Amos Ziparo is a Post-Doc at the Dipartimento di Informatica e Sistemistica 'Antonio Ruberti' at 'Sapienza' University of Rome, and is a member of the RoCoCo Lab. His main research interests include cognitive and mobile robotics, multi-robot and multi-agent systems, Petri nets, and game theory.