Utility-driven Evolution Recommender for a Constrained Ontology

Pramod Anantharam
Kno.e.sis Center, Wright State University, USA
[email protected]

Biplav Srivastava
IBM Research - India
[email protected]

Amit Sheth
Kno.e.sis Center, Wright State University, USA
[email protected]

ABSTRACT

Ontology evolution continues to be an important problem that needs further research. Key challenges in ontology evolution and the creation of a highly consumable ontology include accommodating: (a) the subtle changes in the meaning of a model element over time, (b) the changing relevance of various parts of the model to the user, and (c) the complexity of representing time-varying semantics of model elements in a dynamic domain.

In this work, we address the challenge of evolving an ontology to keep up with domain changes while focusing on the utility of its content for relevance and imposing constraints for performance. We propose a novel evidence accumulation framework as a principled approach for ontology evolution, which is sufficiently expressive and semantically clear. Our approach classifies model elements (e.g., concepts) into three categories: definitely relevant (must be included in the ontology), potentially relevant (can be kept as backup), and irrelevant (should be removed). Further, our approach dynamically re-classifies model elements based on external triggers, like evidence, or internal triggers, like the age of a model element in the ontology. As a result, users will have an ontology that is both effective and efficient. We evaluate our approach based on two measures: ontology concept retention and ontology concept placement. This comprehensive evaluation in a single framework is novel, and we show that our approach yields promising results.

Categories and Subject Descriptors

H.1.0 [MODELS AND PRINCIPLES]: General

Keywords

Ontology Evolution, Evidence Accumulation, Ontology Concept Retention, Evolution Strategy, Ontology Concept Placement

1. INTRODUCTION

Domain ontologies and data are increasingly being published on the Linked Open Data (LOD)1 cloud by the semantic web community. The LOD cloud contains over 50 billion facts from diverse domains such as media, government, geography, and life sciences. Some of these domains are very dynamic, requiring hundreds of domain experts to work collaboratively to curate the domain ontology. For instance, GO (Gene Ontology) has thousands of edits every month2. To keep domain ontologies reflective of domain changes, the manual creation, use, and update of domain ontologies should be complemented by machines for scalability and speed.

There have been several research efforts devoted to understanding ontology evolution [8, 9, 13, 18, 25, 31]. Some of the questions central to ontology evolution include: When do we add a new concept to an ontology? Which concepts should be added to or removed from an ontology? Where should we add these new concepts? When do we decide to remove concepts from an ontology? To help answer these questions, we propose a principled approach for the evolution of an ontology, providing a comprehensive framework for assimilating new incoming evidences. Evidences are terms that are representative of a domain, obtained by analyzing resources (e.g., documents) related to that domain. The evolution process is cast in terms of the utility of concepts in the domain subject to ontology constraints. The utility of a concept refers to its importance in a domain; it can be quantified by metrics such as the usage frequency and recency of the concept in the domain, learning costs [18], etc. Ontology constraints can be (i) size constraints (e.g., the number of concepts and relationships) or (ii) semantic constraints (e.g., property restrictions, language expressivity constraints).

A practical question that arises while conceptualizing a domain is, "how comprehensive should the ontology be?" Two of the key concerns when answering this question are:

• An increase in ontology size will slow down inferencing. So, in response, real-world applications may necessitate imposing constraints on the size of the ontology, such as limiting the number of concepts, the expressivity of the representation language, etc.

• An ontology should have concepts that users care about, as concepts go out of vogue over time and may come back later. That is, an ontology should be a representative view of what concepts its users look for.

1 http://linkeddata.org/
2 http://www.ebi.ac.uk/QuickGO/GHistory

Figure 1: Representation of the universe of a domain's concepts (U), ontology (O), residual ontology (Oresidual) and evidence (E). O ∪ Oresidual is tracked by the evolution system while E − (O ∪ Oresidual) represents uncertainty in evidence collection.

If an ontology addresses the above two concerns, it will be appropriately sized (for performance) with relevant concepts (for effectiveness). Given incoming evidence, we place concepts in the universe of a domain (U) in one of three categories: highly relevant, possibly relevant, and irrelevant (Figure 1). Highly relevant concepts are kept in the ontology (O), possibly relevant concepts in the residual ontology (Oresidual), and irrelevant concepts are excluded. Ocore (Ocore ⊂ O) is the core ontology, and we may choose to keep the core ontology intact, i.e., no concepts will be removed from Ocore. We add concepts to Ocore, and may remove only those newly added concepts. If a concept is of high utility, then there will be strong evidence for the concept's inclusion in the ontology. Beyond a certain threshold of evidence, we may choose to move the concept to Ocore, depending on the domain of discourse. A concept may start in one category and, over time, drift to another. For example, the concept of computer has been around for over 50 years, but its associated concepts (e.g., computing trends like thick client or thin client) and properties (e.g., physical attributes like size) have changed. Further, the ontology and residual ontology have constraints to ensure that performance is bounded.
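The three-way placement described above can be sketched as a simple threshold rule over normalized support values. The threshold values and the `classify` helper below are illustrative assumptions, not part of the paper's formal framework:

```python
# Sketch: placing concepts into O (highly relevant), O_residual
# (possibly relevant), or "excluded" (irrelevant) by support value.
# The thresholds 0.7 and 0.3 are arbitrary illustrative choices.

def classify(support, hi=0.7, lo=0.3):
    """Place a concept by its support value in [0, 1]."""
    if support >= hi:
        return "O"            # keep in the ontology
    if support >= lo:
        return "O_residual"   # keep as backup
    return "excluded"         # remove

concepts = {"computer": 0.9, "thick_client": 0.5, "punch_card": 0.1}
placement = {c: classify(s) for c, s in concepts.items()}
# placement: computer -> O, thick_client -> O_residual,
#            punch_card -> excluded
```

A concept's support, and hence its placement, changes over time as evidence accumulates, which is how a concept drifts between categories.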

We evaluate our approach by considering two important factors: (i) concepts retained/added and (ii) concept placement in the hierarchy. Consequently, two novel components of our evaluation are:

Ontology Concept Retention: We evaluate how well the required or obsolete concepts are retained or removed using a dynamic disaster rescue scenario. We extend the RoboCup Rescue3 simulator, which is used for disaster simulation in an urban environment. We introduce different external causes for the injury of a victim, as is usually the case in a realistic disaster setting. For instance, a person may be injured by fire caused by the disaster or by a falling blunt heavy object. We evaluate the retention of relevant concepts in a seed ontology of injury types4.

3 http://sourceforge.net/projects/roborescue/

Ontology Concept Placement: We evaluate our framework by evolving the Smarter Cities Reusable Information model and Business Events (SCRIBE) ontology, which has over 600 concepts and over 50 annotation types [24] and has been widely used in the context of the Smarter Cities project. For clarity, we focus on evolving a subset of the ontology consisting of services offered by various departments in a city, which includes 62 concepts and their associated relationships. We use a well-known evaluation strategy from the incremental ontology evolution literature [11, 5].

We assume that a change in the domain is detectable by observing generated evidences. For example, a shift in concepts in the computer domain is observable by analyzing documents related to computers and generating evidences for the concepts encountered. We assume a canonical evidence representation containing the concepts manifested and the support for their association with the ontology. Evidence processing will either result in an update of support (described in the evidence accumulation section) for existing concepts in the ontology or in new concepts being added to the ontology (ontology expansion), depending on ontology constraints. Our main contributions are as follows:

• We introduce the notion of the utility of a concept in a domain, and ontology constraints, in the context of ontology evolution. We distinguish between concepts that are obsolete and concepts that have stabilized in a domain. Concept utility is quantified as support and used to justify the retention of a concept in the ontology.

• We propose a principled approach for ontology evolution which uses the utility of concepts and ontology constraints to justify changes.

• We show how accumulated evidences are used to decide on concept addition and removal.

• We evaluate the proposed approach on a large real-world ontology.

The rest of the paper is organized as follows: Section 2 summarizes the state of the art in ontology evolution along with its inadequacies. Section 3 motivates the problem and introduces the notations used in the rest of the paper. Section 4 presents the architecture and the algorithmic details of the implemented ontology evolution system. Section 5 discusses the evaluation methodology and the results. Section 6 concludes with pointers for future work.

4 http://bioportal.bioontology.org/ontologies/1484

2. RELATED WORK

Ontology evolution draws on relevant research from augmenting an existing ontology with new incoming concepts and from building an ontology from scratch using a text corpus. Past research has largely focused on a single issue of ontology evolution, such as maintaining the consistency of a knowledge base [13, 7], positioning incoming concepts in an ontology [17], evolving domain hierarchies from text [28], or removing concepts from an ontology [18]. None of these approaches is evaluated using the comprehensive guidelines for ontology evolution [11]. There is a need for a single comprehensive framework that evolves ontologies and hence addresses the issues and questions introduced earlier. We discuss the related work and put our work in perspective according to evolution semantics, methods, and tools.

2.1 Different Semantics of Evolution

Research in this category captures the semantics of evolution by formalizing the details of (1) concept addition and (2) concept removal. Belief-revision-based ontology evolution [13] focuses on maintaining the consistency of a knowledge base while updating it with new incoming concepts. The foundation theory of belief revision deals with verification through justification when new beliefs are added to existing beliefs, e.g., TMS (Truth Maintenance System) [7]. Coherence theories of belief revision focus on the relationships between new and existing beliefs without focusing on verification. Belief-revision-based approaches do not exploit the lexical structure of incoming concept names for suggesting ontology enhancements.

Work on concept removal due to resource constraints [18] shows that reducing the size of the ontology results in a performance gain for a real-time system, as demonstrated in their evaluation using a rescue robot. This work has theoretical underpinnings in concept removal (referred to as forgetting) in Description Logics [27]. Recency and frequency of usage, and concept acquisition costs, are considered before removing any concept from the ontology. However, these approaches do not reflect the usage statistics associated with a specific concept and its ancestors in a logically meaningful way. We propose to update and propagate these usage statistics taking the superclass relationships into account.

2.2 Different Methods Used for Evolution

Ontology evolution methods need to accommodate incoming concepts into an existing ontology. This is done by comparing the incoming concept with the concepts in the ontology at various levels using different methods, categorized as [29]: (1) string-based, (2) corpus-based, and (3) knowledge-based. The following approaches use one or more of these concept comparison methods. Semi-structured (Semantic Web resources) and unstructured (text resources) data are used for the evolution of ontologies in [8]. A semi-automatic extension of an ontology using text mining techniques is carried out on the OpenCyc [1] ontology by extending it with business-related concepts in [17]. Similarly, in [25], sensor observations are clustered and added as new concepts. The concept names (to be added to the ontology) are obtained from search queries on the web, and the concepts found are arranged in a hierarchy using part-of-speech tagging. Evolva [31] uses both string-based and knowledge-based approaches for evolving an input ontology. Work in [15] combines statistics (e.g., co-occurrence analysis) and a knowledge base (WordNet) for ontology evolution using spreading activation theory. Comparison of concepts using a knowledge base like Wikipedia is done in [9], and WordNet is used for the comparison of concepts in [32]. ESA (Explicit Semantic Analysis) [9] motivates the use of Wikipedia for computing semantic relatedness between two concepts. Finding the top-k semantically similar words is done in [29] using (1) string-based, (2) corpus-based, and (3) knowledge-based scores; the individual scores are then aggregated to obtain the final semantic similarity score. However, the selection of weights for combining the three scores is guided by application-specific resources. Ontology evolution using conservative extensions, proposed by [2], maintains consistency during the ontology refinement process. This method is not applicable here since the addition of new relationships (except for subsumption) is out of the scope of the current work.

2.3 Tools for Evolution

Evolva [30], mentioned above, takes an ontology and text resources (e.g., a text corpus, a terms file, or RSS feeds) as input and provides suggestions for ontology evolution based on the relationships between the incoming terms and the concepts in the ontology. It uses an online relation discovery engine called Scarlet [22] to go beyond subsumption relationships. Though Evolva recommends concept additions based on new terms, the role played by the history of evidences (terms) is ignored completely. We propose to use this historical knowledge in answering the questions regarding ontology evolution raised earlier.

In summary, real-world deployments of ontologies result in a dynamic setting. None of these approaches evaluates ontology evolution using a comprehensive evaluation strategy such as ours, where the support information of concepts in the ontology is used for decision making. Though all these approaches recommend concept additions based on new terms, the role played by the history of evidence (terms) is ignored. We evaluate our approach in terms of (1) ontology concept retention and (2) ontology concept placement.

3. PRELIMINARIES

In this section, we provide the motivation for addressing the problem of ontology evolution and the considerations for a principled approach. We then introduce the notations used in the paper.

3.1 Motivation

Our motivation stems from the emerging field of sustainability, also called Smarter Cities or Smarter Planet, where efforts aim to reduce the waste of natural resources like energy, water, and air by building cyber-physical systems. Such systems involve sensory data collection through physical instruments, interconnection and integration of multiple sources, and intelligent analyses of patterns. Here, for a city, the flow of information coming from sensors, citizens, administrative units, etc., needs to be integrated to generate events that trigger city services in which several city departments must coordinate. The data volume, complexity, and heterogeneity are substantial, and the data need to be shared with a diverse stakeholder group: other governmental agencies, individuals (citizens), businesses, and international organizations.

In this context, the SCRIBE [24] effort seeks to semantically model the common information elements and business events, align them to standards, and drive their usage to generate IT solution artifacts (e.g., database schemas, workflows, and visualizations). The novelty in SCRIBE is the process by which we select concepts for modeling by harvesting public documents [3] and reconciling them with standards [24]. The model includes physical objects (geospatially and temporally grounded) in the city, such as landmarks, roads, and sewage networks; entities describing organizations, people, items, and their roles; events based on the Common Alerting Protocol, including external events and messages as well as internal work items and system alerts; metrics called key performance indicators; and the organization of city administration and the services offered. Additional modules include models of weather, geo-spatial features, time, and resources.

Consider the specific case of the departments in a city and the services they offer. The services offered by different departments may change over time: new services may be added by traditional departments, existing services may be removed, new departments may come up with new services, and the nomenclature for services may change. When the SCRIBE ontology is deployed in different cities, there is a need to re-use the part of the ontology that is general enough (the core) and evolve the part that needs extension. As an example, one may want to know the services provided by a water department and may expect all water-related services. However, sewage treatment is sometimes not provided by water departments that focus only on sourcing fresh water, and may instead be offered by a separate sanitation department. Depending on the evidences accumulated, we can recommend adding this new service to the SCRIBE ontology. Instead of adding new concepts in an ad-hoc manner, we add new concepts or remove existing concepts based on the evidences we accumulate. Each addition or removal of a concept is explainable based on the accumulated evidences.

3.2 Terminology and Illustration

In this sub-section, we describe the representation of an ontology consisting of concepts and relationships, along with some constraints (size and semantic constraints). Let O be the ontology under evolution and Oresidual be the ontology with concepts and relationships having insufficient support for inclusion in O. Both O and Oresidual are part of the universal base U. Hence, O ∪ Oresidual ⊂ U. Also, O ∩ Oresidual = ∅.

O = {C, R, KO}, where C = {c1, c2, ..., cn} is a set of concepts in the ontology, R = {r1, r2, ..., rm} is a set of relationships in the ontology, and KO is a set of constraints on the ontology. KO may, for instance, contain size constraints such as |C| ≤ n and |R| ≤ m, where n, m ∈ N bound the number of concepts and relationships in O. Similarly, Oresidual = {Cresidual, Rresidual, Kresidual}, where Cresidual = {c1, c2, ..., cn}, Rresidual = {r1, r2, ..., rm}, and Kresidual can be, for instance, |Cresidual| ≤ n and |Rresidual| ≤ m, where n, m ∈ N bound the number of concepts and relationships in Oresidual. Eincoming = {e1, e2, ..., ek} represents the incoming evidences to be analyzed by the ontology evolution process. Every evidence has the form ei = ≺elabel, wterms,ei, s≻, where elabel is the name of the evidence, wterms,ei is a set of terms associated with elabel (left empty in the examples of Table 1 and Table 2 for simplicity), and s is the support information associated with ei, allowing us to express the confidence associated with the term extraction process (further clarified under evidence accumulation). Every concept ci ∈ C ∪ Cresidual has the form ci = ≺clabel, wterms,ci, s≻, where clabel is the name of the concept (its textual representation), wterms,ci is a set of terms associated with the concept ci, and s is the support information associated with the concept for its inclusion in the ontology (O).

Table 1: Notations and illustration of a sample ontology under evolution

Terminology | Example
O = {C, R, KO} | C = {c1, c2, c3} where c1 = ≺PublicUtilityService, {}, s≻, c2 = ≺WaterService, {}, s≻, c3 = ≺WaterDistributionService, {}, s≻
R | R = {≺c2, c1, subClassOf≻, ≺c3, c2, subClassOf≻}
KO | KO = {Nc ≤ 5}
Oresidual = {Cresidual, Rresidual, Kresidual} |
Cresidual | Cresidual = {}
Rresidual | Rresidual = {}
Kresidual | Kresidual = {Nc ≤ 5}

Table 2: Eincoming containing all the incoming evidences to be processed

Eincoming = {e1, e2, e3, e4}
e1 = ≺WaterTreatmentService, {}, s≻
e2 = ≺WaterTreatment, {}, s≻
e3 = ≺WaterTreatmentPlant, {}, s≻
e4 = ≺WaterBillingService, {}, s≻

Table 2 shows the evidences to be processed by the ontology evolution process. The expected state after processing these evidences is that all the water-related sub-services (all the evidences in the table) are placed as subclasses of WaterService.
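The triple form ≺label, terms, support≻ of Tables 1 and 2 can be mirrored in a small data structure. The class names and the initial support value of 0.5 below are illustrative assumptions, not notation from the paper:

```python
# Sketch: concepts and evidences as <label, terms, support> triples,
# populated with the example ontology O of Table 1 and the incoming
# evidences of Table 2.
from dataclasses import dataclass, field

@dataclass
class Concept:
    label: str
    terms: set = field(default_factory=set)  # w_terms (empty in the tables)
    support: float = 0.5                     # s, initially uncommitted

@dataclass
class Evidence:
    label: str
    terms: set = field(default_factory=set)
    support: float = 1.0                     # extraction confidence

# Ontology O from Table 1: three concepts and subClassOf links
C = [Concept("PublicUtilityService"),
     Concept("WaterService"),
     Concept("WaterDistributionService")]
R = [("WaterService", "PublicUtilityService", "subClassOf"),
     ("WaterDistributionService", "WaterService", "subClassOf")]
K_O = {"max_concepts": 5}                    # K_O: N_c <= 5

# Incoming evidences from Table 2
E_incoming = [Evidence("WaterTreatmentService"),
              Evidence("WaterTreatment"),
              Evidence("WaterTreatmentPlant"),
              Evidence("WaterBillingService")]
```

Processing E_incoming against this O should leave the four water-related services as subclasses of WaterService, subject to the size constraint K_O.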

4. SOLUTION

We present our solution by describing the system architecture and its building blocks. We propose a systematic way of accumulating evidence and of handling concept addition and removal using utility.

We consider a constrained ontology which contains models included from the domain, and a residual ontology with possibly relevant models. Changes to the ontology are triggered by external factors like evidences or internal factors like long periods of stability. To handle these triggers, we take a least-commitment approach. The evidence is checked against the ontology and included in the ontology if the ontology constraints permit expansion. If the constraints in O do not allow it, but the Oresidual constraints allow change (e.g., expansion), the evidence is still included. When the constraints of both the ontology and the residual ontology disallow expansion, a cost-benefit analysis is done for the full formal representation, and a concept is removed (from O or Oresidual) if it has lower support than the incoming evidence. If there is no evidence, cost-benefit analysis is still done periodically to ensure the relevance of the concepts in the ontology and the residual ontology.

4.1 Architecture

The system architecture of the ontology evolution process is shown in Figure 2. The two important components of the evolution process are (i) the evidence manager and (ii) the ontology manager.

Figure 2: Architecture of the SCRIBE ontology evolution process

The Evidence Manager performs the following functions for every piece of incoming evidence: (i) compute the relatedness of the incoming evidence to the existing concepts in the ontology; (ii) update and propagate support information for all the matching concepts resulting from the previous step. For comparing evidences and concepts, techniques such as (a) string-based (e.g., Monge-Elkan distance [16], edit distance), (b) context-based (e.g., the Jaccard coefficient between wterms,ci and wterms,ei), (c) corpus-based (e.g., frequency of terms), and (d) knowledge-based techniques (e.g., WordNet, Wikipedia) can be used. For every incoming evidence, the evidence manager decides between a support update and an ontology expansion based on the comparison of the incoming evidence with existing ontology concepts, and invokes the Ontology Manager.
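As a rough sketch of this comparison step, an evidence label can be matched against a concept label by combining a normalized string similarity with a Jaccard coefficient over the labels' constituent terms. The equal weighting and the camel-case splitter below are assumptions; the paper leaves the choice of techniques and weights open, and also allows corpus- and knowledge-based scores:

```python
# Sketch: relatedness of an evidence label to a concept label, mixing
# a string-based score with a context-based (Jaccard) score.
import re
from difflib import SequenceMatcher

def split_camel(label):
    """Split a CamelCase label into its lowercase constituent terms."""
    return set(w.lower() for w in re.findall(r"[A-Z][a-z]*|[a-z]+", label))

def string_sim(a, b):
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def jaccard(a, b):
    """Jaccard coefficient between two term sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def relatedness(evidence_label, concept_label):
    s = string_sim(evidence_label, concept_label)
    j = jaccard(split_camel(evidence_label), split_camel(concept_label))
    return (s + j) / 2          # equal weights: an assumption

score = relatedness("WaterTreatmentService", "WaterService")
```

A high score would trigger a support update for the matched concept; a uniformly low score across all concepts would instead suggest ontology expansion.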

The Ontology Manager is responsible for making changes to the ontology. It moves concepts between O and Oresidual based on the support information associated with them. If there is sufficient support for a concept's association with the ontology, the concept stays in O; otherwise, it is moved to Oresidual. Concepts in Oresidual whose support rises above the threshold (fixed based on the tolerable size of the ontology O; the lower the threshold, the higher the number of concepts in O) are moved to O. Both operations are bounded by ontology constraints.
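A minimal sketch of this move logic, assuming a single support threshold and a size constraint on O (both values illustrative, as is the `rebalance` helper):

```python
# Sketch: the Ontology Manager's demote/promote rule. Concepts whose
# support falls below the threshold are demoted from O to O_residual;
# residual concepts above it are promoted, subject to the size
# constraint on O.

def rebalance(O, O_residual, threshold=0.6, max_size=5):
    """O, O_residual: dicts mapping concept label -> support in [0, 1]."""
    for c in [c for c, s in O.items() if s < threshold]:
        O_residual[c] = O.pop(c)                 # demote
    promotable = sorted((c for c, s in O_residual.items() if s >= threshold),
                        key=lambda c: -O_residual[c])
    for c in promotable:                         # strongest support first
        if len(O) >= max_size:                   # respect the constraint K_O
            break
        O[c] = O_residual.pop(c)                 # promote
    return O, O_residual

O = {"WaterService": 0.9, "SteamService": 0.2}
Or = {"SewageTreatmentService": 0.8}
O, Or = rebalance(O, Or)
# SteamService is demoted to the residual ontology;
# SewageTreatmentService is promoted into O.
```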

4.2 Solution Building Blocks

To build the solution, we need to devise how evidence will be accumulated and define the utility of concepts.

4.2.1 Evidence Accumulation

As evidence arrives over time justifying a particular concept in the domain, we need a method to accumulate the presence or absence of evidence for the concept. The absence of evidence for a concept acts as a negative indication of the concept's association with the domain. Here are a few desiderata for such a method: (i) the evidence accumulation framework should be able to express the presence or absence of support for a concept in terms of positive and negative evidences; (ii) changes to the ontology should not be abrupt (e.g., the core ontology should not be changed); (iii) the initialization of support values should be intuitive; and (iv) support values should be normalized so that we can compare the support values associated with different concepts.

A survey of evidence accumulation approaches, covering binary-valued logics [21], the one-number Bayesian approach [19], Dempster-Shafer theory [6, 23], and the two-number Bayesian approach [20, 14], is presented in [26]. The study concludes with the "Peirce-Keynes Thesis," which states that "For a general-purpose system to base its beliefs on available evidence and to process novel evidence in real time, it is necessary to use two numbers to measure the system's degree of belief." Along these lines, we adopt the beta distribution function used in the trust and reputation systems literature [10, 4, 12], which satisfies all the above criteria for an evidence accumulation framework. It has the following desirable properties: (i) it has two shape parameters, α and β, where α counts the number of positive evidences and β counts the number of negative evidences; (ii) the change in the mean and variance of a beta distribution is gradual; (iii) the support value is initialized such that a concept is equally likely to be in or out of the ontology; and (iv) the support values (e.g., the mean of the distribution along with its variance) are comparable since they range between zero and one.
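A minimal sketch of beta-distribution support with these four properties follows; the `Support` class name is an illustrative assumption. α and β start at 1, giving the uniform Beta(1, 1), whose mean of 0.5 makes a new concept equally likely to be in or out of the ontology:

```python
# Sketch: beta-distribution support. alpha counts positive evidences,
# beta counts negative ones; the mean serves as a normalized support
# value and the variance as an (inverse) confidence measure.

class Support:
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta      # Beta(1, 1): uniform prior

    def observe(self, positive):
        """Accumulate one piece of evidence; changes are gradual."""
        if positive:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def mean(self):                  # in [0, 1], so comparable across concepts
        return self.alpha / (self.alpha + self.beta)

    @property
    def variance(self):              # low variance = high confidence
        a, b = self.alpha, self.beta
        return a * b / ((a + b) ** 2 * (a + b + 1))

s = Support()
for _ in range(8):
    s.observe(True)                  # eight positive evidences
s.observe(False)                     # one negative evidence
# s is now Beta(9, 2): mean 9/11, and its variance has shrunk well
# below that of the Beta(1, 1) prior.
```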

Support processing consists of components that realize thesedesirable properties:

Basic Support Update: When the incoming evidence matches a specific concept, we strengthen the support associated with the concept in the ontology. The support update process accumulates both a quantitative measure, such as the number of times evidence for a concept was found, and a qualitative measure, such as the name of the evidence that caused the quantitative update. This can be used to explain the changes made to the ontology by the ontology manager.

Support and Update Propagation: Specific concepts may provide evidence for more general concepts; e.g., "traffic management" provides evidence for "traffic". However, more general concepts may not provide evidence for more specific concepts; e.g., the presence of "department" may not provide evidence for "police department". Thus, this step propagates the evidence strength from more specific concepts to more general ones, avoiding situations where a concept loses support (and hence is removed) while its children (sub-classes) have sufficient support for retention.
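This propagation step can be sketched as walking the subClassOf chain from the matched concept up to the root, crediting each ancestor. The parent map and the simple positive-evidence counter below are illustrative assumptions:

```python
# Sketch: upward propagation of a positive observation, so a general
# concept (e.g. "Traffic") never ends up with less accumulated
# evidence than its specific sub-concepts (e.g. "TrafficManagement").

def propagate(concept, parents, counts):
    """parents: child -> parent (None at the root);
    counts: label -> positive-evidence count."""
    node = concept
    while node is not None:
        counts[node] = counts.get(node, 0) + 1   # credit this ancestor
        node = parents.get(node)                 # climb toward the root

parents = {"TrafficManagement": "Traffic", "Traffic": None}
counts = {}
propagate("TrafficManagement", parents, counts)
# counts == {"TrafficManagement": 1, "Traffic": 1}
```

Note that propagation is one-way: an observation of "Traffic" alone would credit only "Traffic", never its sub-classes.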

4.2.2 Utility of Concepts

For defining utility, we base our discussion on the evaluation of ontology evolution [11]. We need to summarize additional information about the ontology, such as the support for a concept's inclusion in the ontology. Two criteria for ontology evolution are [11]: (1) domain context and (2) usage context. Domain context captures how well an ontology reflects a domain, and usage context captures the interaction of users with the ontology, or specifically, how well the concepts in the ontology are utilized.

We formulate a context space containing support information χ and map the ontology elements O ∪ Oresidual to the context space, called the ontology rating annotation [11]: r: O ∪ Oresidual → χ

An ontology change operation (OCO) is defined as the implementation of changes to the ontology (e.g., by the OntologyManager in the architecture). The ontology is changed based on an optimization function that maximizes confidence using the accumulated evidence. For the Beta distribution function used in evidence accumulation, confidence is expressed in terms of the variance of the distribution. Our goal is to maximize confidence and hence minimize variance:

min_{oco ∈ OCO}  Σ_{C ∈ O ∪ Oresidual} var[C],   where  var[C] = αβ / ((α + β)² (α + β + 1))

where OCO is the set of possible changes to the ontology. Recall that α and β summarize the support for each concept in the ontology, as described in the evidence accumulation framework.

The OCO in our approach moves concepts between O and Oresidual; the ontology manager performs the rearrangement that results in the lowest variance (highest confidence).
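Selecting the rearrangement with the lowest total variance can be sketched as follows (the candidate generation and the names are assumptions for illustration; in the paper this role is played by the OntologyManager):

```python
# Sketch: among candidate rearrangements of O ∪ Oresidual, pick the one
# whose retained concepts have the lowest summed Beta variance, i.e.,
# the highest overall confidence.

def beta_variance(alpha, beta):
    # var[C] = αβ / ((α+β)² (α+β+1)) for a Beta(α, β) support.
    return (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))

def total_variance(arrangement):
    # `arrangement` maps each retained concept to its (α, β) support.
    return sum(beta_variance(a, b) for a, b in arrangement.values())

def best_change(candidates):
    # Each candidate is one possible rearrangement of the concepts.
    return min(candidates, key=total_variance)

well_supported = {"traffic": (5, 1), "fire": (4, 1)}
weakly_supported = {"traffic": (5, 1), "water": (1, 1)}
chosen = best_change([weakly_supported, well_supported])
# `well_supported` wins: its concepts carry more accumulated evidence,
# hence lower variance and higher confidence.
```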

Addition of Concepts: Every concept addition is justified based on the support for its inclusion in the domain ontology. The uniqueness of our approach is that we quantify support with a qualitative explanation for the addition of concepts to the ontology. When incoming evidence does not match any existing concept, instead of being ignored, it is retained in the ontology if the constraints allow us to expand the ontology.

Removal of Concepts: Absence of evidence for a concept during ontology evolution may be due to: (i) reduced importance of the concept in the domain, or (ii) the concept having reached a stable state. To distinguish between these two cases, we use the propagation of evidence from the children to all of their ancestors. By doing this, we account for both the explicit and the implicit presence of concepts. The support associated with concepts in the ontology may be reduced due to (i) the absence of evidence for a concept in the domain, or (ii) the frequency and recency of evidence for a concept in the domain. Thus, concepts with lower support end up being removed while stabilized concepts are retained.

Learning Strategy: The rate of addition and removal of concepts depends on the learning strategy. With an aggressive learning strategy, there is no limit on new concepts being added or removed. This allows the knowledge base to learn any new concept quickly and results in rapid expansion of the ontology. At the same time, any concept can be removed from the ontology at any level, reducing its size. With a conservative learning strategy, the addition or removal of concepts is done only after sufficient accumulation of evidence (higher thresholds). This allows the ontology to remain in a stable state without drastic changes in its size and concepts.
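A minimal sketch of how thresholds might encode the two strategies; the numeric values and names are illustrative assumptions, not taken from the paper:

```python
# A conservative strategy demands more accumulated evidence (a higher
# bar to add, a lower bar to remove) before changing the ontology.

STRATEGIES = {
    "aggressive":   {"theta_add": 0.5, "theta_remove": 0.5},
    "conservative": {"theta_add": 0.8, "theta_remove": 0.2},
}

def decide(mean_support, strategy):
    """Map a concept's mean support to an evolution action."""
    t = STRATEGIES[strategy]
    if mean_support >= t["theta_add"]:
        return "add_or_keep"
    if mean_support < t["theta_remove"]:
        return "remove"
    return "hold"   # park in Oresidual until more evidence accumulates

decide(0.6, "aggressive")     # acts on moderate evidence immediately
decide(0.6, "conservative")   # waits for stronger evidence ("hold")
```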

4.2.3 Algorithms
We now present the algorithms used in our system for clarity, along with a running example with incoming evidence e1 taken from Table 2 and the ontology being evolved from Table 1. We recognize that various choices can be made in terms of the specific techniques used in the algorithms, as discussed in Table 3 at the end.

Algorithm 1 UtilityDrivenOntologyEvolver(O, Eincoming, Match())
Require: φm, θO
Ensure: Evolved ontology O, and Oresidual along with updated support information.
1: Global Oresidual = {}
2: for every incoming evidence, ej ∈ Eincoming do
3:   if Match(ej, ci, φm) where ∀ci ∈ C ∨ Cresidual then
4:     SupportUpdate(O, ej, Match())
5:   else
6:     TryOntologyExpansion(O, ej, Match())
7:   end if
8: end for
9: // Periodically call these two functions
10: UpdateOntology(O, θO)
11: ManageUtilityOfConcepts(O, Eincoming)

Algorithm 1 shows the ontology evolution process at an abstract level. This function invokes SupportUpdate or TryOntologyExpansion depending on the extent of the match between incoming evidence and the concepts in the ontology. φm refers to the matching threshold used by the matching function (which returns a boolean). θO refers to the minimum support required for a concept to enter the ontology O. θO should be chosen to strike a balance between precision, recall, and the size of the ontology. For example, if e1 = ⟨WaterTreatmentService, {}, s⟩ does not match any concept in the ontology above the threshold φm, and if the constraints on the ontology allow expansion, TryOntologyExpansion is invoked with e1, O, and the match function. The complexity of this algorithm grows linearly with the number of concepts in the ontology since each evidence is compared with all the concepts in the ontology.

Algorithm 2 SupportUpdate(O, ej, Match())
Require: φm
Ensure: ∀ci = ⟨clabel, wterms,ci, s⟩ ∈ O, update s, where s is the support information associated with the concept's inclusion in the ontology O.
1: for every ci ∈ O do
2:   if Match(ej, ci, φm) then
3:     update s in ci = ⟨clabel, wterms,ci, s⟩ by incrementing α
4:     propagate support to parents of ci
5:   end if
6: end for

Algorithm 2 is invoked based on the matching result between evidence and ontology concepts in Algorithm 1. The matching threshold φm (determined based on the expected size of the ontology O) may apply to simple string matching as well as semantic matching. Our focus is on the evidence and utility aspects of evolution rather than on proposing a new semantic matching algorithm. Every concept in O and Oresidual has support s, which consists of two numbers, α and β, the shape parameters of a Beta distribution function. The support information enables the ontology evolution system to explain the changes made to the ontology.
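The thresholded boolean interface of Match() can be sketched as follows; the paper's implementation uses the Monge-Elkan distance (Table 3), for which Python's `difflib` similarity ratio stands in here purely to illustrate the interface:

```python
# Minimal stand-in for Match(e_j, c_i, φ_m): returns True when the
# string similarity of the two labels meets the threshold φ_m.

from difflib import SequenceMatcher

def match(evidence_label, concept_label, phi_m=0.8):
    """Thresholded boolean match on concept labels."""
    sim = SequenceMatcher(None, evidence_label.lower(),
                          concept_label.lower()).ratio()
    return sim >= phi_m

match("WaterTreatmentService", "WaterService", 0.7)   # shared stem: match
match("WaterTreatmentService", "PoliceDept", 0.7)     # no match
```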

Algorithm 3 is invoked when all the matches between the concepts in the ontology and the incoming evidence are below

Algorithm 3 TryOntologyExpansion(O, ej, Match())
Require: φm
Ensure: Expand O or Oresidual depending on the constraints KO or Kresidual.
1: if SatisfyConstraints(O) then
2:   Expand(ej, O, Match())
3: else if SatisfyConstraints(Oresidual) then
4:   Expand(ej, Oresidual, Match())
5: else
6:   ReAdjustOntology(ej, Oresidual ∪ O)
7: end if

φm. This algorithm verifies the constraints on the ontology before expanding it. If the constraints do not permit expansion, the support information associated with the incoming evidence and the concepts in the ontology are compared. If there is a concept with lower support than that of the evidence, it is removed and the incoming evidence is accommodated. In the example, since the number of concepts in the ontology is three and Nc ≤ 5 is the only constraint (from Table 1), SatisfyConstraints returns true. The Expand function is invoked with e1 and the corresponding ontology (O or Oresidual).

Algorithm 4 Expand(ej, O, Match())
Require: φm
Ensure: expanded ontology O ∪ ej
1: matches = MatchAndReturnConcepts(ej, O, Match())
2: specMatch = MostSpecificConcept(matches)
3: if ej is more specific than specMatch then
4:   Add ej as a child of specMatch
5: else
6:   Add ej as a parent of specMatch
7: end if

Algorithm 4 is responsible for expanding the ontology with new incoming evidence. In order to introduce a new concept into the ontology, we need to find the position where the new concept should be placed. We compare two concepts and determine which one is more general/specific than the other. The more specific concept becomes the child of the more general concept. Upon invocation of MatchAndReturnConcepts with e1 and O (shown in Table 1), the call returns all the matching concepts present in O. In this case, the only matching concept is WaterService. Since there is only one matching concept, the MostSpecificConcept invocation simply returns WaterService. specMatch contains the concept WaterService, which is compared with the evidence e1, WaterTreatmentService. WaterTreatmentService is more specific than WaterService (e.g., based on string length), and hence e1 is added as a child of WaterService.
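The specificity test in Algorithm 4 can be sketched with the string-length heuristic that Table 3 lists as the choice actually used (the function name is ours):

```python
# Heuristic from Table 3: of two matching labels, the longer string is
# taken to be the more specific concept and becomes the child.

def more_specific(label_a, label_b):
    """Return the label judged more specific (the longer string)."""
    return label_a if len(label_a) >= len(label_b) else label_b

child = more_specific("WaterTreatmentService", "WaterService")
# "WaterTreatmentService" is the longer label, so it is attached as a
# child of "WaterService" in the expanded ontology.
```

String composition (checking whether one label contains the other) is the alternative Table 3 mentions; the length heuristic is cheaper but coarser.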

Algorithm 5 ReAdjustOntology(ej, O ∪ Oresidual)
Require: φm
Ensure: re-adjusted ontology O ∪ Oresidual
1: if ∃ci ∈ O ∪ Oresidual such that support of ci < support of ej then
2:   replace ci by ej
3: end if

Algorithm 5 is invoked when the ontology constraints do not permit expansion. This algorithm checks for concepts with lower support than the incoming evidence. If there are many such concepts, the concept with the least support is removed and the incoming evidence with higher support is accommodated.

Algorithm 6 ManageUtilityOfConcepts(O ∪ Oresidual, Eincoming)
Require: Eincoming, Cu, φm
Ensure: ∀ci = ⟨clabel, wterms,ci, s⟩ ∈ O ∪ Oresidual not matching any ej = ⟨elabel, wterms,ej, s⟩ ∈ Eincoming, decrease support s
1: for ci ∈ O ∪ Oresidual do
2:   for ∀ej ∈ Eincoming with Match(ej, ci, φm) returning false do
3:     Decrease support s in ci = ⟨clabel, wterms,ci, s⟩
4:   end for
5: end for

Table 3: Options and choices for implementation

No.  Option                      Choices                                         Used?
1.   Support representation (s)  number
                                 Special representation with α, β, explanations  √
2.   Matching                    Monge-Elkan distance                            √
                                 Edit distance
                                 Knowledge-base-based matching
3.   Strengthen support          Increment α                                     √
                                 Decrement β
4.   Decay of support            Increment β                                     √
                                 Decrement α
                                 Decay α exponentially
5.   Find specific concept       String length                                   √
                                 String composition

Algorithm 6 looks for concepts that do not have any supporting evidence during the ontology evolution process. To indicate the absence of support, we increase β by one for each of these concepts. This serves as a way of tracking concepts that may have lost their importance in the domain. We can also use the recency and frequency of evidence found for a concept, as mentioned in [18].

Algorithm 7 UpdateOntology(U, θO)
// Ontology Manager for updating the ontology
Require: U, θO (threshold support for a concept to stay in the ontology O)
Ensure: The ontology change operation (oco) is chosen as discussed in the utility section based on minimum variance (i.e., maximum confidence)
1: for every concept ci = ⟨clabel, wterms,ci, s⟩ ∈ Cu do
2:   if s ≥ θO and ci ∉ C then
3:     Cresidual = Cresidual - ci
4:     C = C ∪ ci
5:   else if s < θO and ci ∈ C then
6:     C = C - ci
7:     Cresidual = Cresidual ∪ ci
8:   end if
9: end for

Algorithm 7 is used to update the ontology. Cu ranges over all the concepts in O and Oresidual. Concepts with lower support are moved out of O. This algorithm searches for a concept with lower support than the incoming evidence, resulting in a worst-case complexity linear in the size (number of concepts) of the ontology. We demonstrated the processing of a single evidence e1 for clarity; the rest of the evidence is processed in a similar manner.

Table 3 summarizes the rationale for choosing one technique over another; these choices would vary depending on the context in which the implementation is carried out. The support representation choice has been explained in the evidence accumulation section. The decay of support is the same as the case of absence of evidence explained in evidence accumulation.

5. EVALUATION
We carry out a comprehensive evaluation of the proposed approach in two steps: (1) ontology concept retention, and (2) ontology concept placement. The first step evaluates the retention of relevant concepts in the ontology. The second step evaluates how well the concepts are placed within the ontology. We describe each of these in detail in the next two subsections.

5.1 Ontology Concept Retention
A good evaluation of our approach should scrutinize how well (1) relevant concepts are retained, and (2) irrelevant concepts are removed. Both can be verified in a context where the ontology is used for decision making. When both issues are addressed properly, the decisions made using the evolved ontology should lead to better outcomes (e.g., civilian lives saved in the context of a rescue scenario). Ontology-based decision making and dynamic domain changes are natural in a disaster scenario. We use a simulation environment called RoboCup Rescue5, a fine-grained disaster simulator in an urban setting. The simulator allows for setting the number of civilians, rescue units, wind direction, road blockages, etc. There are four types of agents in the simulation: (1) ambulance agent, (2) police agent, (3) fire fighting agent, and (4) civilian agent. We focus only on the ambulance and civilian agents. We extend the simulator by introducing different types of injuries that a civilian agent may have in a disaster scenario. We use vocabulary from the International Classification of External Causes of Injuries6 with over 2000 injury types. Each ambulance agent has access to an ontology which holds the knowledge of injury types (from the vocabulary mentioned above). We use our approach to evolve the knowledge of the ambulance agent. When an ambulance agent encounters a civilian agent, the injury type of the civilian agent is passed on to the ambulance agent. The ambulance agent queries its local ontology with the injury type and treats the civilian only if it finds the injury type in the local ontology; e.g., an ambulance will treat a civilian with a burn injury only if it has burn injury in its local ontology, else it queries a global ontology (the local ontology is initialized with random concepts from the global ontology) to retrieve the injury type (burn injury) before treating the civilian. Thus, ambulance agents with fewer relevant injury types in the local ontology have to query the global ontology (which has over 2000 concepts), leading to increased processing time. If the local ontology has mostly relevant concepts (unused concepts removed), then the processing by an ambulance agent will be faster. Removal of relevant concepts in a particular context will penalize the ambulance agent.

5.1.1 Explanation Of Results
A plot of the number of civilians alive versus time is shown in Figure 3. WOC stands for Without Controller. Each agent has a controller for carrying out its actions according to an algorithm provided by the simulator. Without a controller, each agent would remain static and hence incur greater damage due to the disaster (worst case scenario). WITHOUTONTEVOL is the case where the ambulance agent does not evolve its local knowledge (baseline). WONTEVOL stands

5 http://www.robocuprescue.org/
6 http://bioportal.bioontology.org/ontologies/1484

Figure 3: Plot of civilians alive vs. time throughoutthe simulation for (a) without controller for eachagent, (b) with ontology evolution, and (c) withoutontology evolution

for With ONTology EVOLution, i.e., the ambulance agent evolves its local ontology with the concepts of injuries encountered while rescuing civilians. The graph shows that more civilians were alive after the simulation that evolves the local ontology of the ambulance agent. Specifically, the ambulance agent was able to save 30% more lives with evolution compared to the case without evolution.

5.1.2 Evaluation Discussion
We have evaluated the solution for answering three of the four questions that we started with: When do we add a new concept to an ontology? What concepts should be added to/removed from an ontology? And when do we decide to remove concepts from an ontology? Following a principled ontology evolution approach based on utility and constraints, new injuries that are not found in the core ontology are added, and unused injuries are removed.

5.2 Ontology Concept Placement
In this part of the evaluation, we use a single SCRIBE ontology fragment for (i) ontology evolution with Evolva and (ii) evaluating our approach when evidence is generated internal and external to the ontology. We validate our approach using a widely accepted evaluation strategy for incremental ontology evolution described in [11, 5], which uses the concepts of Lexical and Taxonomic Precision discussed in Section 5.2.1.

5.2.1 Dataset and Evaluation Strategy
The SCRIBE ontology has over 600 concepts and over 50 annotation types [24]. We used a subset of it (with 62 concepts and corresponding relations) dealing with the city's departments and services for our experimentation.

We present the evaluation in two parts: (1) Using internal evidence, where some of the concepts from the ontology are removed and provided as evidence for the ontology evolution process. The original fragment, before removal of

Figure 4: Ontology Evolution with Internal Evidences

concepts, serves as the reference ontology. (2) Using external evidence generated from 10 public documents dealing with transportation services, as explained in [3]. Here, a commercial unsupervised content extractor is used on public PDFs to extract phrases statistically representative of the documents. Since the documents deal with transportation and police services, they represent concepts in the domain of interest.

For comparing the two ontologies (the evolved and the reference ontology), we evaluate our approach for concept addition and concept position (the location where the concepts are added). These two aspects are quantified using Lexical Precision and Lexical Recall (concept addition), and Taxonomic Precision (concept position) [5].

LexicalPrecision, LP(OC, OR) = |CC ∩ CR| / |CC|

LexicalRecall, LR(OC, OR) = |CC ∩ CR| / |CR|

where CC and CR are the concepts in the computed (OC) and the reference ontology (OR), respectively, and CC ∩ CR represents the concepts present in both the computed and the reference ontology. While lexical precision quantifies the concepts missing from the computed ontology, it does not quantify how correctly the concepts are placed in the ontology. We use Taxonomic Precision to compute how close the computed ontology is to the reference ontology. Taxonomic Precision is computed as

TPsc(OC, OR) = (1 / |CC|) Σ_{c ∈ CC} { tpsc(c, c, OC, OR) if c ∈ CR;  0 if c ∉ CR }

where tpsc is the local taxonomic precision [5], given by

tpsc(c, c, OC, OR) = |sc(c, OC) ∩ sc(c, OR)| / |sc(c, OC)|

Given a concept c ∈ C, where C is the set of all concepts in the ontology O, the semantic cotopy [5] is defined by sc(c, O) = { ci | ci ∈ C ∧ (ci ≤ c ∨ c ≤ ci) }. For external evidence, terms representative of the domain (e.g., obtained by some extraction process) act as evidence. The reference ontology is obtained by human judgement and is compared against the output (computed) ontology from ontology evolution.
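The three measures can be sketched directly from their definitions, representing each ontology as a child-to-parent map (an illustrative encoding, with the root mapped to None):

```python
# Sketch of Lexical Precision/Recall and Taxonomic Precision from [5].

def ancestors(c, parent):
    out = set()
    while parent.get(c) is not None:
        c = parent[c]
        out.add(c)
    return out

def semantic_cotopy(c, parent):
    # sc(c, O): c itself plus everything above or below it.
    descendants = {x for x in parent if c in ancestors(x, parent)}
    return {c} | ancestors(c, parent) | descendants

def lexical_precision(cc, cr):
    return len(cc & cr) / len(cc)

def lexical_recall(cc, cr):
    return len(cc & cr) / len(cr)

def taxonomic_precision(parent_c, parent_r):
    cc, cr = set(parent_c), set(parent_r)
    total = 0.0
    for c in cc:
        if c in cr:  # concepts absent from the reference contribute 0
            sc_c = semantic_cotopy(c, parent_c)
            sc_r = semantic_cotopy(c, parent_r)
            total += len(sc_c & sc_r) / len(sc_c)
    return total / len(cc)

reference = {"Service": None, "WaterService": "Service",
             "PoliceService": "Service",
             "WaterTreatmentService": "WaterService"}
# Same concept set, but WaterTreatmentService is misplaced:
computed = dict(reference, WaterTreatmentService="PoliceService")
lp = lexical_precision(set(computed), set(reference))   # 1.0
tp = taxonomic_precision(computed, reference)           # below 1.0
```

With identical concept sets, LP and LR are both 100% while the misplaced concept lowers taxonomic precision, mirroring how the internal-evidence experiment can score 100% lexically yet 78% taxonomically.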

5.2.2 State-of-the-Art Ontology Evolution Tool
We explain the shortcomings of Evolva for solving the problem of SCRIBE ontology evolution based on incoming evidence: (i) Evolva works well when there is a mapping between the concepts in the ontology and the concepts in the knowledge bases (used by Evolva). Such a mapping generally does not exist, and we found that mapping concepts in the ontology to concepts in other knowledge sources is not a trivial problem. (ii) When the ontology under evolution contains concepts already present in other knowledge bases, good suggestions are made for evolving the ontology. This assumption may not hold in domains with unknown new concepts (beyond the concepts in the knowledge base). (iii) Every extracted term is treated as a concept, but the possibility that it can be evidence for a concept is not considered. (iv) Concept removal cannot be suggested by this tool, since it does not track the utility of concepts in a domain.

5.2.3 Internal Evidence
We remove some concepts from the ontology and provide them as evidence to the ontology evolution process. The ontology evolution process takes this evidence (terms) and the SCRIBE ontology as input (after removing the concepts selected

Figure 5: Ontology Evolution with External Evidences

as evidence), and produces an evolved ontology as output. The evolved ontology is compared against the original ontology (before the removal of concepts), which is the reference ontology. We call the original ontology the reference ontology (OR) and the evolved ontology the computed ontology (OC), as in [5]. The lexical precision and taxonomic precision are computed using the equations mentioned above.

Figure 4 depicts the ontology evolution process. The ontology before evolution is at the top, and the modified ontology accommodating the incoming evidence is shown at the bottom of the figure. The concepts and relationships drawn with dotted lines, along with the rest of the ontology, constitute the reference ontology (OR), which acts as the gold standard. The input ontology excludes the concepts connected by the dotted lines. In the resulting ontology, newly added concepts are highlighted using double-lined concept ovals. Since all incoming evidence appears as concepts in the ontology, we have a Lexical Recall and Precision of 100%. The Taxonomic Precision computed using the equation in Section 5.2.1 is 78%.

5.2.4 External Evidence
In this case, evidence obtained from a process external to the ontology acts as input to the ontology evolution process. The ideal evolution leading to a reference ontology is in this case derived by human judgement, which best describes the desired placement of concepts in the ontology. Taxonomic precision is computed for the reference ontology (OR) and the computed ontology (OC).

Figure 5 depicts the ontology evolution process for the external evidence case. The result of accommodating all the incoming evidence is shown in the output ontology. As we can see, not all incoming evidence appears as concepts in the output ontology, leading to a Lexical Recall of 92%, while the Lexical Precision is still 100%. The lower Lexical Recall may be due to (i) incoming evidence resulting in a support update and hence not directly appearing in the ontology, or (ii) evidence being ignored. The resulting ontology in our experiment has a Taxonomic Precision of 89%.

5.2.5 Evaluation Discussion
We answered the question "Where should we add the new concepts?" in this evaluation. Our principled utility- and constraint-driven approach to ontology evolution resulted in higher Lexical and Taxonomic Precision compared to the state-of-the-art tool. Table 4 summarizes the evaluation. We observed that the matching thresholds used for comparing incoming evidence and concepts in the ontology influence Lexical and Taxonomic Precision. Increasing the threshold resulted in higher precision and lower recall, while decreasing it resulted in lower precision and higher recall. We would like to explore this relationship in future work. Snapshots of the ontology evolution process can be found on the resource page7.

6. CONCLUSION AND FUTURE WORK
In this paper, we proposed a principled approach for ontology evolution guided by its constraints for performance and the utility of its content for relevance. Changes can be triggered by external evidence or internal factors. We proposed an evidence accumulation framework with a two-number Bayesian representation (α and β) borrowed from the trust literature, which allows us to be mathematically precise, computationally light, and intuitively satisfactory. We proposed

7 http://wiki.knoesis.org/index.php/Ontology Evolution

Table 4: Evaluation summary of our approach for the internal and external evidence cases. Evolva could not process the same evidence used for evaluating our approach.

Evidence type      LP    LR    TP   Comments
Internal Evidence  100%  100%  78%  All expected concepts were included, but not all at the right place.
External Evidence  100%  92%   89%  Not all expected concepts were included, hence lower LR.

a comprehensive evaluation strategy for concept retention and concept placement. The evaluation of our framework gave us insights into a realistic ontology evolution setting where an ontology is used for decision making. Concept retention resulted in relevant concepts being retained in the ontology, with more lives saved in a disaster simulation. Concept placement was evaluated on the SCRIBE ontology with promising results. As future work, the current approach can be extended by exploring the options sketched in Table 3. One can also extend this approach to other domains and evidence evaluation metrics (like the cost of obtaining evidence).

7. ACKNOWLEDGMENTS
We would like to thank Rosaro Uceda-Sosa and Bob Schloss for their ideas on shaping the SCRIBE ontology and its evolution, and Dr. T. K. Prasad for his insights into the trust literature that motivated our evidence management scheme.

8. REFERENCES
[1] OpenCyc. http://www.opencyc.org/, 2006.

[2] G. Antoniou and A. Kehagias. On the refinement of ontologies. International Journal of Intelligent Systems, 15:2000, 2000.

[3] S. Bhat, K. Brown, A. Jain, B. Joshi, S. Tamilselvam, B. Srivastava, and T. White. A business content explorer for intelligent traffic projects. In Proc. IEEE ITSC 2011, Washington DC, USA, Oct 5-7, 2011.

[4] A. Jøsang and R. Ismail. The beta reputation system. In Proc. 15th Bled Electronic Commerce Conf., 2002.

[5] K. Dellschaft and S. Staab. Strategies for the evaluation of ontology learning. In P. Buitelaar and P. Cimiano, editors, Bridging the Gap between Text and Knowledge: Selected Contributions to Ontology Learning and Population from Text, Amsterdam, 2008. IOS Press.

[6] A. Dempster. Upper and lower probabilities induced by a multivalued mapping. In R. Yager and L. Liu, editors, Classic Works of the Dempster-Shafer Theory of Belief Functions, volume 219 of Studies in Fuzziness and Soft Computing, pages 57–72. Springer Berlin / Heidelberg, 2008.

[7] J. Doyle. A truth maintenance system. In Readings in Nonmonotonic Reasoning, pages 259–279, San Francisco, USA, 1987. Morgan Kaufmann Publishers.

[8] M. Fernandez, Z. Zhang, V. Lopez, V. Uren, and E. Motta. Ontology augmentation combining semantic web and text resources. In Intl. Conf. Knowledge Capture, 2011.

[9] E. Gabrilovich and S. Markovitch. Computing semantic relatedness using Wikipedia-based explicit semantic analysis. In Proc. 20th IJCAI, pages 1606–1611, San Francisco, CA, USA, 2007.

[10] S. Ganeriwal and M. B. Srivastava. Reputation-based framework for high integrity sensor networks. In Proc. 2nd ACM Work. SASN, pages 66–77, New York, NY, USA, 2004. ACM.

[11] P. Haase and Y. Sure. Incremental ontology evolution - evaluation. SEKT deliverable d3, 1, 2005.

[12] A. Jøsang, R. Ismail, and C. Boyd. A survey of trust and reputation systems for online service provision. Decis. Support Syst., 43:618–644, March 2007.

[13] S. H. Kang and S. K. Lau. Ontology Revision Usingthe Concept of Belief Revision. Proc. 8th Intl. Conf.KES-04, 2004.

[14] J. M. Keynes. A Treatise on Probability. Macmillan, London, 1921.

[15] W. Liu, A. Weichselbraun, A. Scharl, and E. Chang.Semi-automatic ontology extension using spreadingactivation, 2005.

[16] A. Monge and C. Elkan. The field matching problem: Algorithms and applications. In Proc. Intl. Conf. KDD, pages 267–270, 1996.

[17] I. Novalija and D. Mladenic. Ontology extensiontowards analysis of business news. Informatica(Slovenia), 34(4):517–522, 2010.

[18] H. S. Packer, N. Gibbins, and N. R. Jennings. An on-line algorithm for semantic forgetting. In IJCAI, pages 2704–2709, 2011.

[19] J. Pearl. Probabilistic reasoning in intelligent systems:networks of plausible inference. Morgan KaufmannPublishers Inc., San Francisco, CA, USA, 1988.

[20] C. Peirce. The probability of induction. PopularScience Monthly, 12:705–718, 1878.

[21] R. Reiter. Nonmonotonic reasoning. Annual Review of Computer Science, 2:147–186, 1987.

[22] M. Sabou, M. d’Aquin, and E. Motta. Scarlet:Semantic relation discovery by harvesting onlineontologies. In S. Bechhofer, M. Hauswirth,J. Hoffmann, and M. Koubarakis, editors, ESWC,volume 5021 of Lecture Notes in Computer Science,pages 854–858. Springer, 2008.

[23] G. Shafer. A Mathematical Theory of Evidence, volume 76. Princeton University Press, Princeton, 1976.

[24] R. Usceda-Sosa, B. Srivastava, and R. Schloss.Building a highly consumable semantic model forsmarter cities. In IJCAI Work. AI for an IntelligentPlanet, Barcelona, Spain, 2011.

[25] G. D. Vries, M. V. Someren, P. Adriaans, andG. Schreiber. Semi-automatic ontology extension inthe maritime domain. Lighthouse, 2008.

[26] P. Wang and U. Schmid. Formalization of evidence: Acomparative study. Journal of Artificial GeneralIntelligence, 2009.

[27] Z. Wang, K. Wang, R. Topor, and J. Z. Pan.Forgetting concepts in dl-lite. In Proc. 5th ESWC,pages 245–257, Berlin, Heidelberg, 2008.Springer-Verlag.

[28] F. Wu and D. S. Weld. Automatically refining the Wikipedia infobox ontology. In Proc. 17th WWW Conf., pages 635–644, New York, NY, USA, 2008. ACM.

[29] Z. Yang and M. Kitsuregawa. Efficient searching top-k semantic similar words. In IJCAI, pages 2373–2378, 2011.

[30] F. Zablith. Evolva: A comprehensive approach toontology evolution. In ESWC, pages 944–948, 2009.

[31] F. Zablith, M. Sabou, M. D’Aquin, and E. Motta.Ontology evolution with evolva. In Proc. 6th ESWC,pages 908–912, Berlin, Heidelberg, 2009.

[32] F. Zablith, M. Sabou, and E. Motta. Using background knowledge for ontology evolution. In Proc. ISWC Intl. Work. Ontology Dynamics (IWOD), 2008.