Representing, Learning and Extracting Temporal Knowledge from Neural Networks: A Case Study



Rafael V. Borges1, Artur d’Avila Garcez1, and Luis C. Lamb2

1 Department of Computing, City University London
2 Institute of Informatics, Federal University of Rio Grande do Sul

Abstract. The integration of knowledge representation, reasoning and learning into a robust and computationally effective model is a key challenge in Artificial Intelligence. Temporal models are fundamental to describe the behaviour of computing and information systems. In addition, acquiring the description of the desired behaviour of a system is a complex task in several AI domains. In this paper, we evaluate a neural framework capable of adapting temporal models according to properties, and also learning through observation of examples. In this framework, a symbolically described model is translated into a recurrent neural network, and algorithms are proposed to integrate learning, both from examples and from properties. In the end, the knowledge is again symbolically represented, incorporating both initial model and learned specification, as shown by our case study. The case study illustrates how the integration of methodologies and principles from distinct AI areas can be relevant to build robust intelligent systems.

Keywords: Neural Symbolic Integration, Temporal Learning, Temporal Models.

1 Introduction

The integration of knowledge representation, reasoning and learning in a robust and computationally effective intelligent platform is one of the key challenges in Computer Science and Artificial Intelligence (AI) [4]. The representation and learning of temporal models in Software Engineering (SE) is an ongoing research endeavour, with several applications widely used in industry [3,5]. Integrating these different dimensions of temporal knowledge aims not only at responding to the challenge put forward in [4], but also at developing a clear abstract representation of dynamic systems, complementing incomplete specifications with observed examples of a system's behaviour. Further, the availability of information about desired properties in a system allows automated evolution of its description, optimizing the processes of specification and verification.

This paper describes a framework that robustly integrates different sources of temporal knowledge: (i) the symbolic knowledge model, (ii) learned observed examples of the system's behaviour, and (iii) an abstract description of properties to be satisfied by the system specification. The paper builds upon principles from two recent applications of machine learning: the first consists in learning an abstract description of a

K. Diamantaras, W. Duch, L.S. Iliadis (Eds.): ICANN 2010, Part II, LNCS 6353, pp. 104–113, 2010. © Springer-Verlag Berlin Heidelberg 2010


system through the observation of examples of its behaviour [12]; the other consists in the evolution of a specification through the use of examples or abstract descriptions of a system's desired behaviour [1]. Unifying both ideas allows reasoning and adaptation to be integrated into several applications regarding the specification of temporal models.

The paper will focus on the evaluation of the framework on a benchmark Software Engineering case study. In the framework, symbolic knowledge is represented by a fragment of a temporal logic, then computed and learned through a connectionist engine. This leads to the construction of an effective, intelligent structure that can be used in the process of development and analysis of a general class of systems. The case study will illustrate the effectiveness of the framework, evaluate its performance in integrating different sources of information, and its applicability to learning from examples and properties, where reasoning and learning are used to evolve temporal models. The paper is organized as follows. Section 2 contains background material. Section 3 describes the temporal reasoning and learning framework. Section 4 discusses the case study in detail. Section 5 concludes and discusses future work.

2 Preliminaries: Temporal Learning and Reasoning

Temporal logics have been highly successful in the representation of temporal knowledge about computing systems [5]. For example, LTL (Linear Temporal Logic) and CTL (Computation Tree Logic) are broadly used in computer science to analyze models and properties of a system [3,5]. However, adding a temporal dimension to knowledge models imposes several challenges on the learning task. Symbolic learning systems such as Inductive Logic Programming (ILP) [11] can in principle be adapted for application in temporal domains, but are considered too brittle for such a task [10]. Neural network learning presents itself as an alternative, where a quantitative approach is used in the learning task, which can then be applied to temporal learning through the use of recurrent networks or by incorporating memory into the networks [8,9].

One traditional approach to build unified reasoning and learning systems is by translating knowledge from one representation to another. For instance, initial knowledge represented by a symbolic language can be translated into a semantically equivalent neural network. This target network can then be subject to learning through the presentation of examples. In turn, one can then explain the learned knowledge by extracting knowledge from the network into a symbolic representation [2].

Regarding temporal knowledge, the robust integration between learning and reasoning can be used in several ways, as in the modeling and verification of specifications. In Black Box model checking, a symbolic machine learning algorithm is used to acquire an abstract model of a system, and this model is then used to perform automated model checking of the system with no knowledge about its internal structure being provided [7]. Another interesting use of learning in SE consists in applying learning strategies to adapt a model in order to satisfy certain properties or constraints, as done in [1], where a temporal logic description of a model is translated into event calculus; the semantics of the temporal operators is represented through predicates in first-order logic, allowing the application of ILP techniques. In the case study below, we shall discuss the pros and cons of the ILP approach in comparison with our neural network-based approach.


3 A Framework for Learning and Reasoning

In order to implement a robust framework to describe, adapt and learn new specifications, we consider three different sources of information: an initial symbolic description, observed examples of the desired behaviour, and properties (or constraints) to be satisfied by the specified system. In the definition of the framework, these different sources of information can be used in the learning task (though none of them is mandatory). This flexibility allows the framework to fulfill the requirements for Black Box checking [7], where a set of examples illustrates how the system works without an abstract description of its general behaviour being available. For this purpose, the learning module must be able to build an initial representation of the system from the observed examples. The model is then subject to the presentation of properties that should be satisfied. The framework not only identifies whether the model satisfies the properties, but also adapts the model in order to meet a specification. In addition, the presentation of examples and properties to the learning engine can be effected simultaneously. The learning procedure we use can be applied to a model with or without background knowledge of the system.

Consider the diagram of Fig. 1. The core of the system is defined by the learning engine (1), which requires different resources to allow the integration of the different sources of information. The initial model description (2) is converted into a neural network through the use of a translation algorithm. After that, the network can be subject to learning, considering the information given by the observed examples (3) and the system's properties (4). In turn, refined knowledge is extracted into a symbolic representation (5), facilitating its analysis. In our system, knowledge is extracted in the form of a state transition diagram, which can be converted into a logic program, if needed.

[Figure: (1) Learning Engine (Neural Network) at the core; (2) Model Description (Logic Program) enters via Knowledge Translation; (3) Observed System (Examples) and (4) Specified Properties enter via Integrated Learning; (5) State Diagram (Refined Knowledge) is produced via Knowledge Extraction.]

Fig. 1. Diagram of our proposed framework

3.1 Representing the Model

In order to represent the models, we will use a temporal logic language, similarly to [9]. In this work, we adapt their syntax for the sake of clarity. First, each atom (propositional variable) used in the description of a model will be either an input or a state variable. Input variables are those whose value is set externally to the model, while state variables


have their values defined according to the model's behaviour. To represent temporal sequences, we use the ○ (next time) operator.

The logical representation can be used to generate the initial architecture of the neural network through an algorithm proposed in [6], which translates a propositional logic program P into a corresponding neural network N. This translation basically consists in using a hidden neuron to represent the conjunction of literals in the body of each rule. The input and output neurons represent atoms, where the output neurons compute the disjunction of the rules in which the related atom appears as head. Positive assignments to the variables are represented by values close to 1 and negative assignments by values close to −1. The propagation of information through time, to allow the computation of the temporal operators, is represented through recurrent links from the output neuron representing ○α to the input neuron representing α.
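The translation step can be sketched as follows, in the style of [6]: one hidden tanh neuron per rule, firing only when every body literal holds, and one output neuron per head computing the disjunction of its rules. The weight magnitude W = 4 and the bias scheme are our illustrative choices, not the exact bounds derived by the algorithm, and the monitor rules of Fig. 2 are used as the example program.

```python
import numpy as np

W = 4.0  # assumed weight magnitude; [6] derives precise bounds instead

def translate(rules, atoms):
    """rules: list of (head, body); body is a list of (atom, positive) pairs.
    The output for head "A" stands for the next-time value of A, which the
    recurrent links would feed back into input neuron A."""
    idx = {a: i for i, a in enumerate(atoms)}
    heads = list(dict.fromkeys(h for h, _ in rules))
    W_in = np.zeros((len(rules), len(atoms)))
    b_h = np.zeros(len(rules))
    W_out = np.zeros((len(heads), len(rules)))
    b_o = np.zeros(len(heads))
    for j, (head, body) in enumerate(rules):
        for atom, positive in body:
            W_in[j, idx[atom]] = W if positive else -W
        b_h[j] = -W * (len(body) - 1)       # fires iff all body literals hold
        W_out[heads.index(head), j] = W
    for i, head in enumerate(heads):
        mu = sum(1 for h, _ in rules if h == head)
        b_o[i] = W * (mu - 1)               # fires iff at least one rule fires
    return W_in, b_h, W_out, b_o, heads

def forward(net, assignment, atoms):
    W_in, b_h, W_out, b_o, heads = net
    x = np.array([1.0 if assignment.get(a, False) else -1.0 for a in atoms])
    h = np.tanh(W_in @ x + b_h)
    y = np.tanh(W_out @ h + b_o)
    return {head: bool(v > 0) for head, v in zip(heads, y)}

# The monitor program of Fig. 2, with head "A" read as (next) A:
rules = [("A", [("reqA", True)]), ("A", [("A", False), ("relA", True)]),
         ("B", [("reqB", True)]), ("B", [("B", False), ("relB", True)])]
atoms = ["A", "B", "reqA", "relA", "reqB", "relB"]
```

With this bias scheme, a hidden neuron's pre-activation is positive only when every body literal is satisfied, and an output neuron's is positive whenever at least one of its rules fires, so the network computes the program's immediate-consequence semantics.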

Let us illustrate the integration of the different representations of knowledge through a simple example. Consider the monitor of a resource that should allocate the resource between two processes A and B. Each process communicates with this monitor through a signal to request the resource (Req) and a signal to release it (Rel). These signals are considered as input variables to the monitor, which also has two state variables representing whether each process has the resource allocated. In Fig. 2, we show a logic program that illustrates how the inputs affect the states, and the network representing this program generated by the translation. Below, a rule of the form ○B ← ∼B, relB means that if relB is true and B is false at timepoint t, then B should be true at timepoint t + 1.

[Figure: the logic program ○A ← reqA; ○A ← ∼A, relA; ○B ← reqB; ○B ← ∼B, relB, and the network obtained by translation, with input neurons reqA, relA, reqB, relB, A, B, hidden neurons for the rules, and output neurons A, B, fed back to the inputs by recurrent links.]

Fig. 2. Representation of the monitor example

3.2 Learning and Evolving the Model

Our system allows learning from examples and from specified properties. Each observed example has values assigned to all the input variables, and to a subset of the state variables. This allows the use of learning from examples when only some of the state variables are observable, such as in the case where these represent the actual outputs of an observed system. We use standard backpropagation [8], as follows. When learning from examples, the input values given by an example are applied to the corresponding input neurons, together with the information about the current state. Information is propagated forward through the network to obtain the output values, and through the network's recurrent connections to obtain the network's next state. If the output information is not present (i.e. not observable), we consider a null error for this output. Otherwise, the error is calculated using the example and the weights are changed in the usual way.
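A minimal numpy sketch of this masked-error training step might look as follows; the network shape, learning rate and tanh activation are assumptions, and only the null-error handling for unobserved outputs reflects the text directly.

```python
import numpy as np

def train_step(W1, b1, W2, b2, x, target, observed, lr=0.1):
    """One backpropagation step. x holds the example's inputs plus the
    current state; `observed` is a boolean mask over the output neurons,
    and unobserved outputs contribute a null error."""
    h = np.tanh(W1 @ x + b1)
    y = np.tanh(W2 @ h + b2)
    err = np.where(observed, target - y, 0.0)   # null error if not observable
    # standard backpropagation through the tanh units
    delta_o = err * (1 - y ** 2)
    delta_h = (W2.T @ delta_o) * (1 - h ** 2)
    W2 += lr * np.outer(delta_o, h); b2 += lr * delta_o
    W1 += lr * np.outer(delta_h, x); b1 += lr * delta_h
    return y  # y also feeds back as the next state via the recurrent links
```

Because the error of an unobserved output is zeroed before the backward pass, the weights feeding that output neuron are left untouched by the update.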


When learning from properties, instead of standard examples (i.e. input/output patterns), the system is presented with an entire sequence of inputs I0, I1, ..., states and next states S0, S1, etc. Hence, the desired output for each timepoint in the backpropagation process must be defined to allow the training of the network. For this purpose, the framework keeps a record of active properties, as well as an index kφ for each active property φ. At every timepoint t, if the current state corresponds to the initial condition S0 of a property φ, then φ is inserted into the list of active properties and kφ is set to 0. When an input is applied, the framework verifies whether it corresponds to the current position Ik of each active property, eliminating from the list all the properties not satisfying this condition. When an active property ends, the state values given by its final state condition Sn are used to define the desired output values for the learning process. In Table 1, we can see an example of an execution regarding the learning of two properties φ1 and φ2, shown on the left. In the table, 1 represents true and −1 represents false. Considering that the execution starts with an empty list of active properties, we can see how the inputs and states affect this list, as well as the definition of the desired output according to property φ2. The desired output defined by the property is then integrated with the information from the examples (to avoid conflicts) and used to define the actual values applied to the network.

Table 1. Defining target output values (right) according to specified properties (left)

Properties:
           φ1              φ2
  S0:      ∼B              ∼A
  I0:      ReqA            ReqB
  I1:      ReqB ∨ ∼RelA    ReqA ∨ ∼RelB
  S2:      ∼B              ∼A

Execution trace (1 = true, −1 = false):
  Count |  A   B | ReqA RelA ReqB RelB | Active Prop. | Desired Output
    1   | −1  −1 |   1   −1   −1   −1  | {φ1(1)}      | ∅
    2   |  1  −1 |  −1    1   −1   −1  | ∅            | ∅
    3   | −1  −1 |  −1   −1    1   −1  | {φ2(1)}      | ∅
    4   | −1   1 |  −1   −1   −1   −1  | {φ2(2)}      | {∼A}
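The active-property bookkeeping can be sketched as below, where (as a simplifying assumption) each property is a triple of an initial state condition S0, a list of input conditions, and a final state condition Sn, all given as dictionaries of required values; the paper's conditions may be richer (e.g. disjunctive).

```python
def matches(condition, values):
    """True when every variable required by the condition has that value."""
    return all(values.get(var) == val for var, val in condition.items())

class PropertyTracker:
    def __init__(self, properties):
        self.properties = properties      # list of (S0, inputs, Sn) triples
        self.active = []                  # list of (property index, k)

    def observe_state(self, state):
        # activate any property whose initial condition S0 matches the state
        for i, (s0, _, _) in enumerate(self.properties):
            if matches(s0, state):
                self.active.append((i, 0))

    def observe_input(self, inp):
        desired = {}
        still_active = []
        for i, k in self.active:
            _, inputs, sn = self.properties[i]
            if not matches(inputs[k], inp):
                continue                  # input deviates: drop the property
            if k + 1 == len(inputs):
                desired.update(sn)        # property ends: Sn gives the targets
            else:
                still_active.append((i, k + 1))
        self.active = still_active
        return desired                    # desired output values, if any
```

In use, `observe_state` and `observe_input` are called alternately along the training sequence, and any non-empty dictionary they return defines desired outputs for the backpropagation step at that timepoint.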

3.3 Extracting Knowledge about the Model

We use a pedagogical approach [2] to obtain a symbolic representation of the trained network. Input and output values of the network are sampled and used to infer a general behaviour. The samples used in this procedure can be the same as those used for learning, but different sets can also be considered to allow a better generalization. In the case of learning through properties, the randomness of the input selection allows different sets of data to be applied at each training epoch; therefore, it is not necessary to create a different procedure to generate the examples for extraction. In this process, for each input applied to the network, a transition τ is stored, containing information about the current state S0 of τ, the applied input I of τ, and the obtained next state Sf of τ. All the occurrences of transitions τ with the same S0, I and Sf are grouped into a unique transition τ′. An extra variable count(τ′) is also used to give a more quantitative measure when analyzing the extracted knowledge: count(τ′) is the number of transitions grouped into τ′.

These transitions can be shown in the form of a diagram, but also represented back as a revised temporal logic program. In order to do so, we filter the group of transitions to be used according to the number of times they appear. Each remaining transition is then rewritten as a set of rules, one rule for each state variable. The body (right-hand side) of all the rules representing τ′ contains all the input and state variables, in positive or negative form according to the assignments S0 and I of τ′. The head (left-hand side) of each rule will be given by one of the variables ○α, if Sf of τ′ assigns α to true, or ○¬α otherwise. To allow a better understanding, this set of rules can also be simplified through symbolic manipulation of the logic program.
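The extraction procedure can be sketched as follows, with `network_step` standing in for the trained network's forward pass, treated as a black box mapping (state, input) to a next state; the sample size and filtering threshold are illustrative parameters.

```python
import random
from collections import Counter

def extract_transitions(network_step, state_vars, input_vars,
                        n_samples=1000, seed=0):
    """Sample random inputs, follow the network's transitions, and group
    identical (S0, I, Sf) triples with a count for each group."""
    rng = random.Random(seed)
    counts = Counter()
    state = tuple(rng.choice([False, True]) for _ in state_vars)
    for _ in range(n_samples):
        inp = tuple(rng.choice([False, True]) for _ in input_vars)
        nxt = network_step(state, inp)
        counts[(state, inp, nxt)] += 1    # count(τ') for the grouped transition
        state = nxt
    return counts

def transitions_to_rules(counts, state_vars, input_vars, min_count=1):
    """Rewrite each frequent-enough transition as one rule per state variable;
    the body mentions every input and state variable, as in the text."""
    rules = []
    for (s0, inp, sf), c in counts.items():
        if c < min_count:
            continue                      # filter rarely observed transitions
        body = list(zip(state_vars, s0)) + list(zip(input_vars, inp))
        for var, val in zip(state_vars, sf):
            head = f"next_{var}" if val else f"next_not_{var}"
            rules.append((head, body))
    return rules
```

The counts attached to the grouped transitions are exactly the quantitative measure mentioned above, and raising `min_count` implements the filtering step before rules are emitted.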

4 Validation and Experiments: Case Study

A pump system example is used in [1] as a case study to evaluate symbolic strategies to adapt requirements according to properties. Below, we use a version of this problem to verify representation and learning of temporal knowledge from different information sources using neural networks. The pump system monitors and controls the levels of water in a mine, to avoid the risk of overflow, through the use of three state variables: CMet, indicating a critical level of methane; HiW, indicating a high level of water; and POn, indicating that the pump is turned on. In order to turn these indicators on and off, six different signals are considered, as shown below in the rules representing the system:

– ○CMet ← CMet, ∼sCMetOff
– ○CMet ← sCMetOn
– ○HiW ← HiW, ∼sLoW
– ○HiW ← sHiW
– ○POn ← POn, ∼tPOff
– ○POn ← tPOn
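Read operationally, the six rules define the following next-state function; this is a plain transcription for illustration, not the neural encoding itself.

```python
def pump_next(state, signals):
    """Compute the pump system's next state from the six rules above."""
    return {
        # CMet stays up unless sCMetOff; sCMetOn raises it
        "CMet": (state["CMet"] and not signals["sCMetOff"]) or signals["sCMetOn"],
        # HiW stays up unless sLoW; sHiW raises it
        "HiW": (state["HiW"] and not signals["sLoW"]) or signals["sHiW"],
        # POn stays on unless tPOff; tPOn turns it on
        "POn": (state["POn"] and not signals["tPOff"]) or signals["tPOn"],
    }
```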

Our initial experiments consider three different cases. First, in experiment a1, we translated the knowledge described above into a network with 9 neurons in the input layer (representing input and state variables), 6 hidden and 3 output neurons. Next, in experiment a2, we considered learning such relations through the presentation of a sequence of 1000 examples, using a network without background knowledge but with the same distribution of neurons as in a1. Finally, in a3, we expressed the relations as properties, and ran the framework again with a similar network. In a1 and a2, we considered 500 epochs of 1000 presentations, and in a3 we used 50 epochs of 10,000 presentations. In the definition of the examples, we considered that only one input is positive at each timepoint, as in [1]. On the other hand, the automatic generation of inputs for learning properties in a3 does not have this restriction, requiring a larger sample to get an accurate representation of the possible input configurations.

In Fig. 3, we show a state diagram for the networks before the learning process, the chart depicting the evolution of the root mean square error (RMSE) on the output during learning, and a state diagram representing the learned knowledge after training. The state variables CMet, HiW and POn are represented by C, H and P, respectively, in the diagrams. In a1, the initial and final diagrams are the same, since the knowledge is already built into the network. When learning through examples only (a2), the diagrams clearly show that new information was learned about the transitions between states.


[Figure: for each experiment (a1, a2, a3): a transition diagram over the states O, C, H, P, CH, CP, HP, CHP before learning; an RMSE-vs-epochs chart (500 epochs for a1 and a2, 50 epochs for a3); and the corresponding transition diagram after learning.]

Fig. 3. Transition diagram before learning, error evolution and diagram after learning

However, in the learning of properties in a3, the use of many input configurations led to a somewhat confusing diagram. To give a better understanding of the extracted knowledge, we converted this representation into a logic program, which also illustrates the flexibility of our system when presenting extracted information. Below, we show a subset of the learned program, for rules regarding the next state of CMet.

– ○CMet ← sCMetOn, ∼sCMetOff
– ○CMet ← ∼CMet, sCMetOn
– ○CMet ← CMet, ∼sCMetOff
– ○¬CMet ← ∼sCMetOn, sCMetOff
– ○¬CMet ← ∼CMet, ∼sCMetOn
– ○¬CMet ← CMet, sCMetOff

4.1 Integrating Knowledge Sources

Next, we extend the case study to illustrate the framework's performance in applications involving incremental learning and interaction between different knowledge sources. We use a state variable trP such that a positive assignment to trP at a timepoint t implies that POn will be true at t + 1, independently of the other variables. This is represented through a property φ. Again, three different cases were considered to evaluate the learning options: in experiment b1, the network was generated by the same translation as in a1, extended with a single extra hidden neuron and extra input and output neurons to represent trP. In experiment b2, the same extension was applied to the network generated through learning of examples only, as in a2. In the b1 and b2 cases, the networks were subject to learning of properties representing the trigger condition. In b3, we used a network without background knowledge to learn from the set of examples and the property φ simultaneously. This network had ten input, seven hidden and four output neurons. In Fig. 4, we depict the evolution of the RMSE in all three cases.


[Figure: RMSE vs. epochs (0 to 50) for experiments b1, b2 and b3.]

Fig. 4. Error evolution when learning the property regarding trP

In these experiments, the insertion of an extra variable doubled the number of states in the diagram, making it harder to understand. Therefore, the extracted logic programs provide a clearer way to analyze the learned knowledge. While the error charts of b1 and b2 depict a good learning performance, the extracted logic programs do not correspond to the expected behaviour: in b1, POn was always true and, in b2, a large number of specific rules were generated, without an apparent relation to the knowledge given to the network. Our conjecture as to why this happened is that the original knowledge of the network fades away during the learning of the new property, because only the information about φ is given to the network during training. Hence, we decided to improve the framework with the possibility of reinforcing the current knowledge of the network during training, by setting the desired output value applied to a neuron α to 1 (resp. −1) when the obtained output is above a positive threshold u (resp. below −u), and no information is given about α by the properties or the examples. With this modification, we ran b1 and b2 again, obtaining similar error charts but extracted knowledge much closer to the expected behaviour, as shown in Table 2.
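The modified target definition can be sketched as below; the precedence between property-defined and example-defined targets is our assumption about how their integration is resolved, and the threshold u is a free parameter of the method.

```python
def desired_output(obtained, property_target=None, example_target=None, u=0.5):
    """Define the training target for one output neuron, reinforcing the
    network's own confident activations when no other information is given."""
    if property_target is not None:
        return property_target   # a target defined by an active property
    if example_target is not None:
        return example_target    # an observed value from an example
    if obtained > u:
        return 1.0               # reinforce a confident positive output
    if obtained < -u:
        return -1.0              # reinforce a confident negative output
    return None                  # otherwise: null error, no weight update
```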

Table 2. Temporal knowledge learned in b1, b2 and b3 (redundant rules omitted in b3)

  Experiment b1                 Experiment b2                 Experiment b3
  ○POn ← tPOn                   ○POn ← tPOn, ∼tPOff           ○POn ← tPOn
  ○POn ← trP                    ○POn ← trP                    ○POn ← trP
  ○POn ← POn, ∼tPOff            ○POn ← POn, ∼tPOff            ○POn ← POn, ∼tPOff
  ○¬POn ← ∼trP, ∼tPOn, tPOff    ○POn ← POn, tPOn              ...
  ○¬POn ← ∼POn, ∼trP, ∼tPOn     ○¬POn ← ∼trP, ∼tPOn, tPOff    ○¬POn ← ∼trP, tPOff
                                ○¬POn ← ∼POn, ∼trP, tPOff     ○¬POn ← ∼POn, ∼trP, ∼tPOn
                                ○¬POn ← ∼POn, ∼trP, ∼tPOn

4.2 Case Study Discussion

To analyze the importance of the results shown in the last section, we compare our framework to the approach proposed in [1], where the same testbed was used to evaluate a purely symbolic learning technique for refining a temporal specification. In terms of learning performance, we were able to verify that both approaches were successful in the considered applications. Our case study gave good evidence that our


approach is capable of learning, and that the advantages of the use of neural networks, such as noise tolerance, can be verified for the different learning scenarios.

However, a direct comparison between the results of the techniques is difficult due to differences in their structure. The main difference between the approaches regards the actual goal of the learning task: in symbolic techniques such as [1], learning is applied to the task of refinement, i.e., generating a set of hypotheses capable of complementing the original incomplete knowledge according to the properties to be learned. In our work, the incomplete knowledge is represented in a numeric processor (a neural network), which defines an actual (deterministic) transition function even for those cases not specified in the symbolic description. In that way, the learning task performs a revision of this knowledge, instead of an incremental refinement.

This gives our framework a new and different range of applications. Our system is capable of dealing with incorrect symbolic knowledge, instead of just extending an existing incomplete specification. The experiments above showed exactly that: the networks were capable of changing the underlying transition diagram, using the examples or properties to learn not only how to complement the original incomplete knowledge, but also to correct errors in that description.

It is also important to consider the language used for knowledge representation when analyzing our framework in comparison with purely symbolic approaches. In our first example, we can see that learning from properties resulted in a different diagram than the one obtained from learning from examples. This happened because of the limitations of our propositional logic programming language, which does not provide any means to represent certain relations between variables at the same time point. Another limitation of our representation language is its deterministic nature, which might need to be tackled depending on the focus of the application.

This approach is still clear and powerful enough for the representation of a broad set of cases. Representation systems based on predicate logics might have more representational power, but often run into issues such as decidability and computational complexity. When comparing with the event-calculus-based system in [1], one can notice that the different representation structures reflect the very purpose of the application. While the event calculus provides powerful constructs for abduction and inductive learning, our logic programming system presents a clear definition of input and state variables, allowing a better integration with the core neural network used for learning. The simplicity of our language, together with the capacity of neural networks to perform supervised learning, also caters for the possibility of learning from observed examples, which is an important aspect towards the implementation of Black Box Checking [12].

The numeric representation of the knowledge also allows some interesting possibilities. The association of numeric weights with the extracted transitions allows a probabilistic approach to overcome the deterministic limitation of the representation. Also, the incremental correction of weights in the learning process can be parameterized to give priority to the background knowledge or to the information to be learned, according to the configuration of the problem. In the last experiment, we have shown a simple example of how this can be done, by changing the desired values used in backpropagation according to the obtained output values.
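The probabilistic reading suggested here can be sketched by normalizing the count attached to each grouped transition over all transitions that share the same current state and input, giving an estimate of P(next state | state, input); the (state, input, next state) triple-with-count format is an assumption about how the extracted transitions are stored.

```python
from collections import defaultdict

def transition_probabilities(counts):
    """counts: {(state, input, next_state): count}. Normalize each count
    over all transitions with the same (state, input) pair."""
    totals = defaultdict(int)
    for (s0, inp, _), c in counts.items():
        totals[(s0, inp)] += c
    return {key: c / totals[(key[0], key[1])] for key, c in counts.items()}
```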


5 Conclusions

This paper outlined a framework, and presented a case study, for representing, adapting and learning temporal knowledge in neural networks. The framework provides integration of different knowledge sources, allowing observed examples and desired properties to be used in the evolution of an initial temporal model, or in learning a completely new model. The case study has shown that the framework can achieve the desired tasks with good performance. The use of a neural network caters for noise tolerance in the learning process, which is useful when treating different sources of information.

We believe the methodology proposed in this work may serve as a foundation for the development of richer models for the analysis and evolution of computing systems. Extensions to the formalisms used here to represent models and properties can enhance the applicability of the framework. As further work, we plan to integrate the framework with existing formal verification systems, such as the NuSMV model checker, and apply it to larger-scale testbeds on both reasoning and learning tasks.

Acknowledgments. Research supported by the Brazilian Research Council CNPq.

References

1. Alrajeh, D., Ray, O., Russo, A., Uchitel, S.: Using abduction and induction for operational requirements elaboration. Journal of Applied Logic 7(3), 275–288 (2009)

2. Andrews, R., Diederich, J., Tickle, A.B.: A survey and critique of techniques for extracting rules from neural networks. Knowledge-Based Systems 8(6), 373–389 (1995)

3. Clarke, E.M., Emerson, E.A., Sifakis, J.: Model checking: algorithmic verification and debugging. Commun. ACM 52(11), 74–84 (2009)

4. Feigenbaum, E.A.: Some challenges and grand challenges for computational intelligence. Journal of the ACM 50(1), 32–40 (2003)

5. Fisher, M., Gabbay, D., Vila, L. (eds.): Handbook of Temporal Reasoning in Artificial Intelligence. Elsevier, Amsterdam (2005)

6. d'Avila Garcez, A.S., Zaverucha, G.: The connectionist inductive learning and logic programming system. Applied Intelligence 11(1), 59–77 (1999)

7. Groce, A., Peled, D., Yannakakis, M.: Adaptive model checking. In: Katoen, J.-P., Stevens, P. (eds.) TACAS 2002. LNCS, vol. 2280, pp. 357–370. Springer, Heidelberg (2002)

8. Haykin, S.: Neural Networks: A Comprehensive Foundation, 2nd edn. Prentice Hall, Englewood Cliffs (1999)

9. Lamb, L.C., Borges, R.V., d'Avila Garcez, A.S.: A connectionist cognitive model for temporal synchronization and learning. In: AAAI 2007, pp. 827–832 (2007)

10. Mitchell, T.M.: Machine Learning. McGraw-Hill, New York (1997)

11. Muggleton, S., De Raedt, L.: Inductive logic programming: Theory and methods. J. Logic Programming 19–20, 629–679 (1994)

12. Peled, D., Vardi, M.Y., Yannakakis, M.: Black box checking. J. Autom. Lang. Comb. 7(2), 225–246 (2001)