Science and Information Conference 2014 August 27-29, 2014 | London, UK


Artificial Intelligence Theory (Basic concepts)

Vitaliy Yashchenko

Artificial Intelligence, Institute of Mathematical Machines and System Problems NANU (IMMSP),
Kiev, Ukraine
[email protected]

Abstract—Taking the bionic approach as a basis, the article discusses the main concepts of the theory of artificial intelligence as a field of knowledge that studies the principles of creation and functioning of intelligent systems based on multi-layer neural-like growing networks. The general theory of artificial intelligence includes the study of neural-like elements and multi-layer neural-like growing networks, temporary and long-term memory, the study of the functional organization of the "brain" of artificial intelligent systems, of the sensory system, modulatory system, motor system, conditioned and unconditioned reflexes, the reflex arc (ring), motivation, purposeful behavior, of "thinking", "consciousness", and the "subconscious and artificial personality developed as a result of training and education".

Keywords—bionic approach; multi-layer neural-like growing networks; sensory system; modulatory system; motor system; conditioned and unconditioned reflex; reflex arc

I. INTRODUCTION

This work briefly discusses the basic concepts of the theory of artificial intelligence based on multi-layer receptor-effector neural-like growing networks.

"Analysis of the problems in the field of artificial intelligence shows that at present time, on the one hand, intensive division of its subfields continues, while on the other hand, one may perceive certain integration of research in an endeavor to build a general theory. Integration of research is forced by the necessity to consolidate the whole research system in the field of artificial intelligence into a single unit, based on a universal concept or idea, aspiring to its functional prototype: intelligent and functional human being" [1]. In artificial intelligence theory such universal concept is represented by multi-layer receptor-effector neural-like growing networks, which aspire to their functional prototype - biological neural networks.

II. BASIC CONCEPTS OF ARTIFICIAL INTELLIGENCE

A. Artificial intelligence

Artificial intelligence is a field of knowledge that studies the structure and functioning of intelligent systems based on multi-layer receptor-effector neural-like growing networks. Artificial intelligence theory includes the study of neural-like growing elements and multi-layer neural-like growing networks, temporary and long-term memory, the study of the functional organization of the "brain" of artificial intelligent systems, of the sensory system, modulatory system, motor system, conditioned and unconditioned reflexes, the reflex arc, motivation, purposeful behavior, of "reasoning", "consciousness", and the "subconscious and artificial personality developed as a result of learning and training".

Axiom 1. Artificial intelligence theory is based on an analogy with the human nervous system.

The core of human intelligence is the brain, consisting of multiple neurons interconnected by synapses. Interacting with each other through these connections, neurons create complex electric impulses, which control the functioning of the whole organism and allow recognition, learning, reasoning, structuring of information through its analysis, classification, location of connections, patterns and distinctions in it, associations with similar information pieces etc. [2].

The functional organization of the brain. In the works of the physiologists P. K. Anokhin, A. R. Luria, E. N. Sokolov [3, 4] and others, the functional organization of the brain is described in terms of different systems and subsystems. The classical interpretation of the integrative activity of the brain can be represented by the interaction of three basic functional units:

1) information input and processing unit - sensory systems (analyzers);

2) modulating, nervous system activating unit - modulatory systems (limbic-reticular systems) of the brain;

3) programming, activating and behavioral acts controlling unit - motor systems (motion analyzer).

Brain sensory systems (analyzers). The sensory (afferent) system is activated when a certain event in the environment affects a receptor. Inside each receptor, the physical factor affecting it (light, sound, heat, pressure) is converted into an action potential, a nervous impulse. The analyzer is a hierarchically structured multi-layer system. The receptor surface serves as the base of the analyzer, and the cortical projection areas as its apex. Each level is a set of cells whose axons extend to the next level. Coordination between sequential layers of the analyzers is organized on the divergence/convergence principle.

Brain modulatory systems are an instrument for regulating the level of activity; they also perform selective modulation and emphasize the urgency of a certain function. The initial source of activation is the intrinsic activity and the needs of the organism. A second source of activation is related to environmental irritants.


Brain motor (motion) systems. The fusion of excitations of different intensity with biologically significant signals and motivational influences is characteristic of the motor cortex areas. It is distinctive of them to completely transform the afferent influences into a qualitatively new form of activity, directed toward the fastest output of efferent excitations to the periphery, i.e. to the instruments of realization of the final stage of behavior organization.

The core of artificial intelligence is the system "brain", representing an active, associative, homogeneous structure - multi-layer receptor-effector neural-like growing network, composed of a host of neural-like growing elements, interconnected by synapses. Neural-like elements perceive, analyze, synthesize and save information, allowing the system to learn, train, reason, systematize and classify information, to locate connections, patterns and distinctions in it, and to produce signals for the control of external facilities.

B. The definitions of Artificial Intelligence

Axiom 2. The basic functional unit of the "nervous system" of intelligent systems is the artificial neuron (neural-like unit).

Definition 1. The artificial neuron is a simplified model of the biological neuron: a device (analogous to the cell body) with many excitatory and inhibitory inputs, a modulating input, and one output. The output (an analogue of the axon) consists of a set of conductors and a set of endings. Information (codes, bundles of impulses) is fed to the inputs. The device processes the information according to the concepts of the neural-like growing network, generates codes (bundles of impulses), and simultaneously or periodically transmits them down the axon to the inputs of other neurons. The neuron inputs (synapse analogues) are receptors that react to or ignore a certain piece of code fed to them, thereby increasing or decreasing the level of excitation of the neural-like element and the intensity of its feedback. The range and frequency of the signal are subject to adjustment.
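The following Python sketch illustrates one possible reading of Definition 1. It is an illustration only: the class name, the way weights and the modulating input are handled, and the numeric values are assumptions, not the author's specification.

```python
# Illustrative sketch of a neural-like element (Definition 1).
# Names and numeric choices are assumptions, not the author's specification.

class NeuralLikeElement:
    def __init__(self, weights, threshold):
        # weights: per-input weights; positive = excitatory, negative = inhibitory
        self.weights = list(weights)
        self.threshold = threshold          # excitation threshold P
        self.excitation = 0.0               # current level of excitation

    def modulate(self, delta):
        """Modulating input: raises or lowers the excitation threshold."""
        self.threshold += delta

    def receive(self, code):
        """Feed a piece of code (one value per input) to the receptors.
        An input that ignores its component simply contributes 0."""
        self.excitation = sum(w * x for w, x in zip(self.weights, code))

    def output(self):
        """Fire a code down the 'axon' if excitation reaches the threshold."""
        return 1 if self.excitation >= self.threshold else 0


# Usage example (hypothetical numbers):
n = NeuralLikeElement(weights=[0.5, 0.5, -1.0], threshold=0.6)
n.receive([1, 1, 0])        # two excitatory inputs active
print(n.output())           # -> 1 (excitation 1.0 >= threshold 0.6)
n.modulate(+0.5)            # modulating input raises the threshold
print(n.output())           # -> 0
```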

Axiom 3. All data-free neural-like elements are novel neural-like units.

Axiom 4. All neural-like elements, carrying (holding) a certain piece of information are equivalent neural-like elements.

Axiom 5. When no information arrives at the receptors of the novel neural-like elements, they remain in a mode of light, arbitrary background excitation.

Axiom 6. Background excitation is a fluctuating arbitrary excitation value of the neural-like element.

Definition 2. Neural-like elements of emotion are the elements, whose excitation threshold increases or decreases depending on the condition of the inner subsystems of the system, or the result of the function being executed. Neural-like elements of emotion have connections with action controlling motor neurons.

Definition 3. Temporary memory is the time during which a novel neural-like element analyzes and maintains information. On receiving information (unknown to the system) at the receptors of the sensory area, the nearest novel neural-like elements (whose level of excitation is not high, but higher than that of the other nearby novel neural-like elements) and the sensory area receptors establish connections; the connections are assigned weights, while the neural-like elements are assigned a certain excitation threshold. At repeated replication of this information the excitation threshold increases. On reaching the maximum excitation, the neural-like element becomes an equivalent neural-like element and is transferred into long-term memory.
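A minimal sketch of how Definition 3 (and the long-term memory of Definition 4 below) could be read in code: repeated presentation of the same pattern raises the excitation threshold of a novel element until it is promoted to long-term memory. The promotion criterion and the counter are assumptions made for illustration.

```python
# Sketch of temporary -> long-term memory promotion (Definitions 3 and 4).
# The maximum-excitation criterion used here is an assumed placeholder.

MAX_EXCITATION = 5  # assumed promotion threshold

class MemoryElement:
    def __init__(self, pattern):
        self.pattern = pattern          # information held by the element
        self.excitation_threshold = 1   # grows with each repetition

    def replicate(self):
        """Repeated presentation of the same information."""
        self.excitation_threshold += 1

    @property
    def is_equivalent(self):
        """Equivalent elements hold consolidated information (Axiom 4)."""
        return self.excitation_threshold >= MAX_EXCITATION


temporary_memory, long_term_memory = [MemoryElement("red square")], []

for _ in range(5):                       # the pattern is seen repeatedly
    for elem in temporary_memory[:]:
        elem.replicate()
        if elem.is_equivalent:           # promoted to long-term memory
            temporary_memory.remove(elem)
            long_term_memory.append(elem)

print(len(long_term_memory))             # -> 1
```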

Definition 4. Long-term memory contains all of the equivalent neural-like elements.

Definition 5. A neural network is a parallel connected network of simple adaptive units, which interacts with the objects in the environment similarly to the biological nervous system.

Definition 6. A neural-like growing network is a set of interconnected neural-like units, set up for the reception, analysis and processing of information during interaction with the objects of the real world; moreover, in the process of reception and processing of information the network changes its own structure.

C. Neural-like growing networks

Neural-like growing networks (n-GN) are a new type of neural-like networks, which includes the following classes: multi-connected (receptor) neural-like growing networks (mn-GN); multi-connected (receptor) multi-layer neural-like growing networks (mmn-GN); receptor-effector neural-like growing networks (ren-GN); multi-layer receptor-effector neural-like growing networks (mren-GN).

N-GN are described as a directed graph, where the neural-like elements represent the nodes of the graph, and the connections between the elements represent its edges.

Thus, the network is a parallel dynamic system with the topology of a directed graph, which performs information processing by changing its own state and structure in response to environmental stimuli.

The theory of neural-like growing networks operates with the basic concepts of structure and architecture, which express the principles of connection and interaction between the elements of the network:

the topological (dimensional) structure is a directed graph, representing the connections between the elements of the system;

the logical structure sets the rules and principles of arrangement of connections and network elements, as well as the logic of its operation;

the physical structure is a system of connections of the physical elements of the network (in the event of the mechanical implementation of a neural-like growing network).

The system's architecture is defined by the set of connections of the physical elements of the network and the principles of arrangement of connections and elements, as well as the logic of its operation.

The theory of neural-like growing networks uses some principles of graph theory.


Fig. 1. Topological structure of mn-GN


Directed or oriented graphs are graphs in which the direction of the edges matters. An arc of a graph can be regarded as an ordered pair of vertices or as a directed edge connecting the vertices.

The vertices of a graph $S = (A, D)$ are called adjacent if they are connected by an arc. Adjacent arcs are defined as arcs $d_{im}$, $d_{jm}$ which have a common vertex $a_m$. An arc $d_{mi}$ is called outgoing when it is directed away from the node $a_m$, i.e. if the node $a_m$ is the tail, but not the head, of the arc $d_{mi}$. An incoming arc is an arc $d_{im}$ directed toward the node $a_m$, i.e. if the node $a_m$ is the head of the arc $d_{im}$ and not its tail.

The topological structure of n-GN is represented by an oriented connected graph (fig. 1). Using graphs, the mn-GN theory studies the processes of information flow and storage in the network.

Neural-like growing networks are formally defined in the following way:

$S = (R, A, D, M, P, N)$, where $R = \{r_i\}$, $i = \overline{1, n}$, is a finite set of receptors; $A = \{a_i\}$, $i = \overline{1, k}$, is a finite set of neural-like elements; $D = \{d_i\}$, $i = \overline{1, e}$, is a finite set of arcs connecting receptors with the neural-like elements and the neural-like elements with each other; $N$ is the connectivity variable of the receptor area; $P = \{P_i\}$, $i = \overline{1, k}$, where $P_i$ is the excitation threshold of the node $a_i$, $P_i = f(m_i) \ge P_0$ ($P_0$ is the minimum allowed excitation threshold), provided that the set of arcs $D$ associated with the node $a_i$ corresponds to the set of weights $M = \{m_i\}$, $i = \overline{1, w}$, and $m_i$ can take both positive and negative values.
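As a reading aid, the tuple above can be mirrored in a small data structure. The following Python sketch is an illustration only: the class names and the choice of the threshold function $f$ as the sum of incoming weights are assumptions.

```python
# Sketch of the n-GN tuple S = (R, A, D, M, P, N).
# Class names and the threshold function f are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Arc:
    source: str        # receptor or neural-like element the arc leaves
    target: str        # neural-like element the arc enters
    weight: float      # m_i, may be positive or negative

@dataclass
class NGN:
    receptors: set[str]                                          # R
    elements: set[str]                                           # A
    arcs: list[Arc] = field(default_factory=list)                # D with weights M
    thresholds: dict[str, float] = field(default_factory=dict)   # P
    connectivity: int = 1                                        # N
    P0: float = 0.0                                              # minimum allowed threshold

    def set_threshold(self, node: str):
        # P_i = f(m_i) >= P_0; here f is assumed to be the sum of incoming weights
        incoming = [a.weight for a in self.arcs if a.target == node]
        self.thresholds[node] = max(sum(incoming), self.P0)


net = NGN(receptors={"r1", "r2"}, elements={"a1"})
net.arcs += [Arc("r1", "a1", 0.5), Arc("r2", "a1", 0.7)]
net.set_threshold("a1")
print(net.thresholds["a1"])   # -> 1.2
```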

Definition 7. A multi-connected (receptor) neural-like growing network is an acyclic graph in which the minimal number of incoming arcs to a node of the graph $a_i$ equals the variable $n$, and each arc $d_i$ associated with the node $a_i$ corresponds to a certain weight $m_i$. Each node $a_i$ is assigned a certain excitation threshold. The nodes which don't have incoming arcs are called receptors; the rest are called neural-like elements.

Rule 1. If, on receiving information, excitation arises in a subset of nodes $F$ out of the set of nodes having a direct connection with the node $a_i$, and $|F| \ge h$, then the connections of the node $a_i$ with the nodes of the subset $F$ are terminated, and a new node $a_{i+1}$ is joined to the network, whose inputs are connected to the inputs of all the nodes of the subset $F$, and the output of the node $a_{i+1}$ is connected to one of the outputs of the node $a_i$, so that the incoming connections of the node $a_{i+1}$ are assigned weights $m_g$ corresponding to the weights of the terminated connections of the node $a_i$, and the node $a_{i+1}$ is assigned an excitation threshold $P_i$ equal to the sum of the weights of the connections incoming to the node $a_{i+1}$, or an excitation threshold $P_i$ equal to $f(m_i)$ (a function of the weights of the connections incoming to the node $a_{i+1}$).

The outgoing edge of the node $a_{i+1}$ is assigned the weight $m_{i+1}$. Receptor outgoing edges are assigned the weight $m_{ri}$.

Rule 2. If, on receiving information, excitation arises in a subset of nodes $G$ and $|G| \ge h$, a new associative node $a_{i+1}$ is joined to the network, which is connected by incoming arcs to all the nodes of the subset $G$. Each of the incoming arcs is assigned a weight $m_i$, and the new node $a_{i+1}$ is assigned an excitation threshold $P_{a_{i+1}}$ equal to the sum of the weights $m_i$ of the incoming arcs, or an excitation threshold $P_i$ equal to $f(m_i)$ (a function of the weights of the connections incoming to the node $a_{i+1}$). The new node $a_{i+1}$ stays in the excited state.
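To make Rule 2 concrete, the sketch below adds an associative node over a currently excited subset of nodes. The dictionary-based representation, the default weight, and the choice of the sum of incoming weights as the threshold function are assumptions for illustration.

```python
# Sketch of Rule 2: growing an associative node over an excited subset G.
# The data layout and the default weight are illustrative assumptions.

def apply_rule_2(arcs, thresholds, excited, h=2, weight=1.0):
    """arcs: dict new_node -> {source: weight}; thresholds: dict node -> P.
    If at least h nodes are excited, join a new associative node connected
    by incoming arcs to every excited node."""
    if len(excited) < h:
        return None
    new_node = f"a{len(thresholds) + 1}"
    arcs[new_node] = {src: weight for src in excited}       # incoming arcs with weights m_i
    thresholds[new_node] = sum(arcs[new_node].values())     # P = sum of incoming weights
    return new_node                                          # the new node stays excited


arcs, thresholds = {}, {"a1": 1.0, "a2": 1.0, "a3": 1.0}
node = apply_rule_2(arcs, thresholds, excited={"a1", "a3"})
print(node, arcs[node], thresholds[node])   # e.g. a4 {'a1': 1.0, 'a3': 1.0} 2.0
```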

Fig. 2. Topological structure of mren-GN


Definition 8. An informational dimension is the area of the neural-like growing network which consists of the set of nodes and arcs joined into a single informational structure of one of the reflections.

Definition 9. A set of interconnected acyclic graphs, representing neural-like growing networks in different informational dimensions, is called a multi-connected multi-layer neural-like growing network (mmn-GN).

Rule 3. If, on receiving information offered in different informational dimensions, excitation arises in a subset $Q$ of endpoints, then these endpoints are connected with each other by arcs.

Receptor-effector neural-like growing networks are formally defined in the following way:

$S = (R, A_r, D_r, P_r, M_r, N_r, E, A_e, D_e, P_e, M_e, N_e)$, where $R = \{r_i\}$, $i = \overline{1, n}$, is a finite set of receptors; $A_r = \{a_{ri}\}$, $i = \overline{1, k}$, is a finite set of neural-like elements of the receptor area; $D_r = \{d_{ri}\}$, $i = \overline{1, e}$, is a finite set of arcs of the receptor area; $E = \{e_i\}$, $i = \overline{1, e}$, is a finite set of effectors; $A_e = \{a_{ei}\}$, $i = \overline{1, k}$, is a finite set of neural-like elements of the effector area; $D_e = \{d_{ei}\}$, $i = \overline{1, e}$, is a finite set of arcs of the effector area; $P_r = \{P_{ri}\}$, $P_e = \{P_{ei}\}$, $i = \overline{1, k}$, where $P_i$ is the excitation threshold of the node $a_{ri}$, $a_{ei}$, $P_i = f(m_i)$, provided that the set of arcs $D_r$, $D_e$ associated with the node $a_{ri}$, $a_{ei}$ corresponds to the set of weights $M_r = \{m_{ri}\}$, $M_e = \{m_{ei}\}$, $i = \overline{1, w}$, and $m_i$ can take both positive and negative values; $N_r$, $N_e$ are the connectivity variables of the receptor and effector areas.

Definition 10. A receptor-effector neural-like growing network is a symmetric acyclic graph in which the minimum number of incoming arcs to the newly formed nodes of the graph equals the variable $n$; each arc associated with the nodes of the receptor area is accorded a certain weight, and each node a certain excitation threshold; each arc associated with the nodes of the effector area is accorded a certain weight, and each node an excitation threshold. The nodes which don't have incoming arcs are called receptors, the nodes without outgoing arcs are called effectors, and the rest of the nodes are neural-like elements [5-8].

Definition 11. A set of interconnected symmetric acyclic graphs, representing the state of an object and the actions produced by it in different informational dimensions, is called a multi-layer receptor-effector neural-like growing network (mren-GN).

The topological structure of the multi-layer receptor-effector neural-like growing network (mren-GN) is represented by a graph (fig. 2).

In formal terms, mren-GN are defined in the following way:

$S = (R, A_r, D_r, P_r, M_r, N_r, E, A_e, D_e, P_e, M_e, N_e)$;
$R = \{R_v, R_s, R_t\}$; $A_r = \{A_v, A_s, A_t\}$; $D_r = \{D_v, D_s, D_t\}$; $P_r = \{P_v, P_s, P_t\}$; $M_r = \{M_v, M_s, M_t\}$; $N_r = \{N_v, N_s, N_t\}$;
$E = \{E_r, E_{d1}, E_{d2}\}$; $A_e = \{A_r, A_{d1}, A_{d2}\}$; $D_e = \{D_r, D_{d1}, D_{d2}\}$; $P_e = \{P_r, P_{d1}, P_{d2}\}$; $M_e = \{M_r, M_{d1}, M_{d2}\}$; $N_e = \{N_r, N_{d1}, N_{d2}\}$.

Here $R_v, R_s, R_t$ is a finite set of receptors; $A_v, A_s, A_t$ is a finite set of neural-like elements; $D_v, D_s, D_t$ is a finite set of arcs; $P_v, P_s, P_t$ is a finite set of excitation thresholds of the neural-like elements of the receptor area, belonging, for example, to the informational visual, acoustic and tactile dimensions; $N_r$ is a finite set of connectivity variables of the receptor area; $E_r, E_{d1}, E_{d2}$ is a finite set of effectors; $A_r, A_{d1}, A_{d2}$ is a finite set of neural-like elements; $D_r, D_{d1}, D_{d2}$ is a finite set of arcs of the effector area; $P_r, P_{d1}, P_{d2}$ is a finite set of excitation thresholds of the neural-like elements of the effector area, belonging, for example, to the informational speech dimension and the action dimensions; $N_e$ is a finite set of connectivity variables of the effector area.

III. THE MATHEMATICAL APPARATUS OF THE FUNCTIONAL ORGANIZATION OF THE "BRAIN" OF THE ARTIFICIAL INTELLIGENT SYSTEMS

The theory of neural-like growing networks studies binary relations defined on a set of nodes (neural-like elements) $a_1, a_2, \ldots, a_i$, where $a_i$ is a finite-dimensional Boolean vector and $(a_i, a_k)$ is a pair of these elements. The pair $(a_i, a_k)$ belongs to the relation $R$ if and only if the vector $a_i$ is related by $R$ to the element $a_k$.

The basic properties of the vector pairs are based on the conjunction operation applied to the vector components, i.e. $a_i \wedge a_k = (a(1) \wedge b(1),\ a(2) \wedge b(2),\ \ldots,\ a(n) \wedge b(n))$, where $\wedge$ denotes conjunction applied componentwise (the "vector" conjunction operation).

These basic conjunction properties of a vector pair $(a, c)$ are as follows:

1. $a \wedge c = a$.
2. $a \wedge c \ne a$.
3. $a \wedge c = c$.
4. $a \wedge c \ne c$.
5. $a \wedge c = 0$.
6. $a \wedge c \ne 0$.

Combinations of three of the basic properties of the vector pairs give eight mutually negating relations:

1. $a\,R_1\,c \Leftrightarrow (a \wedge c = a) \cap (a \wedge c = c) \cap (a \wedge c \ne 0)$.
2. $a\,R_2\,c \Leftrightarrow (a \wedge c = a) \cap (a \wedge c \ne c) \cap (a \wedge c \ne 0)$.
3. $a\,R_3\,c \Leftrightarrow (a \wedge c \ne a) \cap (a \wedge c = c) \cap (a \wedge c \ne 0)$.
4. $a\,R_4\,c \Leftrightarrow (a \wedge c \ne a) \cap (a \wedge c \ne c) \cap (a \wedge c \ne 0)$.
5. $a\,R_5\,c \Leftrightarrow (a \wedge c \ne a) \cap (a \wedge c \ne c) \cap (a \wedge c = 0)$.
6. $a\,R_6\,c \Leftrightarrow (a \wedge c = a) \cap (a \wedge c = c) \cap (a \wedge c = 0)$.
7. $a\,R_7\,c \Leftrightarrow (a \wedge c = a) \cap (a \wedge c \ne c) \cap (a \wedge c = 0)$.
8. $a\,R_8\,c \Leftrightarrow (a \wedge c \ne a) \cap (a \wedge c = c) \cap (a \wedge c = 0)$.

Here $\cap$ denotes logical AND. Obviously, relations R6, R7 and R8 are trivial, as one or both vectors in them are equal to null. Based on the analysis of the basic conjunctive properties of the vector pairs, let us introduce the following affirmation:

Affirmation 1. On the set of vector pairs $(a, a') \in A$ five basic mutually negating relations R1, R2, R3, R4, R5 can be defined.
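As an illustration of how the relations R1-R5 can be checked for two Boolean vectors, here is a small Python sketch. The componentwise conjunction follows the definition above, while the mapping of index values to meanings such as "equality" or "intersection" is an interpretive assumption.

```python
# Sketch: classifying a pair of Boolean vectors by the relations R1-R5.
# The index-to-meaning mapping is an interpretive assumption.

def classify(a, c):
    conj = [x & y for x, y in zip(a, c)]      # componentwise conjunction a ^ c
    eq_a = conj == list(a)
    eq_c = conj == list(c)
    zero = not any(conj)
    if eq_a and eq_c and not zero:
        return "R1"   # a = c (equality)
    if eq_a and not eq_c and not zero:
        return "R2"   # a is contained in c
    if not eq_a and eq_c and not zero:
        return "R3"   # c is contained in a
    if not eq_a and not eq_c and not zero:
        return "R4"   # a and c intersect
    if not eq_a and not eq_c and zero:
        return "R5"   # a and c have no common components
    return "trivial"  # R6-R8: one or both vectors are null


print(classify([1, 0, 1], [1, 0, 1]))   # -> R1
print(classify([1, 0, 0], [1, 0, 1]))   # -> R2
print(classify([1, 1, 0], [0, 1, 1]))   # -> R4
print(classify([1, 0, 0], [0, 0, 1]))   # -> R5
```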

Based on Affirmation 1, the following basic operations of construction of n-GN are defined.

Let the external information coming into the receptor field be represented by the set $W_r = \{r_{ij}\}$, $i \in I_r$, $j \in J_r$, and the excitations coming into the effector field by the set $W_e = \{d_{ij}\}$, $i \in I_e$, $j \in J_e$.

For all pairs of vectors $(a, a') \in W_r$ and $(a, a') \in W_e$, where $W_r$ is a set of vector lines of length $k$ of the receptor area and $W_e$ is a set of vector lines of length $l$ of the effector area, let us introduce the mutually negating relations $R^r_i$, $R^e_i$ for the receptor and effector areas accordingly.

For the receptor area the relations are defined by the conditions of R1-R5 applied to pairs of vectors $(a^i_j, a^i_{j+1})$ (here $\wedge$ is the vector conjunction and $\cap$ is logical AND):

$a\,R^r_1\,a' \Leftrightarrow (a^i_j \wedge a^i_{j+1} = a^i_j) \cap (a^i_j \wedge a^i_{j+1} = a^i_{j+1}) \cap (a^i_j \wedge a^i_{j+1} \ne 0)$;
$a\,R^r_2\,a' \Leftrightarrow (a^i_j \wedge a^i_{j+1} = a^i_j) \cap (a^i_j \wedge a^i_{j+1} \ne a^i_{j+1}) \cap (a^i_j \wedge a^i_{j+1} \ne 0)$;
$a\,R^r_3\,a' \Leftrightarrow (a^i_j \wedge a^i_{j+1} \ne a^i_j) \cap (a^i_j \wedge a^i_{j+1} = a^i_{j+1}) \cap (a^i_j \wedge a^i_{j+1} \ne 0)$;
$a\,R^r_4\,a' \Leftrightarrow (a^i_j \wedge a^i_{j+1} \ne a^i_j) \cap (a^i_j \wedge a^i_{j+1} \ne a^i_{j+1}) \cap (a^i_j \wedge a^i_{j+1} \ne 0)$;
$a\,R^r_5\,a' \Leftrightarrow (a^i_j \wedge a^i_{j+1} \ne a^i_j) \cap (a^i_j \wedge a^i_{j+1} \ne a^i_{j+1}) \cap (a^i_j \wedge a^i_{j+1} = 0)$.

The relations $R^e_1, R^e_2, R^e_3, R^e_4, R^e_5$ for the effector area are defined in the same way. These relations are denoted as $a\,R^r_i\,a'$ and $a\,R^e_i\,a'$.

A. Let there be sets of vectors $a^{ri}_1, a^{ri}_2, a^{ri}_3, \ldots, a^{ri}_k$ and $a^{ei}_1, a^{ei}_2, a^{ei}_3, \ldots, a^{ei}_k$ for the receptor and effector areas accordingly.

B. Let us check by which of the relations $R^r_1, R^r_2, R^r_3, R^r_4, R^r_5$ the pairs of vectors $(a, a')$ of the receptor area, $(a^{ri}_1, a^{ri}_k), (a^{ri}_2, a^{ri}_k), (a^{ri}_3, a^{ri}_k), \ldots, (a^{ri}_{k-1}, a^{ri}_k)$, are related and, simultaneously, by which of the relations $R^e_1, R^e_2, R^e_3, R^e_4, R^e_5$ the pairs of vectors $(a, a')$ of the effector area, $(a^{ei}_1, a^{ei}_k), (a^{ei}_2, a^{ei}_k), (a^{ei}_3, a^{ei}_k), \ldots, (a^{ei}_{k-1}, a^{ei}_k)$, are related; here $k$ runs from 2 to $g_k$, where $g$ is the number of new vectors.

Receptor area. If the pair of vectors $(a^{ri}_1, a^{ri}_k)$ is related by $R^r_1$, $R^r_2$, $R^r_3$, $R^r_4$ or $R^r_5$, the corresponding operation $Q^{rj}_1$, $Q^{rj}_2$, $Q^{rj}_3$, $Q^{rj}_4$ or $Q^{rj}_5$ is performed.

The operations $Q^{rj}_1(a, a'), \ldots, Q^{rj}_5(a, a')$ transform the network according to the relation that holds for the pair $(a^{ri}_1, a^{ri}_k)$: they form new nodes $a_c$ and new connections, assign the connection weights $m$ and $b$, and set the excitation thresholds of the affected nodes as functions of the weights of their incoming connections, $P_{a_i} := f(m_{a_i})$.

The operations $Q^{r1}_1$, $Q^{r1}_2$, $Q^{r1}_3$, $Q^{r1}_4$ or $Q^{r1}_5$ are valid if $h_r \ge n$; otherwise, depending on whether $a^{ri}_1$ coincides with $a^{ri}_k$, the vectors $a^{ri}_1$, $a^{ri}_k$, $a^{ri}_{k+1}$ are reassigned, the connection weights are set ($m := b$), and the excitation thresholds are set as functions of the weights, $P_{a_i} := f(m) \ge P_0$.

Here $k = 1$ if operation $Q^{rj}_1$ was performed; $k = 2$ if operation $Q^{rj}_2$, $Q^{rj}_4$ or $Q^{rj}_5$ was performed; $k = 3$ if operation $Q^{rj}_3$ was performed.

If the pair of vectors $(a^{ri}_2, a^{ri}_k)$ is related by $R^r_1$, $R^r_2$, $R^r_3$, $R^r_4$ or $R^r_5$, the operations $Q^{rj}_1$, $Q^{rj}_2$, $Q^{rj}_3$, $Q^{rj}_4$ or $Q^{rj}_5$ are performed.


Furthermore, if the pair of vectors $(a^{ri}_3, a^{ri}_k)$ is related by $R^r_1$, $R^r_2$, $R^r_3$, $R^r_4$ or $R^r_5$, the operations $Q^{rj}_1$, $Q^{rj}_2$, $Q^{rj}_3$, $Q^{rj}_4$ or $Q^{rj}_5$ are performed, and so on, until the set of pairs $(a^{ri}_1, a^{ri}_k), (a^{ri}_2, a^{ri}_k), (a^{ri}_3, a^{ri}_k), \ldots, (a^{ri}_{k-1}, a^{ri}_k)$ is exhausted.

These operations are denoted as $Q^{ri}(a, a')$.

Effector area. The operations are carried out similarly to the receptor area.

So, descriptions of concepts, objects, conditions or situations and of the connections between them, indicating the mutual dependence of their informational representations, i.e. the sensory system, modulatory system, conditioned and unconditioned reflexes, the reflex ring, temporary and long-term memory, are all developed in the receptor area of the multi-layer ren-GN. The effector area, in its turn, generates action sequences and produces signals operating the executive mechanisms, i.e. the motivational system of purposeful behavior and the motor system. Parallel functioning of these systems allows an artificial intelligent system to perceive, analyze, remember and synthesize information, to learn and to perform purposeful actions [5-7].

IV. THE FUNCTIONAL ORGANIZATION OF THE "BRAIN" OF THE ARTIFICIAL INTELLIGENT SYSTEMS

The "brain" of the system of artificial intelligence consists of a set of neural-like interconnected elements. Interacting with each other, neural-like elements establish controlling signals, which regulate the cognitive and reflective activity of the whole system.

A. Sensory system

Function of Perception - information from the outside world travels to the receptor area and activates the receptors, which in their turn activate the neural-like elements of different levels of information processing: unconditioned reflex levels (primary automatism systems), conditioned reflex levels (secondary automatism systems), and the classification, generalization and memorization levels. In formal terms, we define the relations and perform the operations $a\,R^r_i\,a' \to Q^{ri}(a, a')$.

Unconditioned reflexes are set when the system is created.

$a\,R^r_i\,a' \to Q^{ri}(a, a') \to Q^{ei}(a^i, a^i_k) \to a\,R^e_i\,a'$

Conditioned reflexes are acquired in the process of functioning of the system. They develop as a reaction to an irritant "specific" to each of them, thereby providing for the orderly execution of the most important system functions irrespective of the arbitrarily changing environment.

$a\,R^r_i\,a' \to Q^{ri}(a^i, a^i_k) \to e$ (UR).

Conditioned reflexes are acquired in the process of functioning of the system. Started by an indifferent irritant, excitation arises in the corresponding receptors, and impulses travel to the sensory system. Under the influence of an unconditioned irritant, a specific excitation of the corresponding receptors arises. Thus two centers of excitation emerge simultaneously, and a temporary reflex connection appears between them. Once the temporary connection has appeared, an isolated action of the conditioned irritant produces the unconditioned reaction.
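A toy sketch of the conditioned-reflex mechanism described above: when the conditioned and unconditioned centers of excitation are active at the same time, a temporary connection is strengthened until the conditioned stimulus alone triggers the reaction. The pairing count and the strength threshold are illustrative assumptions.

```python
# Toy sketch of conditioned-reflex formation (two centers of excitation
# forming a temporary connection). Thresholds and counts are assumptions.

class ConditionedReflex:
    def __init__(self, threshold=3):
        self.connection_strength = 0   # temporary connection between the centers
        self.threshold = threshold

    def pair(self, conditioned, unconditioned):
        """Simultaneous excitation of both centers strengthens the connection."""
        if conditioned and unconditioned:
            self.connection_strength += 1

    def react(self, conditioned, unconditioned):
        """The unconditioned irritant always produces the reaction; the
        conditioned one does so alone only after the connection is formed."""
        if unconditioned:
            return True
        return conditioned and self.connection_strength >= self.threshold


cr = ConditionedReflex()
print(cr.react(conditioned=True, unconditioned=False))   # -> False (not yet learned)
for _ in range(3):
    cr.pair(conditioned=True, unconditioned=True)        # repeated pairing
print(cr.react(conditioned=True, unconditioned=False))   # -> True (conditioned reflex)
```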

$t_{11}: a\,R^r_i\,a' \to Q^{ri}(a, a')$; $t_{12}: a\,R^r_i\,a' \to Q^{ri}(a^i, a^i_k) \to e$; $\ldots$; $t_{n1}: a\,R^r_i\,a' \to Q^{ri}(a^i_n, a^i_k)$; $t_{n2}: a\,R^r_i\,a' \to Q^{ri}(a^i_n, a^i_k) \to e$.

Development of the conditioned reflex

$t_{11}: a\,R^r_i\,a' \to Q^{ri}(a^i, a^i_k) \to e$; $t_{12}: a\,R^r_i\,a' \to Q^{ri}(a, a')$ (CR, conditioned reflex).

Conditioned reflexes are a universal adaptation mechanism allowing for flexible behavior patterns.

Primary automatisms (AU1) are unconditioned reflexes: $UR_{t_1}, UR_{t_2}, \ldots, UR_{t_n} \to AU1$.

Secondary automatisms (AU2) are established conditioned reflexes: $CR_{t_1}, CR_{t_2}, \ldots, CR_{t_n} \to AU2$.

B. Modulatory system

The modulatory system regulates the level of excitation of the neural-like elements and performs selective modulation of a particular function. The initial source of activation is the priority of the inner activity of the subsystems of the main system; it is embedded at the creation of the system, similarly to the unconditioned reflexes. Any deviation from vitally important system values leads to the activation (modification of the excitation threshold) of certain subsystems and processes. A second source of activation is related to environmental irritants. The priority of a certain activity is determined in the process of the "life cycle", similarly to the development of conditioned reflexes.

Motivation is a mechanism which contributes to the satisfaction of needs: it connects the memory of a certain object or state (for example, lack of energy) with the action satisfying this need (search for energy). On this basis, purposeful behavior is developed, which consists of three blocks: search for a goal, interaction with the identified goal, and rest after achieving the goal.

Purposeful behavior: motivational goal setting (excitation); actions directed at the search for an algorithm for solving the target task; achievement of the goal (release of excitation).

C. Motor system

The fusion of excitations of different intensity with significant signals and motivational influences is characteristic of the motor


system. It is distinctive of it to completely transform the afferent influences into a qualitatively new form of activity, directed toward the fastest output of efferent excitations to the periphery, i.e. to the neuron chains at the final stage of behavior formation.

The motor system consists entirely of ensembles (chains) of neurons of the efferent (motor) type and is exposed to a constant flow of information from the afferent (sensory) area. Unlike in the afferent area, in the area that launches and controls behavioral acts the activation processes flow in the top-down direction, starting at the highest levels. The chains of command neurons (motor programs) created at the highest levels then move to the neural chains of the lower motor levels and to the motor neurons, the effectors of the motor efferent impulse areas.

Function of Action - information comes out of the effector area and, through the effectors and the motor area, affects the environment: $M^1 \to e$.

Function of Motion - a sequence of actions (M), discovered accidentally (a child has learned to walk by himself) or with the help of a teacher (a child has learned to walk with the help of his parents): $M^1_{t_1} \to M^1_{t_2} \to M^1_{t_3} \to \ldots \to M^1_{t_n} \to M$.

Psychic function or behavioral act - a sequence of automatisms carried out in a system functioning according to the reflex principle, in which the central and receptor-effector (peripheral) areas are interrelated and whose joint activity produces an integral reaction. The system has a multi-layer structure, where each level, from the receptor to the effector formations, makes a "specific" contribution to the "nervous" activity of the system: $AU1_{t_1} \to AU2_{t_2} \to AU2_{t_3} \to \ldots \to AU_{t_n}$.

Function of Thought is an ensemble of excited neural-like elements at the subconscious level (intrinsic model of the outside or abstract world, strengthened by the motivational function at a given moment without exit to the outside world).

Function of Reflection is a sequential interaction of ensembles of excited neural-like elements at the subconscious level (intrinsic models), regulated by the excitation levels of the neural-like elements, strengthened or weakened by the motivational function. Information circulates in a closed circuit: sensory area, information processing levels (analysis, classification, generalization, memorization), motor area, sensory area, without exit to the external environment.

$a\,R^r_i\,a' \to Q^{ri}(a^i, a^i_k) \to e$.

To think, to reflect, is to realize. In this sense, "mental uttering", i.e. cycles of transferring the internal active information to the system's input, can be viewed as a model of the artificial consciousness of an intelligent computer, while cycles of transferring the internal active information to the input of the system without turning on the "utterance" can be regarded as a model of the artificial subconscious.

Function of Consciousness is the propagation of excitation through the active ensembles of neural-like elements (intrinsic models of the outside world), strengthened by the motivational function and reflecting the most important connections in the "subject - environment" system.

Function of Subconscious is the propagation of excitation through the active ensembles of neural-like elements (intrinsic models of the outside world), weakened by the motivational function. It prepares the models for realization, recognizes acquired images and executes habitual motions.

Function of Unconscious Reaction – external information at the subconscious level produces a feedback to the outside world (unconditioned and conditioned reflexes, routine actions, secondary automatisms).

Function of Conscious Reaction – external information at the conscious level produces a feedback to the outside world (conscious actions at the stage of developing conditioned reflexes and secondary automatisms).

Function of Intuition – searching for new information, developing hypotheses and analogies, establishing temporary new connections, activating new ensembles of neural-like elements and producing out of them new combinations, which automatically appear in the subconscious, the most active of them later coming through to the conscious area [15 - 19].

Function of Imitation - observing the actions of other objects (such as a child watching his parents), the subject internally repeats their actions by micro-movements (a subtle excitation arises in the ensembles of the neural-like elements involved in performing these actions). Further on, by multiple repetition of this sequence in play, the subject learns it (repetition leads to the growth of the excitation thresholds of the neural-like elements), which gives rise to behavioral stereotypes.

D. Individuality of the system

The individual distinctions of the system are revealed through its activity and behavioral functions, and are conditioned by the constructive nature of its organization, as well as by its "life" experience, gained in the process of training and functioning [20].

V. PRACTICE

A simplified virtual artificial robotic personality was produced in the project "VITROM".

A model of technical vision was implemented (Fig. 3), as well as recognition of different objects (Fig. 4) and recognition of a route through the city streets with movement along the given route (Fig. 5) [17]. The project was demonstrated at the CeBIT exhibition in Hanover in 2000-2002. A model of thinking was implemented in the intellectual system "Dialogue" (2005).


The "Dialogue" system implements perception, analysis and synthesis of information, as well as thinking, logical deduction, etc.

The system was demonstrated at the international conference "Knowledge-Dialog-Solution 2007" in Varna and at the international sci-tech multi-conference "Contemporary problems of computer information technology, mechatronics and robotics engineering 2009" in Russia [8-22].

REFERENCES

[1] A.I. Shevchenko, Contemporary Problems of Artificial Intelligence. Kiev: IAIP "Science and Education", 2003. 228 p.
[2] Nervous System. Access mode: galactic.org.ua.
[3] E.N. Sokolov, The principle of vector coding in psychophysiology // Moscow University Messenger. Series 14: Psychology. 1995. No. 4. P. 3-13.
[4] A.R. Luria, Neuropsychology Basics. Moscow, 1973. 173 p.
[5] V.A. Yashchenko, Receptor-effector neural-like growing networks - an effective tool for modeling intelligence. I // Cybernetics and Systems Analysis. 1995. No. 4. P. 54-62.
[6] V.A. Yashchenko, Receptor-effector neural-like growing networks - an effective tool for modeling intelligence. II // Cybernetics and Systems Analysis. 1995. No. 5. P. 94-102.
[7] V.A. Yashchenko, Receptor-effector neural-like growing network - an efficient tool for building intelligence systems // Proc. of the Second International Conference on Information Fusion (California, July 6-8, 1999). Sunnyvale Hilton Inn, Sunnyvale, California, USA, 1999. Vol. II. P. 1113-1118.
[8] V.A. Yashchenko, Secondary automatisms of intelligent systems // Artificial Intelligence. 2005. No. 3. P. 432-447.
[9] V.A. Yashchenko, A.I. Shevchenko, Can a computer think? // Artificial Intelligence. 2005. No. 4. P. 476-489.
[10] V.A. Yashchenko, Thinking computers // Mathematical Machines and Systems. 2006. No. 1. P. 49-59.
[11] V.A. Yashchenko, A.I. Shevchenko, From artificial intelligence to artificial personality // Artificial Intelligence. 2009. No. 3. P. 492-505.
[12] V.A. Yashchenko, Some aspects of the "nervous activity" of intelligent systems and robots // Artificial Intelligence. 2009. No. 4. P. 504-511.
[13] V.A. Yashchenko, A.I. Shevchenko, Aspects of development of artificial personality // Intern. Sci.-Tech. Multi-conf. "Contemporary problems of computer information technology, mechatronics and robotics - 2009" (CITMR-2009) (Divnomorsk, Russia, 28 Sep - 3 Oct 2009). Divnomorsk, Russia, 2009. Report theses. P. 10-17.
[14] V.A. Yashchenko, Some aspects of the "nervous activity" of intelligent systems and robots // Intern. Sci.-Tech. Multi-conf. "Contemporary problems of computer information technology, mechatronics and robotics - 2009" (CITMR-2009).
[15] V.A. Yashchenko, From multidimensional receptor-effector neural-like growing networks to the electronic brain of robots // Mathematical Machines and Systems. 2013. No. 4. P. 14-19 (in Russian).
[16] V.A. Yashchenko, Some aspects of the "nervous activity" of intelligent systems and robots // Intern. Sci.-Tech. Multi-conf. "Actual problems of information and computer technologies, mechatronics and robotics - 2009" (ICTMR-2009) (Divnomorsk, Russia, 28 Sep - 3 Oct 2009) (in Russian).
[17] V.A. Yashchenko, On the perception and recognition of images in artificial intelligence systems // Mathematical Machines and Systems. 2012. No. 1. P. 16-27 (in Russian).
[18] V.A. Yashchenko, Thinking computers // International Conference KDS 2007, Knowledge - Dialogue - Solution, Varna, Bulgaria, June 18-24, 2007. P. 673-678 (in Russian).
[19] V.A. Yashchenko, A.I. Shevchenko, Can a computer think? // Intern. Sci.-Tech. Conf. "Intelligent and Multiprocessor Systems - 2005" (IMS-2005). Report theses. Divnomorsk, Russia, 26 Sep - 1 Oct 2005 (in Russian).
[20] V.A. Yashchenko, A.I. Shevchenko, From artificial intelligence to artificial personality // Artificial Intelligence. 2009. No. 3. P. 492-505 (in Russian).
[21] A.A. Morozov, V.A. Yashchenko, Situation Centers: Information Technologies of the Future. Kiev: SP "Intertekhnodruk", 2008. 332 p. (in Russian).
[22] V.A. Yashchenko, Artificial Intelligence. Theory. Modeling. Application. Kiev: Logos, 2013. 289 p. (in Russian).

Fig. 3. Model of technical vision

Fig. 4. Detection in real time

Fig. 5. Traffic on the designated route