Distributed Computing and Memory Within Parallel Architecture of Cellular Type

Neven Dragojlovic

EYEYE Sarl, 9 Rue El Farabi, Tangier 90000, Morocco [email protected]

Abstract

Intelligent distributed computing needs to use nodes (agents) capable of independent computation, and capable of receiving and sending information to other nodes. Such nodes need to use a common language (here binary-code 2D patterns) and be housed in an architecture that permits each node to remember all of its potential states and all possible states of its environment, and to use heuristics embedded in its local program to reach decisions independently.

The “architecture” can be distributed between physically separated nodes interacting via Wi-Fi or similar means, and a variety of inputs can be translated into the common language. This article suggests how memory associations at each level, and at each node, can function together to create an evolving meaning and classification system. (Based on US 7426500 and US 864598, and on articles by the author listed in the bibliography.)

Keywords: parallel computer architecture, distributed memory, multi-agent system, asynchronous parallel computation, binary cellular system, small-world system, complex nonlinear system.

1. Introduction

At some point or another, everyone has experienced staring at a mottled surface without any thought in mind, and found that their visual system started forming patterns like faces, animals or objects out of that mottled background. This spontaneous organization of input into recognizable molds starts with just a few patches of the visual field that seem to activate some already present archetype in the memory structure. Such parallel processing occurs in interconnected units, where each unit has to decide independently whether or not it fits in the developing pattern, according to its locally stored memory and network position. In order to be functional, such a system must be able to take in any possible input and memorize it in a node-based distributed memory [6]. It also needs to be a closed system with self-referencing [2], have a closed phase-space at each calculating node [10], and have local and long-distance connections. In other words, a small-world system made out of simple intelligent agents following simple rules [9], just like ants in an anthill.

2. A quick outline of the system

This system is modular, but the limitations of space and its inherent complexity permit me to outline only one module. It should be kept in mind that the system was made for a multi-modular complex system with no limitations on the size of the input fields. Dissatisfaction with neuromorphic machine technology, which does not show how our brain treats information, prompted me to search for a system that is capable of forming analogies and “meaning”. A system capable of continuous creation and integration of meaningful networks is what this article attempts to present. Such a system is at the basis of thought [1, 3].

The essential idea behind this system can be compared to a puzzle in which the puzzle determines the shape of its pieces. The puzzle pieces can be considered equivalent to nodes. Each node has an architectural position (its place in the puzzle) and a shape (determined by surrounding node activity, local memory, and local software activity). The position is a definite place in the parallel computer’s architecture, with a unique address on a given level, and fixed connections to surrounding nodes on the same level and on higher and lower levels [8]. The shape is represented by the node’s state. Each state depends on the node’s activation from higher, lower, and the same level (surrounding nodes), as well as on acquired memory and the activity of the node’s software (inner activation). Each node runs its software independently and asynchronously, and communicates with surrounding nodes and levels in a handshake manner.

Each node can be part of many different puzzles represented by different node states, and the same state can also be part of multiple puzzles. Each node can represent its complete local phase space in local non-volatile memory, where a memory-space-address matrix represents each state. The memory can be stored, retrieved, searched and modified according to a simple set of rules contained in the local software, and executed by a local CPU or GPU. All nodes at the same level execute the same software. Software updates can use diffusion or percolation systems to spread throughout the system. The activity of the local software depends on inputs from surrounding nodes and levels; it is usually asynchronous with the activity of surrounding nodes and levels, but with feedback it can become synchronous. In general, such a system can produce swarm intelligence [2].
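As a minimal sketch of this node-level loop (the paper prescribes no concrete API, so the class and method names here are illustrative assumptions), each node can be modeled as an agent that combines neighbour states with its local memory to pick its next state, and exchanges patterns in a handshake manner:

```python
# Minimal sketch of one node (agent); names and the memory heuristic are
# illustrative assumptions, not the paper's specification.
class Node:
    def __init__(self, address, level):
        self.address = address   # unique address on a given level
        self.level = level
        self.memory = {}         # local non-volatile memory: context -> state
        self.state = 0           # current state (a binary pattern, as an int)

    def step(self, same_level, higher, lower):
        """One asynchronous update: combine same-level, higher- and
        lower-level activation with acquired memory to choose a state."""
        context = (tuple(same_level), higher, lower)
        if context not in self.memory:           # new context: memorize it
            self.memory[context] = len(self.memory) % 64
        self.state = self.memory[context]
        return self.state

    def handshake(self, other, pattern):
        """Send a 2D binary pattern (as an int) and wait for the reply."""
        return other.receive(self.address, pattern)

    def receive(self, sender, pattern):
        return ("ack", sender, pattern)
```

Each node would run this loop on its own clock; synchrony, when it appears, emerges from feedback rather than from a global scheduler.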

A set of nodes triggered by the same input forms a network that is memorized and represents “archetypal spaces” (information envelopes with specific expectations from the input). Two simultaneous networks are formed from the same input. One network represents the unified “shape”, and the other represents the unified “surround” or “context”, a place in which the shape exists. The same shape-network can co-exist with different surround-networks (e.g., the same word found in different sentences), thus acquiring different meanings.

When an input activates a node, the node memory may have several networks associated with that node state, in which case analogies can be made between those networks, or the networks can compete for the greatest fitness in the puzzle according to the node’s software. This leads to non-deterministic functioning, with the advantage of being able to use incomplete input, or a variation of the input, for recall, recognition, or classification of the input pattern. If parts of various networks are triggered by the same input, a new unified network can be formed, thus creating a new “perception”, “insight”, or “thought”.

Each module of this system has an input layer and an input/output layer that interacts with other modules. The input to another module is an analyzed and abstracted form of the original input that can accommodate variations and incompleteness, in other words, a “meaning” of the original input. An input from another module serves as a “search focus” or goal.

2.1. Input Layer

The honeycomb input layer is composed of Processing Cell (PC) tiles, each indexed by level and address. A tile is a roughly hexagonal structure consisting of 7 hexagonal elements: one central hexagon (H) that connects exclusively with the 6 peripheral hexagons (h, indexed by binary position), and with the overlapping upper and lower honeycomb layers tiled by the same type of PCs (larger or smaller in size and slightly skewed, so that each hexagonal unit in the upper layer overlaps exactly with a PC in the layer immediately below it). In what follows, the level is written after the letter and the binary position after an underscore, so that h0_32 is the level-0 peripheral hexagon at binary position 32. Each PC is identified by its central hexagon address (for example, PC000 has central hexagon H000). The value (0 or 1, corresponding to the absence or presence of a given type of input) of each peripheral hexagon (h_1 – h_32) is recorded at a specific binary place in the central hexagon H, based on its position in the tile [7] (Figure 1).

Fig. 1 (a) represents a PC; (b) represents an example of activation that results in binary form 111010, or decimal form 58.

PCs are grouped into Patches of 7 PCs that output their activation to a hidden-layer PC, and to the system’s Memory Unit (MU).

Fig. 2 (a) represents the example Patch, where PC output 23 represents position 1 on the hidden layer, 30 represents position 2, and so on. Each of these PC outputs is sent to all Dedicated Cells (DC) in the MU; (b) the PC1 of the hidden layer overlaps another Patch on the right, where black numbers indicate the positions of PC0s from the input layer. H1 would calculate the Active Patch Number for the example Patch, APN = (1+2+8+16+32+64) = 123, representing all the active PCs in the Patch.
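The binary bookkeeping behind Figures 1 and 2 is plain bit-packing; the short sketch below (function names are mine) reproduces the two example values, 58 for the PC output and 123 for the Active Patch Number:

```python
# Sketch of the PC and Patch encodings of Figs. 1-2; names are illustrative.
def pc_output(active_positions):
    """Pack peripheral-hexagon activity (binary positions 1,2,4,8,16,32)
    into the central hexagon H as one number."""
    return sum(active_positions)

def active_patch_number(active_pc_positions):
    """APN: pack the active PCs of a Patch (positions 1,2,4,8,16,32,64)."""
    return sum(active_pc_positions)

print(pc_output([2, 8, 16, 32]))                   # 58 = 111010 (Fig. 1 (b))
print(active_patch_number([1, 2, 8, 16, 32, 64]))  # 123 (Fig. 2 (b))
```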

2.2. Memory Unit

The 3D structure of the MU is made of 8 interconnected truncated octahedra storing 64 primitives (possible patterns), represented by hexagonal DCs (Figure 3). Each edge of a DC also represents a binary position direction, and connects through it so that the DC on one side of the edge has a 1 and the DC on the other a 0. The whole set of 64 DCs forms one MU, whose input comes from one Patch.

Fig. 3 represents two MUs that fit together at their edges. The same goes for MUs that would expand the MU-complex into 3D.

The MU is a lattice starting with an empty set (DC 0) and ending with a top set (DC 63) that contains all the primitive patterns. The Patch outputs activate DCs, which in turn stimulate DCs connected to their binary 0s and inhibit DCs connected to their binary 1s. The MU structure also includes octagonal nodes of two types, Inner Nodes (IN) and Outer Nodes (ON), that are activated by DCs (Figure 4 (a)). Each active DC (either activated directly from the Patch, or by stimulation or inhibition from activated DCs) sends output to one IN and one ON node. The DC output goes to the octagonal node’s binary position (one for each hexagonal surface). That permits nodes to calculate their DC Activation (DCA) number, in other words, which DC pattern is present in the input. All DCs that activate a given node have common characteristics in 3 out of 6 binary places, so that each node represents a hash bin where similar DCs are stored. Furthermore, each IN or ON node interacts with 6 node neighbours, where each neighbouring node is located in a specific binary direction, as shown in Figure 4 (b).

Fig. 4 (a) shows INs in blue, and ONs in yellow. Both INs and ONs are of 8 different types, labeled from 000 to 111, following the xyz 3D scheme. Each group of DCs activating a given node has some binaries that are absent or present, an equivalent of a hash code for some missing and some present binaries; (b) represents connections between IN nodes, here the connections of IN 000 with IN 100 in binary directions 4 and 32; IN 010 in binary directions 2 and 16; and IN 001 in binary directions 1 and 8.
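Reading the stimulate/inhibit rule as acting across the one-bit edges of the 6-bit DC lattice (which is how the worked example in section 3 behaves), a sketch with illustrative names:

```python
# Sketch of DC-to-DC propagation in the MU lattice; the one-bit-neighbour
# reading and the function names are my assumptions from the text.
DIRECTIONS = (1, 2, 4, 8, 16, 32)

def propagate(active_dcs):
    """Net stimulation produced by active DCs: across each edge, the DC
    holding 0 stimulates (+1) the DC holding 1 there, and the DC holding 1
    inhibits (-1) the DC holding 0."""
    net = {}
    for dc in active_dcs:
        for d in DIRECTIONS:
            neighbour = dc ^ d                 # differs in exactly one bit
            delta = -1 if dc & d else 1
            net[neighbour] = net.get(neighbour, 0) + delta
    return net
```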

When the input from the example Patch (Figure 2 (a)) is run through in simulation, the outcome is one IN network, one ON network (both determining “form”, see Figure 5 (a)), and the rest of the ON nodes forming a “surround” network (Figure 5 (b)).

Fig. 5 (a) shows on the left the IN node network, and on the right the corresponding ON network for the example Patch shown in this article. The Node Activation (NA) numbers show the sum of directions in which the node made connections. The Connection Number (CN) shows the strength of the bond between two adjacent nodes. DCA shows which DCs activated the node; (b) shows how the ONs that start fully connected (111111) get inhibited by activated ONs (part of the “form”), and then join disparate parts of the “surround”, leaving empty spaces for the “form”. The surround network calculates a Network Number (NN), which is then used to trigger the whole network during search, recall or recognition procedures. Node networks tie the whole input field together.

2.3. Memory associations at node level

Each node has 6 neighbours, where the direction of those nodes is identified with a binary position number (Figure 4 (b)). The central node represents active surrounding nodes by a Node Activation (NA0) number, equivalent to a 6-digit binary number. NA0 can thus represent all possible activation states of surrounding nodes. For example, if IN 001 in binary direction 8, IN 001 in binary direction 1, and IN 100 in binary direction 32 were active, NA for IN 000 would be 41. Out of those 64 possibilities, each surrounding node can be actively included only 32 times in NA0, the times that its binary direction is included in NA0. To clarify, let us assume that NA0 includes binary direction 4, represented by W4. Then NA0 values could only be (4-7, 12-15, 20-23, 28-31, 36-39, 44-47, 52-55, 60-63). Simultaneously including W1 as well can occur only when NA0 = (5, 7, 13, 15, 21, 23, 29, 31, 37, 39, 45, 47, 53, 55, 61, 63), that is, in 16 possible cases.

The total possible number of surround activations is 64 + 32^6 = 1,073,741,888. All of these states can be represented in a 3D matrix Ni,j,k = (NA0, (W1 – W32), WT), where WT = ∑(W1 – W32). W1 – W32 represent the NAs of the surrounding nodes (W1 = NA1, W2 = NA2, and so on; Figure 6 (a)).

Fig. 6 (a) represents Ni,j,k matrix that contains total phase space for a given node, and is kept in node’s non-volatile memory; (b) also represents Ni,j,k matrix from different perspective. Each NA0/WT pair position records ID for all (W1 – W32) cases, thus acting as a hash bin on a more abstract level.

For each NA0/WT there are many possible rearrangements among (W1 – W32) (Figure 6 (b)). For example, for NA0 = 5, W4 can have 32 possibilities, and W1 can have 16, therefore the total number of possibilities for NA0 = 5 is 32*16 = 512. Nodes in the W1 and W4 positions can represent different patterns. For example, if WT = 51, the following W1/W4 pairs would be possible (5/46, 46/5, 13/38, 38/13, 21/30, 30/21, 29/22, 22/29, 37/14, 14/37, 45/6, 6/45), a total of 12 pairs. Each one of those pairs would represent a different set of patterns. When more active binary directions make up NA0, the number of states for a given NA0 increases (if NA0 = 7, W2 would also be active, therefore the number of possible states would be 32*16*8 = 4096).
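These counts can be checked mechanically. A short sketch (variable names are mine) verifies the 32 NA0 values that include W4, the 16 that include both W1 and W4, the total phase-space size 64 + 32^6, and that the 12 listed W1/W4 pairs all sum to WT = 51:

```python
# Sketch verifying the NA0 combinatorics of section 2.3; names are mine.
with_w4 = [n for n in range(64) if n & 4]       # NA0 values including W4
with_w1_w4 = [n for n in with_w4 if n & 1]      # ...including W1 as well
print(len(with_w4), len(with_w1_w4))            # 32 16

print(64 + 32**6)                               # 1073741888 surround states

pairs = [(5, 46), (46, 5), (13, 38), (38, 13), (21, 30), (30, 21),
         (29, 22), (22, 29), (37, 14), (14, 37), (45, 6), (6, 45)]
assert all(w1 + w4 == 51 for w1, w4 in pairs)   # the 12 pairs for WT = 51
```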

An activated node can have 256 activation states represented by a DCA formed from 8 hexagonal positions and their binary equivalents. For example, if a node is triggered in its binary places 1, 2, 16, 32, and 64, its DCA = 01110011 = 115. Combining the node’s input from surrounding nodes (ID) and the input received by DCs (DCA) produces a memory matrix Mi,j = (DCA, ID), where each address represents a given state of the node, U = ∑(DCA + ID) (Figure 7), which can distinguish between identical IDs by checking which DCA was activated in the NA0 node. Identical Us can also be formed, but as that implies a change in ID, a unique state of each node can be distinguished. This system can search for similar states, and form analogies between them, using U, ID, and heuristics included in the node’s software.
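A compact sketch of this node-level bookkeeping (names are mine, and the exact layout of Mi,j is simplified to a dictionary):

```python
# Sketch of the node memory matrix M(i,j) = (DCA, ID); names are mine.
def dca(triggered_places):
    """8-bit DC Activation number packed from triggered binary places."""
    return sum(triggered_places)

memory = {}
def remember(dca_value, node_id):
    """Record a node state; U = DCA + ID distinguishes identical IDs."""
    u = dca_value + node_id
    memory[(dca_value, node_id)] = u
    return u

print(dca([1, 2, 16, 32, 64]))   # 115 = 01110011, the example above
```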

Fig. 7 (a) shows a matrix that combines information from the node network (ID) and the internal activation of the node by DCs (DCA); (b) shows that a DC (a puzzle piece) can be part of many different networks (puzzles).

2.4. Memory associations at DC level

Activated nodes return feedback to DCs from the IN node (UI) and the ON node (UO), once the IN and ON networks have been stabilized. DCs form a memory matrix Di,j = (UI, UO), where each matrix address is identified by an identity number (IDN), with IDN = UI + UO (Figure 7 (b)). When a DC receives UI/UO feedback, it inscribes the IDN into the address. Unused addresses of the matrix Di,j have a value of 0, following sparse coding.

DCs also receive information from the Patch, where a Patch Position Number (PPN) identifies where in the Patch a given DC pattern was found when the node networks were in a given state (IDN). The DC keeps the memory of PPN and IDN in a memory matrix DPi,j = (PPN, IDN) (Figure 8 (b)). Each DC can thus belong to a great variety of IN/ON networks and input patterns, without losing precision. Each address of the DPi,j matrix contains a Strength number (SN2) that starts with value 0, adds +1 when there is positive feedback from the networks, and subtracts 1 when feedback from the Patch is negative.

Fig. 8 (a) shows on the left the Patch position numbers on the hidden layer that make up PPN. For example, the PPN for DC 22, found in binary positions 1 and 16, is equal to 17; (b) shows the DPi,j = (PPN, IDN) matrix, which can store all possible states of a given DC.

DCs also interact with each other by receiving stimulation (+1) in their binary 1 positions, and inhibition (-1) in their binary 0 positions. For example, DC 27 (011011) can receive stimulation in binary position 16 (011011) from DC 11 (001011), and vice versa, DC 11 receives inhibition from DC 27 in the same binary position. The DC Activation Number (AN) is equal to the number received when the DC is activated from the Patch (equal to the number of binary positions in the DC that are 0), plus the stimulation/inhibition numbers from other DCs. In the previous example, DC 27 (011011) receives +2 when activated from the Patch, and DC 11 (001011) receives +3. When a DC activates IN/ON nodes, it sends its AN to the nodes’ corresponding binary positions. When a DC gives feedback to the Patch, it sends its DC number followed by its AN to the PPN binary positions.
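A sketch of the base AN computation for Patch-activated DCs (the function name is mine), reproducing the +2 and +3 of the example:

```python
# Sketch of the base DC Activation Number of section 2.4; names are mine.
def patch_activation(dc):
    """A DC activated from the Patch receives +1 for each of its six
    binary positions that is 0; stimulation/inhibition from other DCs
    is then added on top of this base value."""
    return sum(1 for d in (1, 2, 4, 8, 16, 32) if not dc & d)

print(patch_activation(0b011011))   # DC 27 -> 2
print(patch_activation(0b001011))   # DC 11 -> 3
```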

2.5. Memory associations at Patch level

Each Patch position can receive feedback from several DCs, which are placed in a buffer according to AN values. Patch positions (H1 and h1_1 – h1_32) memorize in a matrix Pi,j = (APN, DC). PC0 outputs are equal to DCs, and the two can be used interchangeably. The APN is formed by H1 from the active Patch position binary numbers, and shared with the other Patch positions (h1_1 – h1_32). Previously memorized Pi,j matrix positions (chosen at random) are shown in black in Figure 9. Red shows an activated DC that was fed to that Patch binary position, which then activates the corresponding APNs. The yellow DC line and the activated APN that comes from the PC output indicate the expected feedback from the MU for that binary position. Activated APNs from each Patch position are then gathered in H1 in an APN/(h1 – H64) matrix, and the APN with the greatest number of confirmed positions is chosen as output towards the PC0s.

Fig. 9 represents the Patch position 32 Pi,j matrix memory. Black squares represent where a given DC was present for a given APN at the h1_32 position. APNs for this position run from 32 to 63, and from 96 to 127, as only in those APNs is position 32 active.

When the simulation of the example Patch is run through, the result is APN 123 with 5 confirmed positions (Figure 10 (b)), which is very close to the original input (Figure 10 (a)). Once the APN is chosen, H1 calculates the number of binary 1s in the Patch, divides it by 7, and sets it as the average threshold activation number (T1) for each PC0 in the Patch.
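A sketch of this recall step (the structures and names are mine; the paper’s matrix layout is reduced to a vote count): each Patch position confirms the APNs its remembered DC appears under, H1 keeps the most-confirmed APN, and T1 averages the Patch’s binary 1s over its 7 PC0s:

```python
# Sketch of APN selection and threshold T1 from section 2.5; names are mine.
from collections import Counter

def choose_apn(confirmed_apns):
    """confirmed_apns: one APN per confirming Patch position.
    Returns the APN with the greatest number of confirmed positions."""
    apn, confirmed = Counter(confirmed_apns).most_common(1)[0]
    return apn, confirmed

def threshold_t1(pc0_outputs):
    """Average activation: total binary 1s over the Patch's 7 PC0s / 7."""
    return sum(bin(o).count("1") for o in pc0_outputs) / 7
```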

Fig. 10 (a) shows the original Patch pattern; (b) shows that only position h1_32 was not confirmed, and an alternative pattern close to the original was created. This shows that, at the greater abstraction level of node networks, variation can be dealt with in this system.

2.6. Memory associations at input level

Central hexagons (H0) in PC0s receive the DC and T1 from H1 and h1_1 – h1_32, and send T1 to the binary positions (h0_1 – h0_32) in their respective PCs. Each binary position in PC0 takes the previous threshold T0 = T1 and rereads the input according to T0. If the stimulation (I) is equal to or above the threshold (I ≥ T0), the h0_1 – h0_32 binary positions send 1 to the corresponding binary place in H0. The number calculated by H0 is the output of that PC0. H0 compares the received DC with the calculated output, and if the output is the same or differs by only one binary place, the feedback (F0) from the PC0s to PC1 is positive (1). Positive feedback is memorized by the H0s in a PC0i,j = (DC, T0) matrix by adding 1 to the strength number (SN0), and negative feedback subtracts 1 from that position in the matrix. When the feedback is positive, the PC highlights the output area on the monitor, thus picking out “form” from the “surround”.
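The positive-feedback test (“the same or differs by only one binary place”) is a Hamming-distance check; a sketch under that reading, with illustrative names:

```python
# Sketch of the PC0 reread-and-feedback step of section 2.6; names are mine.
def reread(stimulations, t0):
    """Re-threshold the six peripheral inputs: positions with I >= T0
    send 1 to their binary place in H0; the packed number is the output."""
    return sum(d for d, i in zip((1, 2, 4, 8, 16, 32), stimulations) if i >= t0)

def feedback_f0(expected_dc, output):
    """+1 if the output equals the expected DC or differs in at most one
    binary place (Hamming distance <= 1); otherwise -1."""
    return 1 if bin(expected_dc ^ output).count("1") <= 1 else -1
```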

2.7. Extension of the system

To expand this system, each module can translate IN and ON networks into a new 2D input layer for another module, the Node Input/Output Layer (NIOL). Each IN or ON node can be represented in 2D as shown in Figure 11 (a), because each node has six nodes around it, represented by the W1 through W32 binary connections. Each IN node is also surrounded (indirectly, through DCs) by six out of the eight ON nodes that surround it (Figure 11 (b)). A complete NIOL contains all 27 ON nodes surrounding a given MU; some redundant ON nodes are included. The NIOL can also be used as an input layer from other modules to the original MU, where it is translated into IN/ON networks that act as a focus for search, recall, or recognition.

This multi-module system was conceived as a tool for the construction, research, and development of brain-like circuits that focus on information transfer through pattern transmission. It has not yet been implemented, but with collaboration from ENSIAS and MASCIR, and financial support from the Moroccan Government, it may soon show its potential. The applicability of such an abstract system is wide-ranging, including future multi-agent IoT systems, the B2B networks planned by Ariba, Smart Cities, etc. To quote from a recent article, “Can the Internet of Highly Insecure Things Be Trusted to Run the One True Network?” by JoshEAC at the Enterprise Applications Consulting site:

“If only we could transact business in a many-to-many network, where all business transactions and their documents … are mediated in an electronic exchange that automatically allows all participants to interact regardless of what their internal business systems or business processes look like.” That wish could come true by transmitting information as patterns, as suggested in this article, and it would be secure.

Fig. 11 (a) shows how an IN or ON node network can be flattened into a 2D representation. W1 to W32 represent active connections between the central node and its surrounding nodes in xyz directions; (b) shows the IN 000 node and six out of the eight ON nodes that surround it. Missing ON nodes would appear in the surrounds of other IN nodes. Letters under the ON nodes represent their positions in the 27-ON surround of the MU (Front, Middle and Back layers; Top, Middle and Bottom rows; and Left, Middle and Right positions in a row, respectively).

The explanation in this paper limited itself to input of only one type, but the system can be extended to include three simultaneous inputs, for example image, word metadata, and sound. The input layer is shown in Figure 12 (a), where red PCs would represent the image, blue PCs the word metadata, and green PCs the sound. All three inputs would be binary: the image as explained above, word metadata in ASCII starting from PC0_000 and filling in the subsequent binary code following the address increase, and sound using its binary equivalent, filled in just as word metadata. Each data set can be extracted by overlapping layers, as shown in Figure 12 (b). As each layer has its own MU, the NIOL for the module can be as shown in Figure 13, a diagonal cross-section of the MU complex. Eight diagonal cross-sections of the same MU complex are possible, therefore eight different NIOLs of the same MU complex can be formed.

Fig. 12 (a) shows an input layer with three equally distributed types of input PCs. Originally conceived for vision, it can also be used for any other type of input. Each colour represents one type of input; (b) shows how hidden-layer PCs can extract only one type of input.

Fig. 13 shows a diagonal cross-section NIOL for a cubic 15-MU complex with three inputs combined. Outlined Patches correspond to nodes for a one-input NIOL, an example of which is shown in Fig. 11 (b) for the IN 000 node.

As an example limited to only 7 Patches (Figure 14), an image can be combined with text in ASCII code (Figure 15) and another, unspecified type of input in a unified three-input field, with the resulting NIOL.

Fig. 14 (a) an image; (b) an image overlapped with the input field; (c) activation of the hidden layer by the input field.

Fig. 15 (a) text in ASCII code; (b) combined image + metadata (red = image, blue = ASCII code); (c) NIOL cross-section formed from the combined image-metadata IN/ON node activation.

3. Possible application of the EYEYE system to a B2B global network

If a given company tries to find other companies that produce some component at a given price, quality, time to delivery, production capacity, or similar, it could use the EYEYE system in the cloud:

1. The system would form a map of locations for all companies producing a given component and place them on the Input Layer.

a. Each variable would be assigned a strength number, for example, price may have a range from 1 to 10, where cheap = 9 and expensive = 2.

b. Variables would be placed in h0_1 to h0_32 of each PC0, as shown in Figure 1.

Fig. 1 shows variables of different strengths placed in the h0_1 to h0_32 binary positions. The numbers in H0 show the strength of each variable as well as the threshold number used in further calculations.

c. H0 assigns 1 to each binary place ≥ T1 and 0 to binary places < T1. The searching company gives the threshold number, as well as the wished-for variables (see the sketch after this item).

Fig. 2 shows the result of the H0 calculation. The PC output number PCO = 110010 = 50. The company represented by this PC would be chosen if the querying company specified the variables 2, 16 and 32. If the specification also included 8, which would correspond to PCO = 58, the company would not be chosen.
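A sketch of this thresholding and matching (function names and the dictionary layout are mine): variable strengths are packed into a PC output number, and a company qualifies only if every queried variable position is set:

```python
# Sketch of the B2B query test of step 1; names are illustrative.
def pco(strengths, t1):
    """strengths: {binary_position: strength 1..10}; positions whose
    strength is >= T1 become 1 in the PC output number."""
    return sum(pos for pos, s in strengths.items() if s >= t1)

def chosen(pco_value, wanted_positions):
    """A company qualifies if all queried variable positions are set."""
    mask = sum(wanted_positions)
    return pco_value & mask == mask

example_pco = 0b110010                      # = 50, as in Fig. 2
print(chosen(example_pco, [2, 16, 32]))     # True  -> chosen
print(chosen(example_pco, [2, 8, 16, 32]))  # False -> would need PCO 58
```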

2. In the geographical area where the component company is found, there may be other companies that produce the same component with different variable strengths. A local Patch of those companies would show on the Input Layer of the EYEYE system according to the searching company’s T1 threshold. An ad hoc Patch is shown in Figure 3, equivalent to a PC1 at the hidden layer.

Fig. 3 shows 7 companies, the example company in the middle (C64) corresponding to binary 64, and other companies surrounding it with the corresponding binary place numbers in the Patch. Only two companies have the three variables asked for by the querying company.

3. The hidden-layer PC1 (Patch) calculates the APN, which indicates which nearby companies also produce the wished-for component within the wished-for constraints. In this case only C64 and C8 would be involved, therefore APN = 1001000 = 72. The two PCO numbers, 50 and 55, would be sent to the MU.

4. PCO numbers are equal to DC numbers, so that PCO = 50 = DC 50, and PCO = 55 = DC 55. The two DCs inhibit and stimulate the DCs connected to them. Stimulation goes from binary 0 to binary 1, and inhibition from binary 1 to binary 0. Therefore DC 50 = 110010 would stimulate DCs 58, 54 and 51, and inhibit DCs 18, 34 and 48. DC 55 = 110111 would stimulate DC 63, and inhibit DCs 23, 39, 51, 53 and 54. By adding 1 for stimulation and -1 for inhibition, the Activation Number (AN) is calculated for each DC. Each DC stimulated directly from the Patch also gets 3 added to its AN. The results (replayed in the sketch below) would be DC(AN): 63(1), 58(1), 55(3), 54(0), 53(-1), 51(0), 50(3), 48(-1), 39(-1), 34(-1), 23(-1), and 18(-1).
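The neighbour arithmetic of this step can be replayed mechanically; a sketch (names are mine, using the one-bit-neighbour reading of the DC lattice):

```python
# Sketch replaying step 4: +3 for Patch-activated DCs, then +1/-1 across
# one-bit edges. Names are mine.
def an_table(patch_dcs):
    an = {dc: 3 for dc in patch_dcs}       # +3 for direct Patch activation
    for dc in patch_dcs:
        for d in (1, 2, 4, 8, 16, 32):
            delta = -1 if dc & d else 1    # 1-edge inhibits, 0-edge stimulates
            an[dc ^ d] = an.get(dc ^ d, 0) + delta
    return dict(sorted(an.items(), reverse=True))

print(an_table([50, 55]))
# {63: 1, 58: 1, 55: 3, 54: 0, 53: -1, 51: 0, 50: 3, 48: -1,
#  39: -1, 34: -1, 23: -1, 18: -1}
```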

5. Each DC stimulates its associated IN and ON nodes, which gather DCs that have the same presence or absence of given PC0 variable positions, that is, companies with a similar profile type. In this case IN 111 gathers 63; IN 011 gathers 58; IN 110 gathers 55, 54, 53, 39; IN 010 gathers 51, 50, 48, 34; IN 100 gathers 23; and IN 000 gathers 18. The resulting activation number totals IN(ANT) are: IN 000(-1), IN 001(0), IN 010(2), IN 011(1), IN 100(-1), IN 101(0), IN 110(1), IN 111(1).

6. IN and ON nodes create networks with surrounding nodes whose company profiles for a given component differ in only one variable position and have a positive AN value. If the Connection Number (CN) is strong, cooperation between those companies in the production of variations of a given component is likely. In this case IN 110 and IN 111 would connect with CN = 2, and IN 010 stays activated alone.

7. The node networks reach a Network Number (NN), which is kept in the memory of each network node, and permits a recall that primes the entire network even with partial input.

The querying company may be searching at the same time for a variety of components, and it asks the EYEYE system where it can find them, giving all the required specifications:

1. The system calculates the T1 level for each component.

2. The system matches each component with the NN number and its NIOL output. Each submitted component has its own NN network.

3. If several components need to be developed by the same company, as specified by the querying company, a combined NIOL can be formed from 3 components at a time.

4. The system runs the NIOL through the system to an Input Layer reconfigured for three components.

5. The reconfigured Input Layer is matched with the corresponding companies and their locations.

6. The answer, with the associated variables and strength values of the reconfigured Input Layer, is sent to the querying company with annotations about the best choices and variables, without giving out the names of the choice companies (which is the same as running a public offer).

7. The querying company sets the invoice for the components, and the system asks the choice companies if they are willing to accept the order and give their estimated cost. The information is sent to the querying company without revealing the name of the choice company.

8. Choice companies’ estimates are sent to the querying company.

9. The querying company makes its choice, and a firm order is formulated and sent to the chosen company. If the order is then accepted, the two companies are given each other’s info and are charged a fee.

10. Later, the querying company is asked to give feedback on the provider company, and the provider company is asked to give feedback on the querying company. The results are incorporated as strength numbers for each company at the NN network level.

When several companies search for the same component at the same time, the system proceeds as follows:

1. The EYEYE system looks separately for the same component with the T1 given by the various companies, and gives the results to each querying company.

2. The provider company gets the orders from all querying companies, decides which order, or orders, to accept, and gets charged for each order separately if the deal goes through.

4. Conclusion

Using this system with simple program variations at each node level can result in swarm intelligence, adaptive to and tolerant of variations [2]. At each level there is abstraction when going from the input to the IN/ON nodes, which allows the formation of concepts [3]. When going from the IN/ON nodes towards the input level, the outcome becomes more detailed and specific, yet belongs to the same abstract category [3]. At each level, the variation tolerance permits the creation of analogies [3]. The complex interactions in node networks created by this system cannot be linearized, and with constant back-and-forth feedback the networks start acting like chaotic attractors.

How would this system explain the visual experience mentioned in the introduction? A given input would form a decentralized, ‘pixelated’ memory in IN and ON node networks. With learning (various inputs of a similar type), an ‘archetypal’ IN network would form, together with an ON ‘container’ that would expect ‘something’ in given nodes. Many archetypal networks could be stimulated simultaneously by an input, but only some ‘containers’ would stay active. Those ‘containers’ would fixate the parts that fit, and query for further fitting parts in the spaces provided. If those spaces found a valid choice from the memory, the container would get positive feedback and stabilize the whole IN/ON network, thus specifying the ‘meaning’ of that pattern through the calculation of NNs.

This modular system provides output towards other modules from IN/ON node-network level, and can thus deal with very complex information integration, scalable to any level. With extension of this system through other modules, ideas could be represented by a given pattern of activation.

In this presentation the nodes and their connections have been interpreted as a unitary structure in 3D, but the nodes do not need to be in the same location as long as they communicate in the same manner. Such a complex nonlinear system can develop a mechanical type of “world view”, an indispensable ingredient in the creation of meaning [3, 8], hopefully resulting in a “common sense” computer.

Interactions between man and machine will increase in complexity, and in order to simplify such interaction, machines based on analogical thinking will have to be created. The EYEYE system presented here may be a step in that direction. In this article only one module has been considered, but the possibility of extending it to a multi-module system in which the information flow is not opaque may lead to a whole new way of approaching AI.

References

[1] Douglas Hofstadter, “Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought,” Basic Books, 1995, ISBN 0-465-02475-0.
[2] Douglas Hofstadter, “Gödel, Escher, Bach: An Eternal Golden Braid,” Penguin Books, 1980.
[3] Douglas Hofstadter and Emmanuel Sander, “Surfaces and Essences,” Basic Books, 2013.
[4] Neven Dragojlovic, “Proposal for Parallel Computer Architecture of a Cellular Type Aimed at Development of an Autonomous Learning Machine,” UKSim 2012, ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6205540.
[5] Neven Dragojlovic, “Visual Mathematics Used in a Novel Parallel Computer Architecture of a Cellular Type,” SCSE’13, doi:10.7321/jscse.v3.n4.69.
[6] Pentti Kanerva, “Sparse Distributed Memory,” MIT Press, 1988.
[7] Lee Middleton and Jayanthi Sivaswamy, “Hexagonal Image Processing: A Practical Approach,” Springer, 2005.
[8] Dominic Widdows, “Geometry and Meaning,” CSLI Publications, 2004.
[9] S. J. Russell and P. Norvig, “Artificial Intelligence: A Modern Approach,” Tsinghua University Press, 2011.
[10] Stephen Wolfram, “A New Kind of Science,” ISBN 1-57955-008-8.