Binding representational spaces of colors and emotions for creativity

RESEARCH ARTICLE

Agnese Augello, Ignazio Infantino, Giovanni Pilato*, Riccardo Rizzo, Filippo Vella

ICAR, CNR, Viale delle Scienze, edif. 11, 90128 Palermo, PA, Italy

Received 13 March 2013; received in revised form 4 May 2013; accepted 17 May 2013

Biologically Inspired Cognitive Architectures (2013) 5, 64–71
http://dx.doi.org/10.1016/j.bica.2013.05.005
2212-683X/$ - see front matter © 2013 Elsevier B.V. All rights reserved.
* Corresponding author. E-mail addresses: [email protected], [email protected] (G. Pilato).

KEYWORDS
Color perception; Emotions; Creativity; Neural networks

Abstract

To implement cognitive functions such as creativity, or the ability to create analogies and metaphors, it is important to have mechanisms binding different representational spaces. The paper discusses this issue in the broader context of an ‘‘artist’’ robot, able to process its visual perception, to use its experience and skills as a painter, and to develop a creative digital artefact. In this context, two different spaces of color representation are used to associate, respectively, a linguistic label and an emotional value to color palettes. If the goal is to build an image that communicates a desired emotion, the robot can use a neural architecture to choose the most suitable palette. The experience concerning the palette-emotion association is derived from the analysis of data enriched with textual descriptions available on the web. The representation of colors and palettes is obtained by using neural networks and self-association mechanisms, with the aim of supporting the choice of the palette.

© 2013 Elsevier B.V. All rights reserved.

1. Introduction

The presented work takes place in the generic context of creativity and aims at investigating a basic mechanism that can be used for the construction of new representational spaces. A much-discussed theory originated from the so-called model of blending (or conceptual integration), which identifies mental spaces and connects them to each other (Fauconnier & Turner, 1998).
Considering cognitive mechanisms for creativity, three components generate a blend: the composition (or fusion), which pairs elements from the input spaces into the blend; the completion (or emergence) of a pattern in the blend, which is filled in using long-term memory information; and the process that simulates a cognitive work and its performance evaluation (Abdel-Fattah, Besold, & Kühnberger, 2012; Pereira, 2007). In our opinion it is possible to have robust fusion algorithms and completion through the combination of various models of neural networks: an example of such an approach is described in Thagard and Stewart (2011), which makes it possible to emphasize associations useful




to generate creative ideas by simple vector convolution. The importance of associative mechanisms is also underlined by neurobiological models of creativity, many of which are based on the simultaneous activation and communication between brain regions that are generally not strongly connected (Heilman, Nadeau, & Beversdorf, 2003).

We propose an architecture that produces links between representational spaces originated from visual perception. In particular, we address two interesting cognitive aspects: the association between the name and the perception of a color, and the association between palettes of colors and emotional labels. The two representation spaces are implemented by using neural networks, and from them new bindings in a ‘‘creative’’ space can arise. Extending a previous work (Infantino, Pilato, Rizzo, & Vella, 2013), we chose to put together a raw color representation with a link between emotions and color sets, using some recent hypotheses that try to explain the mechanisms of creativity (Boden, 2009). Currently we have built a subsystem able to connect emotions and color palettes. This connection is aimed at obtaining a suitable palette when a specific emotion is required in an image or a graphic artefact. This module is part of a larger project focused on obtaining a robotic system able to paint and reproduce a portrait of a human subject. The project aims at combining an approach based on cognitive architectures (Goertzel, Lian, Arel, de Garis, & Chen, 2010; Langley, Laird, & Rogers, 2009) with mechanisms of computational creativity (Boden, 2009; Colton, Lopez de Mantaras, & Stock, 2009), trying to get an implementation that satisfies both the software and hardware constraints introduced when working with a real robotic platform.

The cognitive architectures presented in the literature do not explicitly provide mechanisms to achieve creativity, partly because only recent research has addressed some important and closely related cognitive aspects such as emotions, intentions, awareness and consciousness. These aspects, and other high-level functions of new cognitive architectures, are reported in Samsonovich, 2012. Given the models of creativity proposed in the literature, it is possible to identify some features that are likely to produce a digital portrait painter using a robot able to develop its artificial visual perception and to originate a creative act based on its experience, its expertise and also on interaction with the human (both during learning and in the final evaluation).

The paper describes the first phase of our research project, which aims at creating a cognitive software infrastructure that provides creative skills mainly from visual perception. In a second phase, we plan to introduce the physical embodiment of the humanoid by including other sensory information, such as auditory and tactile, perhaps achieving the use of a real brush and a canvas. The paper is structured as follows: Section 2 gives a brief introduction to the interaction between colors and emotions as reported in the relevant literature, Section 3 describes the proposed system, Section 4 illustrates the dataset used, Section 5 presents the results, and finally some conclusions are drawn.

2. Colors and emotions

Choosing the right color set is an important issue for graphic professionals. A system focused on the creation of graphic artefacts has to emulate the same decisions that a graphic designer makes while developing her/his works. Picking the right color set is crucial because it makes a relevant contribution to the global emotional impact of artworks.

A study on the relationship between colors and emotions is reported in Ou, Luo, Woodcock, and Wright, 2004; this study assigns to each color a position in a space characterized by four dimensions: warm-cool, heavy-light, active-passive and hard-soft.

Color selection is an issue in many living environments: for example, the right color set can influence the behavior of customers in a shop, as indicated in Bellizzi, Crowley, and Hasty, 1983.

Many studies have been conducted on color harmonization, i.e. the selection of a group of colors that is aesthetically pleasing. There is no consolidated formulation that defines a set of harmonic colors, but there are schemas or relations in color space that describe these sets: examples of harmonic color sets can be found in Cohen-Or, Sorkine, and Gal, 2006. Another use of color harmonization has been proposed in Wang and Mueller, 2008, where harmonic color palettes were used in 3D rendering.

On these premises, a system aimed at producing a digital image must select colors in a careful way. A wrong set of colors will not convey the ‘‘right information’’ and will not evoke the desired emotional response.

Color harmonization is also the subject of the study in O’Donovan, Agarwala, and Hertzmann, 2011; in this work the focus is on small color palettes, called color themes, that are collected on the COLOUR Lovers web site (http://www.colourlovers.com), a resource of tools for graphic designers.

A work on painting and emotion is presented in Shugrina, Betke, and Collomosse, 2006; in this work the colors and other parameters of a photograph are changed according to the mood of a user. The image is modified using a set of functions that shift color pixels according to the emotion in a pleasure-arousal space; for example, arousal corresponds to color saturation and hue, while sadness and calm generate a shift towards the blue spectrum.

Finally, another relevant work is Csurka, Skaff, Marchesotti, and Saunders, 2011, which is focused on the association of color themes to abstract and emotional concepts (such as classic, cool, and delicate). This association is similar to the one presented in this work, but it is not related to an emotional space.

3. The proposed system

The proposed system is represented in Fig. 1; the system is composed of three interconnected neural systems: a neural gas (NG), an autoencoder network and a multi-layer perceptron (MLP).

The NG is a self-organizing clustering network, described in Martinetz, Berkovich, and Schulten, 1993, used to represent and simplify the color input stimuli; the autoencoder is used to build a connection between palettes and emotion representations, while the MLP labels the clustered color stimuli and creates the association between name and color perception (as presented in Infantino et al., 2013).

Fig. 1 The proposed architecture for the system. The arrow directions refer to the training phase of the autoencoder. The upper part, with the MLP and the color names space, is reported for the sake of completeness.


The NG clusters the input vectors in an unsupervised way; the obtained clusters are used to classify, and consequently simplify, the color input. Its output is processed by two different systems: the first one is an MLP aiming at associating a name to the color stimuli, the second one is part of the autoencoder input vector (Fig. 1).

The autoencoder is implemented by the aNet MLP architecture thanks to its good generalization capability (Gaglio, Pilato, Sorbello, & Vassallo, 2000).

The input of the autoencoder network is a vector obtained by merging the color palette representation obtained from the neural gas network with the coding of the emotion connected to the palette, as reported in Fig. 1.

The color palette representation is obtained by superimposing the activations of the NG units corresponding to each color in the palette. This approach allows us to obtain a representation of the palette that is independent of the color sequence. Assuming that the palette contains $|C|$ colors, and naming $U$ the set of neural gas units, this superimposition vector can be obtained as follows:

$$\mathrm{act} = \sum_{k=1}^{|C|} \bar{f}(col_k) \qquad (1)$$

where $\mathrm{act} \in \mathbb{R}^{|U|}$ and $\bar{f}(col_k)$ is a vector whose generic element $i$ is given by:

$$f_i(col_k) = \begin{cases} 1 & \text{if } i = \operatorname*{argmin}_{j \in U} \operatorname{dist}(\bar{w}_j, col_k) \\ 0 & \text{otherwise} \end{cases} \qquad (2)$$

where $\operatorname{dist}(\bar{w}_j, col_k)$ is the Euclidean distance between the color $k$ and the neural unit $j$ in the color space.

If the colors in the palette belong to very different clusters, there will be a single neural unit ‘‘activated’’ for each color. If there are colors very close to each other, they will not be distinguished and will be merged in the palette representation. This is somewhat logical: two colors that are not separated as input stimuli and, as a consequence, have the same name in the system presented in Infantino et al., 2013, cannot be separated as output from the binding space: they will be processed as being the same color.
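This superimposition can be sketched in a few lines of Python. The NG units are assumed to be stored as an array of RGB prototype vectors; this is a simplification in that repeated activations are clipped to a single 1 (consistent with the binary coding used later for the auto-associator), whereas Eq. (1) literally sums the indicator vectors.

```python
import numpy as np

def palette_activation(palette, units):
    """Superimposition vector (Eqs. (1)-(2)): for each color in the
    palette, activate the neural-gas unit nearest in Euclidean
    distance; colors falling in the same cluster are merged."""
    act = np.zeros(len(units))
    for col in palette:
        # index of the NG unit closest to the color
        i = np.argmin(np.linalg.norm(units - col, axis=1))
        act[i] = 1.0  # clip repeated activations to 1
    return act

# toy example: 4 NG units in RGB space, a 3-color palette
units = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [0.5, 0.5, 0.5]])
palette = [np.array([0.9, 0.1, 0.0]),    # ~red
           np.array([0.1, 0.9, 0.1]),    # ~green
           np.array([0.95, 0.05, 0.0])]  # ~red again: merged with the first
print(palette_activation(palette, units))  # -> [1. 1. 0. 0.]
```

Note how the two near-red colors activate the same unit, so they are indistinguishable in the resulting representation, exactly the merging behavior described above.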

The vector with the palette representation is merged with the vector that codes the emotions. The emotion representation is made of orthogonal vectors $\bar{e}$ in a space $\mathbb{R}^n$, where $n$ is the number of basic emotions. The input to the auto-encoder, during training, is a vector given by $\bar{v} = [\mathrm{act}, \bar{e}]$, $\bar{v} \in \mathbb{R}^{|U|+n}$, as shown in Fig. 1.

Auto-encoder neural networks are usually a 3-layer MLP with input and output layers of the same dimension, and a smaller hidden layer. The central part of the auto-encoder, corresponding to the hidden layer of neurons, is indicated as the binding space because the weights of this layer constitute a multidimensional space of smaller dimension. In this space palette representations and emotion codings are blended together; they can still be separated by the network and reproduced as output, as in Fig. 1.

The main characteristic of the auto-encoder neural network is that it can be trained with a set of vectors $V \subset \mathbb{R}^n$ presented as both input and output of the network, and that it can then be used to reproduce a vector $\bar{v}_j \in V$ as output when, after training, a corrupted copy $\hat{v}_j$ of $\bar{v}_j$ is presented as input. In this way the auto-encoder works as a content-addressable memory.

Our application exploits this characteristic because we want to create a binding between the vectors of the palettes and the emotions, and to recall an emotion from a palette and vice versa. The link between palette and emotion is obtained by using the description that refers to some pictures, or the description attached to the palette. Fig. 2 reports the use of the neural autoencoder in the system during the production phase, when the binding between palettes and emotions is processed. The input to the system may be an incomplete act vector, due for example to a color or a couple of colors that we want to approximate, which should be completed with other colors that correspond to a desired emotion $\bar{e}$. In this case the output of the system will be a palette and a vector representing the emotion. These vectors are an interpretation of the incomplete input vector. In the same way it is possible to have a generic palette that corresponds to an emotion representation, or to have the emotion corresponding to a palette.
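The production-phase query can be sketched as follows. The helper `query_emotion` and the dimensions are illustrative: the trained aNet auto-associator is replaced by a stand-in callable, since only the query mechanics (zero the emotion slots, read back the reconstructed emotion) are being shown.

```python
import numpy as np

N_UNITS, N_EMOTIONS = 300, 6  # dimensions used later in Section 5.2

def query_emotion(autoencoder, act):
    """Production phase (Fig. 2): present a palette with the emotion
    part zeroed, and read the recalled emotion from the output.
    `autoencoder` is any callable mapping R^306 -> R^306; here it
    stands in for the trained aNet auto-associator."""
    v = np.concatenate([act, np.zeros(N_EMOTIONS)])  # incomplete input
    out = autoencoder(v)
    emotion_part = out[N_UNITS:]
    return int(np.argmax(emotion_part))  # index of the recalled emotion

# stand-in "trained" network that always recalls emotion index 3
fake_net = lambda v: np.concatenate([v[:N_UNITS], np.eye(N_EMOTIONS)[3]])
act = np.zeros(N_UNITS)
act[[5, 42, 77]] = 1.0  # a palette activating three NG units
print(query_emotion(fake_net, act))  # -> 3
```

The reverse query (emotion in, palette out) is symmetric: present $[\mathbf{0}, \bar{e}]$ and read the first 300 output components.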

4. Palette – emotion dataset

The dataset used to train the neural autoencoder consists of a set of color palettes, each of them associated with emotion labels. The dataset has been built exploiting data available at the COLOUR Lovers web site: a creative community for graphic designers aimed at the creation and sharing of colors, palettes and patterns.

The palettes shared by the users of the community are described by a title and a set of tags summarizing their main features. We hypothesize that when a user creates a specific palette, she/he often tries to represent a particular mood; as a consequence, several descriptive tags are words carrying an emotional content.

Labels and descriptive tags associated with the palettes in the COLOUR Lovers website may be influenced by many non-emotive factors. These factors can be related to many causes: a personal experience or the mood of the day. In this work we want to filter out these inconsistencies by using a large dataset, a clustering method based on the neural gas network, and a binding space based on the auto-encoder network.

Fig. 2 The proposed architecture for the system. In this schema we assume that the autoencoder is trained; the palette is the input of the system and the output is the corresponding emotion (together with a reconstruction of the input palette, which can be discarded).

Table 1 Number of retrieved palettes for each emotion class.

Fear  Joy  Anger  Sadness  Disgust  Surprise
733   734  508    733      101      735

The clustering algorithm is useful to filter out small differences among many levels of the same color, trying to obtain a small set of colors connected to the experience of the environment. The auto-encoder network is used to mediate the link between palettes and emotional labels. The obtained connection emerges from a large amount of data full of inconsistencies and noise. The evaluation of these results will be discussed in a future work; however, it will be an evaluation of the results of the whole painting system.

Starting from these observations, we have collected, through ad hoc queries, palettes related to emotional feelings, arranged in order of relevance.

In particular, we have performed a palette retrieval process introducing as keywords the six labels of the emotion classes (anger, disgust, fear, joy, sadness and surprise) proposed in Ekman, 1999. We have then extracted the palettes belonging to the resulting pages, considering an upper bound of 50 pages. The dataset consists of a set of lines describing the palettes, where the codes and the percentages of its constituent colors and the associated emotion label are reported for each palette. It should be noticed that the same palette may be associated with different emotions when it is described by users with a non-coherent set of tags. We considered this kind of palette as ambiguous and therefore removed it from the dataset.
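The removal of ambiguous palettes can be sketched as a simple grouping pass. The representation below (hashable palette keys paired with emotion labels) is an assumption for illustration; the paper does not specify the exact data layout.

```python
from collections import defaultdict

def drop_ambiguous(samples):
    """Keep only palettes tagged with exactly one emotion class.
    `samples` is a list of (palette_key, emotion) pairs, where
    palette_key is any hashable palette encoding (e.g. a tuple
    of hex color codes)."""
    emotions_per_palette = defaultdict(set)
    for palette, emotion in samples:
        emotions_per_palette[palette].add(emotion)
    # a palette seen with two or more emotions is ambiguous: drop it
    return [(p, e) for p, e in samples
            if len(emotions_per_palette[p]) == 1]

data = [(("FF0000", "880000"), "anger"),
        (("FF0000", "880000"), "joy"),     # same palette, second emotion
        (("0000FF", "AAAAEE"), "sadness")]
print(drop_ambiguous(data))  # only the sadness palette survives
```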

The final dimension of the dataset is reported in Table 1, and Fig. 3 shows six palettes representative of the six emotions.

We have also built a test set, by choosing a set of famous paintings with a strong emotional impact and tagging them with emotion labels. Each painting has been related to emotions exploiting a methodology for emotion recognition from texts, where the texts in this case are the descriptions of the paintings.

In particular, we have used a Naive Bayesian classifier trained on the emotion lexicon proposed by Strapparava in the SemEval 2007 contest (Strapparava & Mihalcea, 2007; Strapparava & Mihalcea, 2008). The lexicon is composed of six lists of affective words, each one characterizing one of the Ekman fundamental emotion classes, extracted from WordNet Affect (Valitutti, 2004). As a result, we mapped the painting descriptions as points, that we name here ‘‘emoxels’’, into an emotional space that has six dimensions corresponding to the Ekman fundamental emotions.

Each painting is also associated with a label representing the emotion having the strongest component in the corresponding emoxel.
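The emoxel mapping can be sketched as follows. The toy lexicon is invented for illustration (the paper uses the WordNet Affect lists from SemEval-2007, not reproduced here), and word counting stands in for the actual Naive Bayesian classifier.

```python
EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

# toy affective lexicon; a stand-in for the six WordNet Affect lists
LEXICON = {
    "anger":    {"rage", "fury"},
    "disgust":  {"repulsive"},
    "fear":     {"terror", "scream", "dread"},
    "joy":      {"bright", "cheerful", "dance"},
    "sadness":  {"grief", "mourning", "dark"},
    "surprise": {"sudden", "unexpected"},
}

def emoxel(description):
    """Map a painting description to a point in the 6-D Ekman
    emotion space by counting affective words (a simplification
    of the Naive Bayesian classifier used in the paper)."""
    words = set(description.lower().split())
    return [len(words & LEXICON[e]) for e in EMOTIONS]

def dominant_emotion(description):
    """Label of the emotion with the strongest emoxel component."""
    vec = emoxel(description)
    return EMOTIONS[vec.index(max(vec))]

desc = "a scream of terror under a dark sky full of dread"
print(emoxel(desc), dominant_emotion(desc))  # -> [0, 0, 3, 0, 1, 0] fear
```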

In order to encode the relationship between emotions and palettes, we have used an autoencoder architecture, which we describe here more in depth. It has been shown that a feed-forward neural architecture with one hidden layer can approximate various kinds of functions defined on compact sets in $\mathbb{R}^n$ (Chen, Chen, & Liu, 1995). On the other hand, feedforward neural networks can theoretically approximate any boundary surface, hence they are universal classifiers (Hornik, 1989).

Multilayer perceptrons (MLPs), thanks to the compromise they realize among recognition rate, recognition speed and memory resources, are used for both approximation and classification tasks.

MLPs operating as simple classifiers are not useful for pattern verification (De Gouvea Ribeiro & Vasconcelos, 1999). In classification tasks, feedforward networks discriminate pattern classes by separating the pattern space into surfaces generated by the learning algorithm with the only specific purpose of splitting the example classes. The result is the generation of open decision surfaces which do not necessarily enclose the training data from a particular class of interest. Auto-associators are neural architectures based on MLP networks where the outputs are forced to reproduce the input. The effectiveness of such an architecture is evaluated by computing how well the input is approximated by the output. It has been shown in Gori, Lastrucci, and Soda, 1996 that auto-associators produce closed separation surfaces enveloping the training data.

Auto-associators have been mainly exploited for image compression problems; in particular, the compression is performed at the hidden layer level, which always has fewer units than the input and output layers. The hidden layers behave like filters that condense the information from the input patterns and reproduce it at the output layer. Auto-associator-based models have been applied successfully to other types of verification tasks (De Gouvea Ribeiro & Vasconcelos, 1999).

Fig. 3 An example of palettes representative of the six emotions.

In order to realize a mapping between the palettes and the emotion labels we have used a neural auto-associator, which exploits a performing neural architecture named aNet, previously developed by some of the authors (Pilato, Sorbello, & Vassallo, 2001).

The peculiarity of the aNet architecture is that it is able to modify each activation function of its hidden units using the Hermite regression formula and the Conjugate Gradient Descent algorithm (CGRD) with the restart conditions of Powell (1977). In particular, each activation function belonging to the hidden layer is expressed as the weighted sum of the first R Hermite orthonormal functions; the coefficients of the sum are continuously changed during the training phase, together with the weights of the connections between the units of the network, using the CGRD algorithm. Therefore each activation function of the hidden layer changes its shape until the minimum of the error over the training set is reached.
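The Hermite basis underlying these adaptive activations can be computed with the standard three-term recurrence. This is a sketch of the basis evaluation only, under the usual definition of the orthonormal Hermite functions; the training of the coefficients (via CGRD, as described above) is not shown.

```python
import math

def hermite_functions(x, R):
    """First R orthonormal Hermite functions psi_0..psi_{R-1} at x,
    via the stable recurrence
    psi_n = sqrt(2/n) * x * psi_{n-1} - sqrt((n-1)/n) * psi_{n-2}."""
    psi = [math.pi ** -0.25 * math.exp(-x * x / 2.0)]  # psi_0
    if R > 1:
        psi.append(math.sqrt(2.0) * x * psi[0])        # psi_1
    for n in range(2, R):
        psi.append(math.sqrt(2.0 / n) * x * psi[n - 1]
                   - math.sqrt((n - 1) / n) * psi[n - 2])
    return psi

def activation(x, coeffs):
    """A hidden-unit activation as a weighted sum of the first
    len(coeffs) Hermite functions; in aNet the coefficients are
    learned jointly with the connection weights."""
    return sum(c * p
               for c, p in zip(coeffs, hermite_functions(x, len(coeffs))))

print(activation(0.5, [0.2, -0.1, 0.4]))  # example with R = 3
```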

It has been shown that aNet is a flexible neural architecture with very good generalization capabilities (see Gaglio et al., 2000; Maniscalco, Pilato, & Vassallo, 2011; Pilato et al., 2001).

Fig. 4 Representational space of color names.

5. Results

5.1. Color-names space

The color-name association module was developed in order to integrate this feature in a Psi cognitive architecture (see Infantino et al., 2013). The neural gas network has been used to model the influence of the perceived environment on color perception and, as a consequence, on color naming. The color stimuli used arise from real images acquired by the cameras of a robot exploring its environment. Fig. 4 shows the obtained results and illustrates how the neurons are spread in the RGB space in order to approximate the colors of the training images; the name associated to each neuron is the one obtained from the trained MLP network.

5.2. aNet training results

The aforementioned characteristics make aNet the ideal architecture to realize a mapping between palettes and emotions. For the experiments we have used a 306 × 100 × 306 architecture (306 input units, 100 hidden units and 306 output units) with 14 Hermite polynomials for each unit belonging to the hidden layer, in order to realize an auto-associator. The first 300 elements of the input (or output) units represent the activation level of each NG unit, while the last 6 represent the activation of an emotion (i.e. anger, disgust, fear, joy, sadness and surprise) triggered by the palette. Fig. 5 shows an example of output. The activation of a NG unit or of an emotion has been coded with the value +0.5, while the absence of activation has been represented by the value −0.5.

Fig. 5 Example of the auto-associator outputs.

Table 2 Emotional pattern results with different configurations of initial weights of the auto-associator over a test set of 2000 palette-emotion samples.

Emotion   TP   TN    FP  FN
Anger     295  1705  0   0
Disgust   40   1960  0   0
Fear      335  1665  0   0
Joy       704  1296  0   0
Sadness   378  1622  0   0
Surprise  248  1752  0   0

We have used 5000 patterns as training set and 2000 patterns as test set, obtaining an average approximation error rate of 1.75 × 10⁻⁴ (i.e. the network output is, for example, 0.499825 instead of 0.5). However, since we are interested only in the sign of the values, this does not matter for our task. As a matter of fact, we reach an accuracy of 100% over the test set with 10 different starting weight configurations of the auto-associator. In particular, naming TP, TN, FP, FN the number of True Positives, True Negatives, False Positives and False Negatives, for all the 10 starting configurations of the auto-associator we have obtained the results illustrated in Table 2.

As a consequence we have obtained the following results in terms of True Positive Rate (TPR), False Positive Rate (FPR), and Accuracy (ACC) for each one of the six emotions:

$$\mathrm{TPR} = \frac{TP}{TP + FN} = 1.0 \qquad (3)$$

$$\mathrm{FPR} = \frac{FP}{FP + TN} = 0.0 \qquad (4)$$

$$\mathrm{ACC} = \frac{TP + TN}{TP + TN + FP + FN} = 1.0 \qquad (5)$$

5.3. Choosing palettes

Given that a painting raises particular emotions, we are evaluating the emotional shift that is achieved when the colors of the painting change slightly. The original colors were selected by the artist who created the painting, and colors are combined with shape to transmit the desired content. The effect induced by the change of the palette is to affect the emotions and typically to provide a different content to the painting. The input palette is the palette extracted from the painting, that is, the collection of all its colors. The output palette is composed of five colors, since we hypothesize this subset to be sufficient to convey an emotion. Each color of the original painting is compared with the colors of the target palette and the nearest one is used to replace the original color. On the other hand, if the target palette is composed of few colors, a quantization effect occurs. The results are illustrated in Fig. 6.

Fig. 6 Examples of famous paintings modified with emotion labelled palettes.
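The nearest-color replacement described above can be sketched as follows. The vectorized `recolor` helper is an illustrative implementation, not the authors' code; with a small target palette, the visible quantization effect is exactly the collapse of many source colors onto few palette entries.

```python
import numpy as np

def recolor(image, target_palette):
    """Replace every pixel with the nearest color of the target
    palette (Euclidean distance in RGB). `image` is an (H, W, 3)
    array; `target_palette` is an (M, 3) array (M = 5 in the paper)."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(float)
    # distance of each pixel to each palette color: shape (H*W, M)
    d = np.linalg.norm(pixels[:, None, :] - target_palette[None, :, :],
                       axis=2)
    nearest = np.argmin(d, axis=1)
    return target_palette[nearest].reshape(h, w, 3)

palette = np.array([[255, 0, 0], [0, 0, 255]])    # toy 2-color target
img = np.array([[[250, 10, 10], [20, 30, 200]]])  # a 1x2 "painting"
print(recolor(img, palette))  # each pixel snaps to its nearest color
```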


6. Conclusions

The presented work shows a feasible way to bind two different representational spaces based on color perception. A robust architecture based on neural networks has been used to associate a linguistic label and an emotional value to color palettes. This subsystem, suitably inserted in a cognitive architecture, can be used to support the creative skills of a robot that aims at simulating the creative process of a human painter. In parallel with these results, we are developing another subsystem that, starting from a set of painting styles derived from a set of famous paintings, creates its own style by combining simple filters to be applied to the perceived scene. This way, the painting created will have a defined style and a use of color that is motivated by emotions, according to what the artificial artist intends to communicate.

In the future we plan to evaluate the paintings and their color palettes with human subjects in order to assess the obtained results.

References

Abdel-Fattah, A., Besold, T., & Kühnberger, K.-U. (2012). Creativity, cognitive mechanisms, and logic. In J. Bach, B. Goertzel, & M. Iklé (Eds.), Artificial general intelligence. Lecture notes in computer science (Vol. 7716, pp. 1–10). Berlin, Heidelberg: Springer.

Bellizzi, J., Crowley, A., & Hasty, R. (1983). The effects of color in store design. Journal of Retailing, 1, 21–47.

Boden, M. (2009). Computer models of creativity. AI Magazine, 23–34.

Chen, T., Chen, H., & Liu, R. (1995). Approximation capability in C(R^n) by multilayer feed-forward networks and related problems. IEEE Transactions on Neural Networks, 6(1), 25–30.

Cohen-Or, D., Sorkine, O., & Gal, R. (2006). Color harmonization. ACM Transactions on Graphics (TOG) – Proceedings of ACM SIGGRAPH, 25, 624–630.

Colton, S., Lopez de Mantaras, R., & Stock, O. et al. (2009). Computational creativity: Coming of age.

Csurka, G., Skaff, S., Marchesotti, L., & Saunders, C. (2011). Building look & feel concept models from color combinations. The Visual Computer, 27, 1039–1053.

De Gouvea Ribeiro, J., & Vasconcelos, G. C. (1999). Off-line signature verification using an auto-associator cascade-correlation architecture. In Neural Networks, 1999. IJCNN '99. International Joint Conference on, 4 (pp. 2882–2886).

Ekman, P. (1999). Basic emotions. The Handbook of Cognition and Emotion, 45–60.

Fauconnier, G., & Turner, M. (1998). Conceptual integration networks. Cognitive Science, 22, 133–187.

Gaglio, S., Pilato, G., Sorbello, F., & Vassallo, G. (2000). Using the Hermite regression formula to design a neural architecture with automatic learning of the hidden activation functions. AI*IA 99: Advances in Artificial Intelligence, 226–237.

Goertzel, B., Lian, R., Arel, I., de Garis, H., & Chen, S. (2010). A world survey of artificial brain projects, part II: Biologically inspired cognitive architectures. Neurocomputing, 74, 30–49.

Gori, M., Lastrucci, L., & Soda, G. (1996). Autoassociator-based models for speaker verification. Pattern Recognition Letters, 17, 241–250.

Heilman, K. M., Nadeau, S. E., & Beversdorf, D. O. (2003). Creative innovation: Possible brain mechanisms. Neurocase, 9, 369–379.

Hornik, K. (1989). Multilayer feed-forward networks are universal approximators. Neural Networks, 2, 359–366.

Infantino, I., Pilato, G., Rizzo, R., & Vella, F. (2013). I feel blue: Robots and humans sharing color representation for emotional cognitive interaction. Biologically Inspired Cognitive Architectures 2012 – Advances in Intelligent Systems and Computing, 161–166.

Langley, P., Laird, J., & Rogers, S. (2009). Cognitive architectures: Research issues and challenges. Cognitive Systems Research, 10, 141–160.

Maniscalco, U., Pilato, G., & Vassallo, G. (2011). Soft sensor based on e-anets. Frontiers in Artificial Intelligence and Applications – Neural Nets WIRN10, 226, 172–179.

Martinetz, T. M., Berkovich, S. G., & Schulten, K. J. (1993). ‘‘Neural-gas’’ network for vector quantization and its application to time-series prediction. IEEE Transactions on Neural Networks, 4, 558–569.

O'Donovan, P., Agarwala, A., & Hertzmann, A. (2011). Color compatibility from large datasets. ACM Transactions on Graphics – Proceedings of ACM SIGGRAPH, 30, 1–63.

Ou, L. C., Luo, M. R., Woodcock, A., & Wright, A. (2004). A study of colour emotion and colour preference. Part I: Colour emotions for single colours. Color Research & Application, 29, 232–240.

Pereira, F. C. (2007). Creativity and artificial intelligence: A conceptual blending approach (Vol. 4). Walter de Gruyter.

Pilato, G., Sorbello, F., & Vassallo, G. (2001). An innovative way to measure the quality of a neural network without the use of the test set. JACI International Journal of Advanced Computational Intelligence, 5(1), 31–36.

Powell, M. J. D. (1977). Restart procedures for the conjugate gradient method. Mathematical Programming, 12, 241–254.

Samsonovich, A. V. (2012). On a roadmap for the BICA challenge. Biologically Inspired Cognitive Architectures, 1, 100–107.

Shugrina, M., Betke, M., & Collomosse, J. (2006). Empathic painting: Interactive stylization through observed emotional state. In D. DeCarlo & L. Markosian (Eds.), Proceedings of the 4th international symposium on non-photorealistic animation and rendering (pp. 87–97). Annecy, France.

Strapparava, C., & Mihalcea, R. (2007). SemEval-2007 task 14: Affective text. In Proceedings of the fourth international workshop on semantic evaluations (SemEval-2007), Association for Computational Linguistics (pp. 70–74). Prague, Czech Republic.

Strapparava, C., & Mihalcea, R. (2008). Learning to identify emotions in text. In Proceedings of the 2008 ACM symposium on applied computing (pp. 1556–1560). New York, NY, USA: ACM.

Thagard, P., & Stewart, T. C. (2011). The aha! experience: Creativity through emergent binding in neural networks. Cognitive Science, 35, 1–33.

Valitutti, R. (2004). WordNet-Affect: An affective extension of WordNet. In Proceedings of the 4th international conference on language resources and evaluation (pp. 1083–1086).

Wang, L., & Mueller, K. (2008). Harmonic colormaps for volume visualization. In IEEE/EG symposium on volume and point-based graphics.