
Evaluation of commercial brain–computer interfaces in real and virtual world environment: A pilot study

Athanasios Vourvopoulos, Fotis Liarokapis*
Interactive Worlds Applied Research Group (iWARG), Faculty of Engineering and Computing, Coventry University, Coventry CV1 5FB, UK
* Corresponding author. E-mail addresses: [email protected] (A. Vourvopoulos), [email protected] (F. Liarokapis).

Computers and Electrical Engineering 40 (2014) 714–729. Available online 12 November 2013. http://dx.doi.org/10.1016/j.compeleceng.2013.10.009

Abstract

This paper identifies the user's adaptation to brain-controlled systems and the ability to control brain-generated events in a closed neuro-feedback loop. The user experience is quantified for the further understanding of brain–computer interfacing. A working system has been developed based on off-the-shelf components for controlling a robot in both the real and the virtual world. Using commercial brain–computer interfaces (BCIs), the overall cost, set-up time and complexity can be reduced. The system is divided into two prototypes based on the headset type used. The first prototype is based on the Neurosky headset and has been tested with 54 participants in a field study. The second prototype is based on the Emotiv headset, which includes more sensors and offers higher accuracy, and was tested with 31 participants in a lab environment. Evaluation results indicate that robot navigation through commercial BCIs can be effective and natural in both the real and the virtual environment.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Brain–computer interfaces (BCIs) are communication devices which enable users to send commands to a computing device using brain activity only [1]. This technology is a rapidly growing field of research and an interdisciplinary endeavour: research into BCIs involves knowledge of disciplines such as neuroscience, computer science, engineering and clinical rehabilitation. Brain-controlled robots and serious games can be used for a wide range of applications, from modern computer games [2] and prosthetics and control systems [3] through to medical diagnostics [4]. Although research in the field started during the 1970s, only in the last few years has it become possible to bring BCIs to a wider audience, mostly through commercial headsets for computer games and other simulations.

BCIs can be categorised based on the electroencephalographic (EEG) recording method into three main categories: invasive, partially invasive and non-invasive. With invasive BCIs, the signals are recorded from electrodes implanted surgically over the brain cortex, into the grey matter of the brain, during neurosurgery. Partially invasive BCIs are placed inside the skull but not within the grey matter. Non-invasive BCIs operate by recording brain activity from the scalp with EEG sensors attached to the head on an electrode cap or headset, without surgical implantation, and they are the most widely used.

EEG is the most studied non-invasive interface, mainly due to its portability, ease of use and low set-up cost [5]. The raw EEG is usually described in terms of frequency ranges: Gamma (γ) greater than 30 Hz, Beta (β) 13–30 Hz, Alpha (α) 8–12 Hz, Theta (θ) 4–8 Hz, and Delta (δ) less than 4 Hz [6]. Delta (δ) waves, with the lowest frequencies of all the brainwaves, are most apparent in deep sleep states, where conscious brain activity is minimal. Theta (θ) waves appear in a relaxed state and during light sleep and meditation. Alpha (α) waves are typically associated with meditation and relaxation, more so than any other waves. Beta (β) waves are connected to alertness and focus. Gamma (γ) waves can be stimulated by meditating while focusing on a specific object.
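These ranges can be summarised in code. The following is a minimal sketch in Java; the inclusive/exclusive handling of the boundary frequencies is our assumption, as the quoted ranges leave the exact cut-offs open.

/**
 * Maps a frequency in Hz to the EEG band ranges quoted above.
 * Boundary handling (which band owns exactly 4, 8, 12 or 30 Hz)
 * is an assumption; sources vary on the cut-offs.
 */
public enum EegBand {
    DELTA, THETA, ALPHA, BETA, GAMMA;

    public static EegBand of(double hz) {
        if (hz < 4.0)   return DELTA;  // deep sleep
        if (hz < 8.0)   return THETA;  // relaxation, light sleep
        if (hz <= 12.0) return ALPHA;  // meditation, relaxation
        if (hz <= 30.0) return BETA;   // alertness, focus
        return GAMMA;                  // above 30 Hz
    }
}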

Different BCI systems can be distinguished based on their application and classified into different areas of use. Recent reviews [7,8] have categorised BCI systems into: communication, motor restoration, environmental control, locomotion and entertainment. BCI for communication is recognised as one of the most important applications of brain control, due to the essential need for communication, use of language and expression of feelings. A number of approaches have been explored through the last decades of BCI research for using brain activity as a control signal for communication, such as slow cortical potential (SCP)-based spelling BCIs for selecting letters of the alphabet through a spelling device [9], detection of eye blinks [10] or classification of mental tasks [11] for keyboard control, P300 event-related brain potentials for letter spelling [12] and, finally, Graz-BCI, a letter speller based on mental hand and leg motor imagery [13]. Another important use of BCIs is motor restoration after stroke, spinal cord injury (SCI), traumatic brain injury (TBI) or other neurological diseases. The important function of a BCI system in neurological diseases can be attributed to its ability to close the loop between the brain and the affected limbs through neuroprostheses [14], robotic orthoses and electrical stimulation [15], compensating for the lost function by creating a pathway from the nervous system to the limb.

People with motor disabilities are often confined within a home environment, unable to use domestic devices such as the TV, radio, lights and various other appliances. For this reason, environmental-control BCI systems have been created in the last few years [16,17], with promising results. In addition, robotic wheelchairs controlled by BCI-based systems [18] are used for indoor and outdoor navigation with the help of assistive sensors for obstacle avoidance and path finding, owing to the small information rate of the BCI and the slow reaction time of the patient. BCI systems for entertainment and video game interaction through brain activity have been known for many years [19,20]. Within the last few years, with the launch of many commercial BCI headsets as potential gaming controllers [21,22], brain-controlled games have started to gain ground, exposing the brain-interaction and adaptation mechanisms within games and virtual environments. This recent advancement in the field of EEG technology and BCI offers an acceptable quality-to-cost ratio and easy-to-use, out-of-the-box equipment with commercial BCI headsets for a plethora of new applications [23,24], which could lead to new levels of understanding in the study of the brain, its mechanisms and brain–computer interaction.

This research focuses on how a robot operated through brainwaves can overcome the kinetic constraints of the user. It investigates ways of extracting valuable information from the user's brain activity by interacting with both real-world objects and virtual-world environments. The experimental prototype uses the basic movement operations of a Lego Mindstorms NXT robot. There are two versions of this prototype, taking readings from the users' brain electrical activity in real time. The first version uses a single dry-sensor headset from Neurosky, using the attention levels of the user. The second version uses a 14 wet-sensor headset from Emotiv, taking readings not only from EEG signals but also from facial expressions, eye movement and head tilt, enabling users to fully control the robot subject to training. Evaluation results indicate that robot navigation through commercial BCIs can be effective and natural. Overall, the experience with the virtual environment was reported as quite engaging and interesting despite certain minor issues. It was reported that previous experience in computer games can speed up the learning time for using the brain interface. Finally, many reports concerned the tiredness that the system triggered after several minutes of exposure.

The rest of the paper is structured as follows. Section 2 provides a brief overview of similar systems and Section 3 presents an overview of our system. Sections 4 and 5 present the two experimental prototypes that were implemented, including evaluation results. Section 6 statistically compares the two samples. Finally, Section 7 presents conclusions and future work.

2. Background

2.1. Overview of BCIs

Active research on brain–computer interfaces (BCIs) started in the early 1970s, when the expression 'brain–computer interface' first appeared in published papers and journals [25,26]. Only within the last few years has this kind of technology been introduced to ordinary users through computer games. Early research involved neuro-prosthetics aiming at restoring damaged hearing, sight and movement. Over the years, several laboratories have managed to create systems in which monkeys receiving biofeedback carried out movements of robotic arms. In this way, researchers were able to map patterns of the neural activity of the brain and develop decoding algorithms.

In the 1980s, researchers at Johns Hopkins University found a mathematical relationship between the electrical responses of single motor-cortex neurons in rhesus macaque monkeys and the direction in which the monkeys moved their arms [27]. In later experiments with rhesus monkeys, researchers reproduced reaching and grasping movements in a robotic arm. By the mid-1990s, invasive BCIs had started to be applied to humans. One example of an invasive BCI is the direct brain implant to treat non-congenital blindness: sixty-eight electrodes were implanted into the patient's visual cortex and succeeded in producing phosphenes, the sensation of seeing light without light actually entering the eye, through cameras mounted on glasses that sent signals to the implant [28].

In 1998, researchers at Emory University in Atlanta installed a brain implant in a human that produced signals of high enough quality to simulate movement [29]. A year later, researchers decoded neuronal firings to reproduce images seen by cats: they decoded the signals from targeted cells in the cats' brains to generate movies of what the cats saw, which they were able to reconstruct using mathematical models [30]. More recently, a BCI that reproduced owl monkey movements while the monkey operated a joystick or reached for food was developed [31]. In another study, researchers from Washington University in St. Louis enabled a teenage boy to play Space Invaders using his electrocorticography (ECoG) implant, a partially invasive BCI device [32].

In the past few years, various ways of controlling robotic platforms (mainly electrical wheelchairs or robotic arms) for people suffering from a diverse range of impairments have also been investigated [33,34]. Patients suffering from locked-in syndrome, spinal cord injury or damage to regions of the brain responsible for body movement have used BCIs focused on motor neuroprosthetics in order to rehabilitate or assist their interaction with the outside world [29]. With the launch of commercial headsets as alternative gaming controllers, BCIs appeared in the computer gaming domain. This allowed researchers to start developing various novel applications with relatively low-cost non-invasive EEG equipment and Software Development Kits (SDKs). This technology boosted games-oriented BCI research, mainly targeting medical applications and brain rehabilitation through the use of serious games and virtual worlds. Furthermore, gaming technology has been assisted by virtual and augmented reality systems, creating hybrid BCI systems for enhancing the user experience and for the study and improvement of brain–computer interaction [2].

2.2. BCIs in games and virtual environments

Non-invasive BCI research in serious games development is usually oriented towards the medical domain rather than entertainment. An early study developed an internet game linked to a BCI: the system translated real-time brain activity from the prefrontal cortex (PFC) or hippocampus (CA1) of a rat into external device control commands and used them to drive an internet game called RaviDuel [35]. Another BCI-based 3D game measured the user's attention levels to control a virtual hand's movement, making use of 3D animation techniques; this system was developed for training those suffering from Attention Deficit Hyperactivity Disorder (ADHD) [36]. In another study, the design and implementation of a game capable of moving an avatar in a tennis game using only brain activity was examined [37], using the mu (μ) rhythms of brain activity [38]. This can assist people with diseases involving movement difficulties in controlling the keyboard and mouse of a computer.

An EEG pattern recognition system has been designed to adapt a serious game, played without any traditional controllers, by comparing recognition rates for experimental purposes; the proposed Support Vector Machine (SVM) classification algorithm was compared with other algorithms and improved the recognition rate [39]. A quantitative and qualitative study of self-paced brain–computer interaction with virtual worlds showed that, without training, roughly half of the subjects were able to control the application by using real foot movements and a quarter were able to control it by using imagined foot movements. The application consisted of an interaction with a virtual world inspired by the 'Star Wars' movie: participants were asked to control the take-off and height of a virtual spaceship using their motor-related brain activity [40].

The most common BCI-based input devices use motor control. An example is the mu (μ) rhythm based first-person shooter game of [41], which uses the player's brainwaves (from the motor cortices) to steer left/right, while forward/backward movement is controlled by physical buttons. Four participants underwent 10 h of training over 5 weeks to learn how to control their mu-power. Another similar BCI system, proposed by Krepki et al. [19], used motor control based on the lateralized readiness potential (LRP) – a slow negative shift in the EEG over the activated motor cortex – for controlling the Pacman game. Examples of P300-based games are Bayliss's virtual driving task and virtual apartment [42,43], with highlighted red objects evoking a P300 when the user wanted to make a selection.

SSVEP-based games have also been designed, based on the subject's attention to a visual stimulus. In the MindBalance game of [44], an SSVEP is evoked by two different checkerboards, with the participant's attention focused on one of the two checkerboards to balance an avatar on a piece of string. An advantage of SSVEP over induced BCI paradigms is that multiple options can be selected by focusing attention on a stimulus. Examples are the 2D racing game of [45], which uses SSVEP to control four different directional movements, and the SSVEP BCI of [46], which controlled an FPS game in a similar way.

While these kinds of BCI techniques for controlling games are quite interesting, most of the games so far are proofs of concept. The interaction with these games is still slow, often with decreased gameplay speed to make BCI control feasible, as a result reducing fast-paced games to turn-based games. Recent projects move towards more stable control in BCI games [46,47], focusing on a smooth and more meaningful interpretation of BCI control signals. As a result, it is important to move beyond feasibility tests and proof-of-concept prototypes and focus on the role that BCIs play in improving the gaming experience and interaction within virtual environments.

2.3. Direct interface techniques using BCIs

Three main techniques are currently used in BCI systems for user interaction and control (e.g. for locked-in patients where haptic and linguistic interfaces fail): (a) steady-state visual evoked potentials (SSVEP), (b) the P300 and (c) ERS/ERD. SSVEPs are caused by visual stimulation with flashing lights and occur at the primary visual cortex of the brain; this evoked response in EEG signals to repetitive visual stimulation is called the SSVEP. In an SSVEP BCI system, specific frequencies are used for the repetitive visual stimuli.


For SSVEP detection, analysis in the frequency domain is performed to identify the peak in the frequency spectrum at the repetition frequency of the stimulus on which the participant was focused. By detecting this frequency, we can translate it into a control signal for a BCI system. Using this technique can raise some issues. One is gaze dependence; another, according to Fazel-Rezai et al. [48], is that some users find the flickering stimulus annoying and fatiguing. Using higher frequencies for the flickering stimuli can reduce this issue, but it makes the SSVEP harder to detect. On the other hand, SSVEP BCIs do not require any significant training and their information transfer rate is high [36,49].

A P300 BCI measures the brain's evoked response after stimulus onset: positive and negative deflections in the EEG signal. These deflections are called event-related potentials (ERPs) and, depending on their latency, they are clustered into exogenous and endogenous components [48]. The exogenous component occurs approximately 150 ms after the evoking stimulus, while the endogenous components have a longer latency. The largest positive deflection of the response occurs between 250 and 750 ms after the stimulus. It appears as a positive curve in the EEG after about 300 ms (hence the name) following relevant and infrequent stimulation, with the strongest signals captured at the parietal lobe. The P300 is the most used ERP component in BCI systems.

ERS/ERD stands for event-related synchronisation/desynchronisation, e.g. through imagined limb movement, located at the motor and somatosensory cortex of the brain. This kind of EEG activity has been used as one of the sources for BCIs [1]. Motor imagery is one way to create changes in ERD/ERS and has been used in a plethora of BCI systems [50]. During the imagined movement (e.g. of a limb), ERD occurs predominantly over the contralateral brain motor area and can therefore be used as a control signal for a BCI system. ERD/ERS BCIs have been used in different kinds of applications, including the two-dimensional cursor control of the first BCI systems.

3. System overview

The basic hardware components of this research include two commercial EEG headsets, the Neurosky (Mindset and Mindwave) and the Emotiv Epoc neuro-headset. Additionally, a LEGO Mindstorms NXT robot, a desktop computer, an Ultra-Mobile Personal Computer (UMPC) and a netbook PC were used. The software components include a realistic virtual environment (the computer game) with a 3D reconstruction of the NXT robot, a Java application for the physical robot and a client/server program that establishes a connection with both the physical and the virtual robot. The Neurosky headsets have been used to extract the attention and meditation levels of the user. The headset processes the raw EEG signal to produce the 'eSense' meters [21], based on an algorithm patented by Neurosky. The patterns of the electrical activity are analysed with the help of specialised algorithms (feature extraction and classification), converting the EEG signals into control commands. The Neurosky headset uses a single dry sensor attached to the forehead, over the frontal lobe of the brain, which is responsible for attention and short-term memory tasks.

The Emotiv EPOC headset uses 14 saline sensors and is able to detect not only brain signals but also the user's facial expressions, eye movement and head position through a 2-axis gyroscope. The various facial expressions are referred to as 'Expressiv' by Emotiv; levels of engagement, frustration, meditation and excitement as 'Affectiv'; and the training and detection of certain cognitive neuro-activities such as 'push', 'pull', 'rotate' and 'lift' as 'Cognitiv' [22]. By measuring these brainwaves, the headsets can infer which mental state the user is in. With all the brain-activity details provided on a computer, the user is able to receive a closed neuro-feedback loop. This is a type of biofeedback that can be useful for the analysis and development of tools that help people with severe brain damage, for brain rehabilitation, or that enable them to control a computer program or robotic device.

As the physical robot, the Lego Mindstorms NXT kit was used with a Java Virtual Machine (JVM) installed. This tiny JVM for ARM processors (called LeJOS) was ported to the NXT robot. The main advantage of LeJOS NXJ is that it provides the extensibility and adaptability of the Java language [51]. The physical robot used includes a 32-bit AT91SAM7S256 (ARM7TDMI) main microprocessor at 48 MHz (256 KB flash memory, 64 KB RAM). It also provides servo motors and four different types of sensors: touch, light, sound and ultrasonic.

Fig. 1. Overview of the system.


The touch sensor has been used in the prototype to allow for collision detection with the real environment. The light sensor is also used as a height detector on this model. The way this prototype works with the headsets is illustrated in Fig. 1: the user is visually stimulated, the raw data is processed on the headset's chip and then sent to the dedicated computer. A client/server program instructs the robot to move based on the user's brain activity (or muscular movement), both in the real and in the virtual world. The robot terminates the connection when it hits an obstacle or is stopped by the user.

4. First prototype (Neurosky)

Two types of Neurosky headsets were used for this research (see Fig. 2). The main difference between them is that the 'Mindset' (left) is a complete headset with speakers and a microphone, transmitting data over Bluetooth, while the 'Mindwave' (right) comes without the audio headset and transmits data using radio frequency.

Both headsets use the same ThinkGear Communications Driver (TGCD) library from Neurosky's development tools. After a connection with the headset has been established, both the physical and the virtual robot connect to the client/server application, which sends instructions through sockets. The first step in communicating is to establish a connection (which has an initiator and a receiver); the receiver waits for a connection from the initiator.

The initiator is the dedicated PC running the client/server program and the receiver is the NXT robot, both the physical one and the computer simulation. The application running on the robot waits for a Bluetooth connection; when the PC program establishes a connection with the headset, it initiates a connection with the robot and the simulation to send instructions in the client/server architecture. As soon as the connection has been established, both ends of the connection can open input and output streams and read/write data. The computer sends integers (in the range 0–100) to the robot depending on the attention levels of the user. Based on those inputs, the robot performs actions and sends feedback back to the computer. In this case, if the bumper sensor is pressed, the session is terminated at both ends.
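The robot side of this protocol could look like the following minimal LeJOS NXJ sketch. The sensor port, the speed scaling and the movement threshold are our assumptions for illustration; the paper does not list the actual values used.

import java.io.DataInputStream;
import java.io.DataOutputStream;
import lejos.nxt.Motor;
import lejos.nxt.SensorPort;
import lejos.nxt.TouchSensor;
import lejos.nxt.comm.BTConnection;
import lejos.nxt.comm.Bluetooth;

/**
 * Receiver (robot) side: wait for a Bluetooth connection, read
 * attention values (0-100) and map them to motor speed until the
 * bumper (touch sensor) is pressed.
 */
public class AttentionDrive {
    public static void main(String[] args) throws Exception {
        TouchSensor bumper = new TouchSensor(SensorPort.S1); // assumed port
        BTConnection link = Bluetooth.waitForConnection();   // receiver waits for the initiator
        DataInputStream in = link.openDataInputStream();
        DataOutputStream out = link.openDataOutputStream();

        while (!bumper.isPressed()) {
            int attention = in.readInt();     // eSense attention level, 0-100
            Motor.A.setSpeed(attention * 7);  // assumed scaling to degrees/s
            Motor.B.setSpeed(attention * 7);
            if (attention > 40) {             // assumed movement threshold
                Motor.A.forward();
                Motor.B.forward();
            } else {
                Motor.A.stop();
                Motor.B.stop();
            }
            out.writeBoolean(false);          // feedback: no collision yet
            out.flush();
        }
        out.writeBoolean(true);               // bumper pressed: terminate at both ends
        out.flush();
        link.close();
    }
}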

To identify how fast users adapt to brain-generated events, an initial evaluation was conducted over three days at a national technology exhibition, 'The Gadget Show Live 2011', at the NEC, Birmingham, UK, where 54 participants tested the prototype. The Neurosky headset was used for this evaluation, as it is easier to set up and does not require any user training. The Gadget Show Live is the UK's ultimate consumer electronics and gadgets event, with hundreds of exhibitors from all over the world and thousands of visitors from all over the UK. This was a unique opportunity to gather data and feedback for further analysis and improvements of the prototype. After each testing session, users completed a questionnaire, providing initial feedback regarding the overall navigation experience with the Neurosky 'Mindset' and the robot.

4.1. Qualitative evaluation

The participant ages were almost equally distributed across the age groups, making the results more representative. Since the demonstration took place in the Future Tech Zone of the exhibition, men and young children formed the vast majority of the participants; as a result, the feedback gathered was mainly from males (46 males and 8 females). For this part, participants had to complete a quick and easy task: to move the robot forwards, accelerate up to a point and finally try to decelerate and stop it before the end of a small track. All participants used their brainwaves to interact with the robot for the first time. However, even though the Neurosky headset does not require any training, for some users it was quite a unique experience and hard to adapt to and take control of straight away. Some of them found it a strange experience to concentrate and tele-operate a robot with only their brain power. After being allowed a few minutes to familiarise themselves, participants started to gain control and adapt to the prototype system. The majority of them reported that they were amazed by the fast response (latency around 1 s) of the robot as they tried to move it forwards and then stop it.

A common problem that was reported was external distraction, which produced unwanted results through lack of concentration. Some participants found it difficult to stay focused on their task amid the external stimuli generated by the people standing around. Another issue that was recorded was the initial excitement of the participants when they saw the robot moving, boosting their attention level and, as a result, making the robot's movement unstable and hard to stop.

Fig. 2. Neurosky Mindset (left), Mindwave (right).

Table 1. Positive and negative comments that have been reported from the users.

Positive:
• Interesting concept
• Could be very useful
• Lots of potential
• Can increase people's concentration level
• Can help improve disabled people's lives
• Can help for brain rehabilitation by triggering the motor cortex of the brain
• The feel of controlling it with my mind was awesome

Negative:
• Being able to move left/right
• Need some indication on the PC as to the level of thought
• Needed less distractions
• Difficult to keep the robot stationary
• Too many outside stimuli
• Hard to remain calm

Table 2. Descriptive statistics.

Item                                              N    Min  Max  Mean  Std. dev.  Skewness (std. error)  Kurtosis (std. error)
Ability to control events                         54   2    7    4.26  1.216      -.195 (.325)           .154 (.639)
Application responsiveness to initiated actions   54   1    7    4.11  1.284      .008 (.325)            -.118 (.639)
User interactions with the robot                  54   1    7    4.15  1.250      -.231 (.325)           -.039 (.639)
How natural was the robot control mechanism       52   1    7    4.27  1.300      -.414 (.330)           -.189 (.650)
Valid N (listwise)                                52


A few participants reported that it was easy to navigate at first, but as soon as you 'feel success and start thinking about it, the robot starts to move again'. That made it difficult for some to stay focused and relaxed in order to stop the robot and complete the task. The task showed that, through the continuous neuro-feedback from the system, participants started to adapt and increase their control levels relatively fast (within 5 min). A summary of the positive and negative feedback received is presented in Table 1.

4.2. Quantitative evaluation

This section presents the distribution of the participant responses based on their experience with the robot. Each chart has a descriptive label and the answers are based on a 7-point scale. The standard deviation (σ) was used to measure the variation from the average user responses:

\sigma = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} (x_i - \mu)^2 }   (1)

i.e. the square root of the variance

\mathrm{Var}(X) = E[(X - \mu)^2]   (2)

where the mean is obtained by averaging the observations:

\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i   (3)
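For concreteness, the following minimal sketch computes the quantities of Eqs. (1)–(3) in their population form (dividing by N, as Eq. (1) does); the sample responses are hypothetical.

/**
 * Mean (Eq. 3) and standard deviation (Eqs. 1-2) of 7-point
 * questionnaire responses, population form (divide by N).
 */
public final class Descriptives {
    static double mean(int[] responses) {
        double sum = 0;
        for (int r : responses) sum += r;
        return sum / responses.length;               // Eq. (3)
    }

    static double stdDev(int[] responses) {
        double mu = mean(responses);
        double sumSq = 0;
        for (int r : responses) sumSq += (r - mu) * (r - mu);
        return Math.sqrt(sumSq / responses.length);  // Eq. (1)
    }

    public static void main(String[] args) {
        int[] sample = {4, 5, 3, 4, 6, 4, 2, 5};     // hypothetical answers
        System.out.printf("mean = %.2f, sd = %.2f%n", mean(sample), stdDev(sample));
    }
}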

In a plot of a normal distribution (or bell curve), the standard deviation determines the width of the curve. Analysing the statistical charts below, an approximately normal distribution of the users' choices with a standard deviation of 1.2 (Table 2) can be identified. Fig. 3a illustrates the number of participants who were almost (but not completely) able to control the events that were generated from their brainwaves (to the robot), with an average of 4.2 and a standard deviation of 1.2 (see Table 2). Fig. 3b illustrates the responsiveness of the robot in relation to the actions that the participants wanted to perform; the majority of them answered between 3 and 5, with an average of 4.1 and again a standard deviation of 1.2.

Fig. 4a illustrates the effectiveness of the human–robot interaction. It demonstrates how the participants felt when interacting with the robot to perform certain tasks (i.e. move forward and finish the track). The deviation of this chart is the same as in Fig. 3a and b, at 1.2, with almost the same average of 4.1. Fig. 4b illustrates how natural the mechanism which controlled the robot felt. Results showed an almost identical distribution to the previous tests and a deviation of 1.2.

Fig. 3. (a) Ability to control events. (b) Application responsiveness to actions that have been initiated.

Fig. 4. (a) User interactions with the robot. (b) How natural was the mechanism which controlled the robot.


5. Second prototype (Emotiv)

Based on the first prototype, a more advanced version has been created, with the robot performing basic manoeuvring (moving forwards, moving backwards, turning left and turning right) using the Emotiv Epoc headset. The main idea is to use the Cognitiv functions (brainwaves) to move the robot forwards/backwards, and the Expressiv functions to steer the robot left/right when the user blinks accordingly. The Emotiv Epoc headset is a wireless neuro-signal acquisition and processing neuro-headset with 14 wet sensors (and 2 reference sensors), able to detect brain signals and the user's facial expressions and offering optimal positioning for accurate spatial resolution [4]. An integrated gyroscope generates positional information for cursor and camera controls; the headset connects wirelessly through a USB dongle and comes with a lithium battery providing 12 h of continuous use. The sixteen (14 plus 2 reference) sensors are placed according to the international 10–20 system [52], an internationally recognised method describing electrode placement on the scalp for EEG tests or experiments. An overview of the system is illustrated in Fig. 5.

The system is fully operational and relies on a combination of Cognitiv and facial/muscular functions [53]. In terms of the implementation, the Emotiv Development Kit was used, connecting the Emotiv Epoc headset to the Emotiv Control Panel to create and train a new user profile. The Cognitiv functions (brainwaves) are used to move the robot forwards/backwards, and the Expressiv functions to assist steering left/right when the user blinks accordingly. Unlike the Neurosky Mindset (which uses only one dry sensor and does not require user training), Emotiv needs a unique user profile to be trained to map the user's brain patterns.

Fig. 5. Feedback loop.

Fig. 6. (a) The robot navigating through the maze and (b) the NXT robot with UMPC attached.

Fig. 7. Left: Top map view. Right: Robot view.


In a training session of no more than 10 min, users' skill levels increased up to approximately 45% for the forward and backward moves and around 10% for left and right. Training the profile requires practice and familiarisation, especially when the user needs to train more than two actions, as it is easy to get distracted by outside stimuli and to 'confuse' the training process about the user's real 'intentions'. To take control of the events, the EmoKey application was used, connected to the Emotiv Control Panel, to generate keyboard events for each identified and trained thought. EmoKey then transfers these events to a key-reading client/server application, with each event treated as a 'thought' and passed through a socket connection to the physical robot and the computer simulation.
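A minimal sketch of such a key-reading client is shown below. EmoKey maps each trained thought or expression to a keystroke; this console client reads those keystrokes and forwards one command character per event over a socket. The host, the port and the key-to-command mapping ('w'/'s' for the Cognitiv forward/backward thoughts, 'a'/'d' for blink-based steering) are our assumptions.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

/**
 * Reads keystrokes generated by EmoKey from standard input and
 * forwards the mapped movement commands to the robot/simulation server.
 */
public class EmoKeyClient {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 4444);  // assumed endpoint
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader keys = new BufferedReader(new InputStreamReader(System.in))) {
            int key;
            while ((key = keys.read()) != -1) {
                switch (key) {
                    case 'w': case 's':           // Cognitiv push/pull -> forwards/backwards
                    case 'a': case 'd':           // Expressiv blinks -> left/right
                        out.println((char) key);  // forward the command character
                        break;
                    case 'q':
                        return;                   // quit; resources close automatically
                    default:
                        break;                    // ignore unmapped keys
                }
            }
        }
    }
}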

Moreover, the 3D environment has been designed using the Unity 3D game engine, with a 3D reconstruction of the robot navigating through a simple maze. Unity 3D is an integrated development environment for computer game design and runs on Microsoft Windows and Mac OS X [54]. For this simulation, both the C# and JavaScript programming languages were used. In the computer simulation, the user navigates the robot through the maze and reaches the end using the Emotiv headset (Fig. 6a).

Fig. 8. User testing trial.


It is worth mentioning that for this prototype an Ultra-Mobile PC (UMPC) was attached to the robot for movement independence (Fig. 6b). The UMPC carries an Intel Core Solo U1500 processor at 1.33 GHz, 1 GB of DDR2 RAM and a 32 GB solid-state drive, running Windows. The Emotiv version appears to have the appropriate attributes for a robust multimodal BCI-robot system, making use of all the available user inputs.

Following the evaluation of the first prototype with the Neurosky headset, a second assessment took place using the Emotiv headset in a virtual environment. The experimental design was formed based on the interface complexity and the set-up time required by the system. Since the Emotiv software needs to be trained (unlike Neurosky) to classify the movements intended by the user, the experiment took place in a lab environment under controlled conditions. The total sample was n = 31 users, university students and staff, without any previous experience of BCIs. The game was designed for the user to navigate a 3D model of the LEGO NXT robot through a maze, with the main goal of finding the different waypoints in order to finish the level (see Fig. 7). The levels were statically allocated from pre-defined maze designs. If the robot hits a wall, its position resets and it starts again from the beginning; in this way the user has to be precise but also able to find the way to the end. These actions can offer information on the adaptability of the user receiving real-time neuro-feedback. Each user used a personalised profile from the Emotiv Control Panel.

5.1. Apparatus and procedure

For this prototype, off-the-shelf hardware components have been used, the main component being the Emotiv Epoc neuro-headset (i.e. 14 electrodes and a gyroscope). A laptop with a 64-bit Intel(R) Core(TM) 2 Duo T7700 processor at 2.4 GHz and 4 GB of memory was used for the evaluation. The laptop is equipped with an NVIDIA GeForce 8700M GT graphics card. Standard laptop display technology, a 17.0 in. wide LCD, was employed to display the 3D content of the application. It was ensured that each participant was comfortable and at ease prior to the start of the experiment. The participants were told that their data would be used anonymously, along with the data of several others, and that the experiment was divided into two main stages. During the first stage of the experiment, the participants had to create and train a profile using the Control Panel, calibrating the system at the same time. Next, the users had to navigate the 3D model of the Lego NXT Mindstorms robot inside a statically designed maze and find their way to the various waypoints of the level using only their brainwaves and the neuromuscular activity of their face (Fig. 8). The instructions and the statements used during the preliminary briefing were standardised for all the participants. Initially, each participant's personal information was recorded. During all stages of the experiment, participants were instructed to interact with the 3D objects by staying focused on the computer monitor, using only their brainwaves and facial neuromuscular activity as inputs, and not to move their limbs. In addition, the participants were instructed not to move the headset, to avoid noise and artefacts in the signal.

At the start of the experiment, a popup window was generated in order to acquire the participant's ID and verify the connection of the headset. Once the ID had been entered, the window was removed and the calibration panel appeared with a timer measuring the training session. The panel was composed of a 3D cube and an EEG wave graph. When the timer indicated that 60 s of exposure time had expired, the simulation was stopped by the author, ensuring that each participant was restricted to exactly 60 s of exposure to the training interface. Participants were then asked to relax and enter the first game when they felt ready. During the gaming session, raw EEG data and Emotiv's Affectiv states (boredom, engagement, frustration) were captured. The user's brain activity and cognitive-state data were acquired and stored for further analysis. Each user had to create and train a unique profile using the Control Panel within a time limit, and then interact with the game. As a specific task, the user had to navigate the robot to the various waypoints of the level and terminate. Finally, an online user evaluation form was completed by each user, followed by a short interview, with an overall testing time of approximately 20–30 min per user.

Table 3. Central tendency (N = 31 for every item; no missing values).

Item                                                          Mean  Median  Std. dev.  Variance  Skewness  Kurtosis  Min  Max
How much were you able to control events?                     4.35  4.00    .877       .770      -.157     -.764     3    6
How responsive was the robot to actions you initiated?        4.10  4.00    1.165      1.357     .342      -1.007    2    6
How much delay between your actions and expected outcomes?    4.10  4.00    1.423      2.024     -.107     -.929     2    7
How proficient did you feel in the virtual environment?       4.23  4.00    1.283      1.647     -.149     .033      1    7
How quickly did you adjust to the experience?                 4.45  5.00    1.150      1.323     -.436     -.397     2    6
How completely were your senses engaged?                      5.00  5.00    1.317      1.733     .000      -.994     3    7
Ease of use                                                   3.13  3.00    .763       .583      -.708     .608      1    4
Ease of learning                                              3.68  4.00    .979       .959      -.194     -.886     2    5
Usability                                                     3.42  3.00    .886       .785      -.354     .825      1    5
Satisfaction                                                  3.45  3.00    1.028      1.056     -.058     -.215     1    5

Std. error of skewness = .421 and std. error of kurtosis = .821 for all items.

5.2. Qualitative evaluation

All users were asked to provide comments on the questionnaire anonymously, and an unstructured short interview also took place in a relaxed mood after each trial. These comments and suggestions are a very helpful contribution towards the improvement of the system, providing feedback that an ordinary questionnaire cannot capture. A very useful suggestion concerned the graphical user interface (GUI) of the game: users found it easier to focus on GUI components instead of the virtual object (the robot) to perform the required movement. This might be a result of the training trial, in which the users had to push/pull a virtual cube, so that when entering the virtual environment they had to re-adapt to the new elements. This is confirmed by reports that it was easier to move the robot by thinking of the cube from the training trial than by actually pushing the robot itself. This is really interesting, as the users were not getting direct feedback from the robot on the screen to complete the moves. It is a clear indication that the training trial should include the components from the game, so that the user becomes familiarised with them; alternatively, assistive GUI components might be a useful addition. Overall, the experience was reported as quite engaging and interesting despite certain issues of response time and accuracy that other natural user interfaces (NUIs) might also have. Finally, it was reported that people with more experience in computer games would have an easier time learning to use the interface, due to the stimulation and interaction required by computer games; that experience makes it much easier to learn how to operate the interface. The only negative reports had to do with the tiredness that the system triggered after a few minutes of interaction.

5.3. Quantitative evaluation

Summarising the data samples from the users, descriptive statistics have been generated in order to quantify the experience. The sample consisted of 67.7% males and 32.3% females, with a dominant age group of 18–25 (45.2%) closely followed by the 26–33 group (41.9%). The questionnaire used Likert scales ranging from 1 to 5 and, in some cases, from 1 to 9; hence, the results are treated as ordinal measurements. Measures of central tendency allow us to assess whether we have high or low values.

Table 3 shows that the distributions can be close to normal according to the mean values. Additionally, the skewness values (negative in 8 out of 10 answers) can be an indication of the values piling up on the right side. A useful way to assess normality is to look at the shape of the frequency distributions. As illustrated in the histograms below, the distributions of the data look normal, especially when compared with the normal curve line, which encloses the majority of the data. The majority of the data are clustered around the centre of the distribution, with far fewer observations at the ends.

Fig. 9a indicates the ability of the users to control events. A negative skewness of −0.157 indicates a small piling of the values on the right side of the distribution. Additionally, the negative kurtosis (−0.764) indicates a flatter distribution (platykurtic). Generally, as a rule of thumb, if the skewness or kurtosis is more than twice its standard error, the data can hardly be considered normal. On the other hand, Fig. 9b illustrates the responsiveness of the robot's actions. A positive skewness of 0.342 indicates the values piling up on the left side, giving an indication of low values in the distribution. That can be a result of low responsiveness from the robot while the user was controlling it with the BCI.

Fig. 10a illustrates the latency between actions and expected outcomes. The reported answers concerning the delay of the initiated actions are distributed across the whole range (with kurtosis at −0.929), with no clear indication of a positive experience, although the skewness of −0.107 indicates a slight piling up on the right side. Fig. 10b measures the proficiency in moving and interacting with the virtual environment (the 3D game). The vast majority of the answers cluster around the centre of the distribution, although the skew is negative, with the right side occupying the biggest part of the distribution.

Fig. 9. (a) Able to control events. (b) Responsiveness of the robot’s actions.

Fig. 10. (a) Latency between actions and expected outcomes. (b) Proficiency in navigation.

Fig. 11. (a) Ease of adjusting to the experience. (b) Engagement of the experience.

Fig. 12. (a) Ease of use. (b) Ease of learning.


Fig. 13. (a) Usability. (b) Satisfaction.


Fig. 11a describes the ease of adjusting to the whole experience. The piling of the answers on the right side (skewness of −0.436) verifies the answers reported by the users on how quickly they adjusted to the brain-generated events. Fig. 11b measures the engagement of the users. This distribution is the only one with a skewness of 0.0, with both parts of the results equally distributed, resulting in an equal mean and median of 5. In addition, the kurtosis is within twice its standard error, consistent with a normal distribution.

Fig. 12a illustrates the ease of use. The majority of the values are piled on the right side (skewness of −0.708), indicating a positive response on how easy the system was to use. Fig. 12b measures the ease of learning. This is almost equally distributed, with the majority of the values again on the right side (skewness of −0.194), but not markedly enough to clearly classify the difficulty.

Fig. 13a demonstrates the usability of the prototype. The majority of the values for usability are clustered in the middle and on the right, with a mean of 3.42 and a skewness of −0.354. This also verifies the reported answers concerning the level of usability the users experienced overall. Fig. 13b measures the satisfaction of the users. The overall satisfaction is distributed around the centre, with a mean of 3.45 and a very small skewness of −0.058. Additionally, the negative kurtosis indicates a slightly flatter peak than an ideal normal distribution, to which it is otherwise very close. The majority of the answers are above the mean, indicating user satisfaction and an overall acceptance of the system.

6. Testing between two samples

Since there are two experimental conditions with different participants assigned to each condition, the independent (unrelated) samples t-test has been performed. The two unrelated groups were exposed to the control of the robot using the BCI and were separated based on the robot representation (virtual vs. physical). The responsiveness of the robot and the ability to control it were reported for both conditions, to assess the overall human–robot interaction with the use of a commercial BCI. Table 4 provides summary statistics for the reported feeling of robot responsiveness and ability to control for both experimental conditions. The group who interacted with the physical robot had a mean responsiveness of 4.11 and a mean ability to control of 4.26, with standard deviations of 1.284 and 1.216 respectively; the standard errors for that group (the standard deviations of the sampling distribution) are 0.175 and 0.165. The average responsiveness and control level of the participants who interacted with the virtual robot were 4.06 and 4.35, with standard deviations of 1.181 and 0.877 and standard errors of 0.212 and 0.158 respectively.

Table 5 contains the main test statistics. In this table there are two rows containing values for the test statistics: equal variances assumed, and equal variances not assumed.

Table 4. Group statistics.

Measure           Group     N    Mean   Std. deviation   Std. error mean
Responsiveness    Physical  54   4.11   1.284            .175
                  Virtual   31   4.06   1.181            .212
Able to control   Physical  54   4.26   1.216            .165
                  Virtual   31   4.35   .877             .158

Table 5. Independent samples test.

Responsiveness (equal variances assumed): Levene's F = .044, Sig. = .834; t = .166, df = 83, Sig. (2-tailed) = .869; mean difference = .047, std. error difference = .281; 95% CI of the difference [-.513, .606].
Responsiveness (equal variances not assumed): t = .170, df = 67.020, Sig. (2-tailed) = .866; mean difference = .047, std. error difference = .275; 95% CI [-.502, .595].
Able to control (equal variances assumed): Levene's F = 1.321, Sig. = .254; t = -.384, df = 83, Sig. (2-tailed) = .702; mean difference = -.096, std. error difference = .249; 95% CI [-.591, .400].
Able to control (equal variances not assumed): t = -.418, df = 78.564, Sig. (2-tailed) = .677; mean difference = -.096, std. error difference = .229; 95% CI [-.550, .359].


Parametric tests assume that the variances in the experimental groups are roughly equal, and the two rows of the table relate to whether or not this assumption has been broken. Using Levene's test, it was possible to check whether the variances differ between the groups. Levene's test is similar to a t-test in that it tests the hypothesis that the variances in the two groups are equal. Therefore, if Levene's test is significant at p ≤ 0.05, we can be confident that the variances are significantly different and that the assumption of homogeneity of variances has been violated. For these data, Levene's test is non-significant, so we read the test statistics in the row labelled 'equal variances assumed'. Analysing the result of the t-test, the two-tailed p value is 0.869 for responsiveness and 0.702 for ability to control, so we conclude that there is no significant difference between the means of the two samples. In terms of the experiment, we can infer that the physical robot is as responsive as the virtual robot, and the same holds for the ability to control it.
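The 'equal variances assumed' row can be reproduced from the summary statistics of Table 4 with a pooled-variance t-test, as in the following minimal sketch. The mean difference of .047 is taken from Table 5, since the group means in Table 4 are rounded.

/**
 * Pooled-variance independent-samples t-test for the responsiveness
 * measure, reproducing the "equal variances assumed" row of Table 5.
 */
public final class PooledTTest {
    public static void main(String[] args) {
        int n1 = 54, n2 = 31;             // physical vs. virtual group sizes
        double sd1 = 1.284, sd2 = 1.181;  // responsiveness SDs (Table 4)
        double meanDiff = 0.047;          // mean difference (Table 5)

        // Pooled variance and standard error of the difference
        double pooledVar = ((n1 - 1) * sd1 * sd1 + (n2 - 1) * sd2 * sd2) / (n1 + n2 - 2);
        double se = Math.sqrt(pooledVar * (1.0 / n1 + 1.0 / n2));

        double t = meanDiff / se;  // ~0.17 with df = 83 (Table 5 reports .166 from unrounded data)
        System.out.printf("SE = %.3f, t(%d) = %.3f%n", se, n1 + n2 - 2, t);
    }
}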

Since the t-statistic is non-significant, it is important to assess our effect in practical terms through a standardised measure of its size. An effect size is simply an objective measure of the magnitude of the observed effect; it provides an estimate of the magnitude of the difference among sets of scores. The most common effect size measures are Cohen's d and Pearson's correlation coefficient r. Pearson's r is widely used and is constrained to lie between 0 (no effect) and 1 (a perfect effect). As a benchmark, we can classify the effect in the following ways:

• r = .10 (small effect)
• r = .30 (medium effect)
• r = .50 (large effect)

Cohen's d and Pearson's correlation coefficient r can be calculated using the following equations:

r = \sqrt{ \frac{t^2}{t^2 + df} }   (4)

d = \frac{M_1 - M_2}{ \sqrt{ (SD_1^2 + SD_2^2) / 2 } }   (5)
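The sketch below evaluates Eqs. (4) and (5) with the responsiveness statistics reported in Tables 4 and 5 (t = 0.166, df = 83; M1 = 4.11, M2 = 4.06, SD1 = 1.284, SD2 = 1.181), reproducing the values quoted in the next paragraph.

/**
 * Effect-size calculations of Eqs. (4)-(5), applied to the
 * responsiveness summary statistics from Tables 4 and 5.
 */
public final class EffectSize {
    static double pearsonR(double t, double df) {
        return Math.sqrt((t * t) / (t * t + df));  // Eq. (4)
    }

    static double cohensD(double m1, double m2, double sd1, double sd2) {
        return (m1 - m2) / Math.sqrt((sd1 * sd1 + sd2 * sd2) / 2.0);  // Eq. (5)
    }

    public static void main(String[] args) {
        System.out.printf("r = %.3f%n", pearsonR(0.166, 83));               // ~0.02
        System.out.printf("d = %.3f%n", cohensD(4.11, 4.06, 1.284, 1.181)); // ~0.04
    }
}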

Responsiveness: d = 0.04, r = 0.02. Able to control: d = −0.084, r = −0.042. On average, participants experienced greater responsiveness with the physical robot (M = 4.11, SD = 1.284) than with the virtual robot (M = 4.06, SD = 1.181). This difference was not significant, t(83) = 0.166, p > .05, and it represented only a small effect, r = 0.02. Furthermore, participants experienced a greater ability to control the virtual robot (M = 4.35, SD = 0.877) than the physical robot (M = 4.26, SD = 1.216). Again, this difference was not significant, t(83) = −0.384, p > .05, and it likewise represented a small effect, r = −0.04. In summary, the effect was non-significant in both cases, representing a small effect.

7. Conclusions and future work

This paper presented a human–robot interaction system with commercial, non-invasive BCI headsets, using off-the-shelf components for robotic tele-operation. Two prototypes were experimentally tested to discover how easy it is to control brain-generated events in a closed neuro-feedback loop. Overall, the results are promising and important for the development of future neuro-feedback-based systems, ranging from serious games to rehabilitation and clinical research. Through user testing, the impact that this kind of technology has on 'healthy' users with no previous experience of BCI-controlled games was investigated. In particular, the results indicate that the performance and agility of the users were reduced over time, both in the training process and in the overall interaction, for both interfaces. This is very important for assessing future BCI games and helps further in designing better environments for BCI, hybrid BCI and other natural user interface systems with high cognitive demands.


Future work will include the combination of different readings with the use of more sophisticated non-invasive BCI devices equipped with more electrodes and sensors, as well as detailed statistical analysis of the raw EEG data and the Emotiv Affectiv output. Moreover, it will explore ways of extracting additional physiological data, including body posture and facial expression, to assist the classification process and better determine the user's intentions. This opens up new opportunities and challenges in brain–computer interaction and pervasive computing. Finally, it will address identifying and extracting features from brainwaves (EEG signals) and multimodal systems in a closed feedback loop in a controlled environment through serious games.

Acknowledgements

The authors would like to thank the Interactive Worlds Applied Research Group (iWARG) members for their support and inspiration. Videos that illustrate the operation of both systems can be found at: http://www.youtube.com/user/vourvopa.


Athanasios Vourvopoulos is a research student and a Computer Science graduate of Coventry University, involved in brain-controlled virtual environments and brain–computer interaction. He has previous experience in brain-controlled robots, having assessed various prototypes during his undergraduate and postgraduate studies. His current research focuses on stroke rehabilitation through neuro-feedback with the use of Brain–Computer Interfaces (BCIs) and Virtual Environments (VEs).

Dr. Fotis Liarokapis is the director of the Interactive Worlds Applied Research Group (iWARG) and a research fellow at the Serious Games Institute (SGI). He has contributed to more than 75 refereed publications in the areas of computer graphics, virtual and augmented reality, serious games, brain–computer interfaces and procedural modelling. He is one of the co-founders of the VS-Games international conference and a member of IEEE, IET, ACM and Eurographics.