Effects of device obtrusion and tool-hand misalignment on user performance and stiffness perception in visuo-haptic mixed reality

L. Barbieri, F. Bruno*, F. Cosco, M. Muzzupappa
Department of Mechanical, Energy and Industrial Engineering (DIMEG), University of Calabria, ponte P. Bucci, 46C, 87036 Rende (CS), Italy
* Corresponding author.

Article info

Article history:
Received 8 November 2013
Received in revised form 23 June 2014
Accepted 25 July 2014
Communicated by B. Lok
Available online 5 August 2014

Keywords:
Visuo-Haptic Mixed Reality
Occlusion handling
Haptic interaction
User evaluation

Abstract

Visuo-Haptic Mixed Reality (VHMR) is a branch of Mixed Reality (MR) that has been attracting growing interest in recent years. Its success is due to its ability to merge visual and tactile perceptions of both virtual and real objects in a co-located manner. Like any emerging technology, the development of VHMR systems is accompanied by challenges that, in this case, concern the effort to enhance multi-modal human perception with the user-computer interfaces and interaction devices currently available.

This paper deals with two typical problems of VHMR systems, namely device obtrusion and tool-hand misalignment, and proposes solutions whose effectiveness has been tested by means of user studies.

First, the paper analyzes the obtrusion problem and the benefits that users may gain when performing tasks in a mixed environment with unobstructed haptic feedback, achieved by means of a novel technique. Secondly, it investigates the effects of tool-hand misalignment on user perception and verifies the efficacy of a proposed misalignment correction technique by means of a comparative user test.

Experimental results show that users benefit from both the proposed unobstructed visuo-haptic approach and the misalignment compensation technique. These enhancements demonstrate the efficacy of the proposed solutions and, at the same time, strengthen the awareness that obtrusion and misalignment are fundamental issues to take into account in producing a realistic perception of a visuo-haptic mixed environment.

© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

Visuo-Haptic Mixed Reality (VHMR) consists of adding to a real scene the ability to see and touch virtual objects. It requires the use of see-through display technology for visually mixing real and virtual objects, and haptic devices to provide haptic stimuli to the user while interacting with the virtual objects.

Among others, VHMR has been widely adopted as a virtual prototyping tool, e.g., for the automotive industry (Ortega and Coquillart, 2005), allowing significantly shorter product development cycles, or as a tool for collaborative experiences in the digital entertainment field (Knoerlein et al., 2007) or in the context of sports (Miles et al., 2012). Furthermore, it is becoming highly popular in different domains of computer training applications, especially those applied to the medical field (Harders et al., 2007), in which it permits direct guidance and training of surgeons and therapists by means of interactive co-located real-time simulations (Pusch and Steinicke, 2012).

A VHMR setup allows the user to perceive visual and kinesthetic stimuli in a co-located manner, i.e., the user can see and touch virtual objects at the same spatial location. This setup overcomes the limits of the traditional one, i.e., a separate display and haptic device (Millet et al., 2013), because the visuo-haptic co-location of the user's hand and a virtual tool improves the sensory integration of multimodal cues and makes the interaction more natural. But it also comes with technological challenges that must be addressed to improve the naturalness of the perceptual experience.

These technological challenges are especially hard when the haptic interaction in the VHMR environment is tool mediated (Bordegoni et al., 2010; de Araújo et al., 2010) rather than direct (Asai et al., 2008). Although modern haptic devices (such as the wearable CyberGlove data glove and CyberGrasp exoskeleton, the point-source Phantom or Pantograph, and reverse-electrovibration haptics (Bau and Poupyrev, 2012)) have been used to simulate the sense of touch with virtual objects, such devices cannot generally provide a multifaceted sense of touch, such as a tactile sense over the whole hand. The complete sense of touch can be achieved through the adoption of physical mock-ups onto which the visual appearance of the product is superimposed, often with augmented reality techniques (Barbieri et al., 2012; Bruno et al., 2013; Aoyama and Kimishima, 2009). Nevertheless, these systems present the limitations that the objects can only be rigid and that a physical prototype must be manufactured.

We have focused our study on the specific area of point-based devices that are currently being developed by researchers in the haptics field. This class of devices is designed to perform a single type of task, a characteristic that is both a pro and a con: if, on one side, it restricts the application of a device to a much smaller number of functions, on the other side it allows the designer to make the device perform its task extremely well. This specific typology of haptic devices suffers from obtrusion and misalignment issues that complicate the correct integration of a virtual tool and the user's real hand in the mixed reality scene. As a consequence, both issues affect the naturalness of the user interaction and the sense of presence.

In particular, the obtrusion problem is due to the physical presence of haptic devices in the MR environment. These bulky devices occupy a large space in the visual region of interest, i.e., the location where the interaction is actually taking place, and thus become obstructive visual elements that negatively bear on the overall user perception and affect the performance of users' tasks.

The misalignment issue is related to the mechanical limitations of low-cost haptic devices such as the Phantom Omni. The low stiffness of the device cannot oppose the force exerted by the user, so the virtual probe stops against the virtual wall while the physical probe keeps moving according to the user's actions. This misalignment may restrain the ability to render the virtual objects up to their full dynamic range. In practice, commodity haptic devices are not always capable of rendering realistic kinesthetic feedback. In these cases we observe a lack of consistency in behavior between real physical objects and their digital virtual representation in the MR environment. This limitation produces an undesired misalignment of the virtual tool and the user's hand. From a perceptual point of view, during an interaction affected by misalignment there is no correspondence between visual and kinesthetic stimuli; this means that the user's brain is not able to merge these multiple sources of sensory information into a robust and coherent percept, to the detriment of the naturalness of the perceptual experience.

In order to overcome these problems we have developed a state-of-the-art simulation technique (Cosco et al., 2012) that adopts a computational camouflage solution based on image-space removal of the haptic device from the context scene (Cosco et al., 2009), and a simple technique that compensates for the misalignment by redrawing the user's hand with an artificial displacement (Cosco et al., 2012).

In this work we investigate and analyze the effects of device obtrusion and tool-hand misalignment on user performance in a VHMR environment, and we assess the effectiveness of the techniques developed ad hoc to overcome these issues. Furthermore, the paper aims to clarify and extend the experimental procedures, based on user testing campaigns, adopted for the assessment of the proposed novel techniques.

The rest of the paper is organized as follows. Visual obtrusion and misalignment issues introduced by commodity haptic devices are discussed in Sections 2 and 3, respectively. Section 4 describes the VHMR environment developed with the adoption of the proposed techniques. Section 5 provides an overview of the experimental approach adopted for their evaluation. Preliminary studies necessary to refine the whole approach are detailed in Section 6, and the considerations that emerged from these studies are outlined in Section 7. Section 8 describes the experiment focused on the obtrusion problem and points out the benefits of the proposed visuo-haptic approach. Section 9 details the second user campaign, run to test the efficacy of the proposed tool-hand misalignment strategy, and discusses its results. Finally, Section 10 summarizes the research.

2. Visual obtrusion in VHMR environment

The inclusion of haptic interaction in an MR scene requires the use of a haptic device, but most of these are bulky devices that occupy a large space in the visual region of interest, i.e., in the location where the interaction is actually taking place. Therefore, in a co-located VHMR setup, the haptic device becomes an obstructive visual element.

The importance of unobstructive haptic interaction has been addressed in the past, and the proposed answers relied on mechanical solutions that place the haptic actuators far from the region of interest using string-based haptic devices (Tarrin et al., 2003) such as the SPIDAR (Ishii and Sato, 1994). This solution makes it possible to place the actuators far from the region where manipulation and interaction are actually happening, transferring force and torque to the end effector using tensed strings. With sufficiently thin strings, the haptic device barely occludes the rest of the scene. For example, Ortega and Coquillart (2005) applied this visuo-haptic interaction paradigm in the context of an automotive virtual prototyping application.

Optical camouflage (Inami et al., 2003) has been promoted as an alternative to the mechanical solutions adopted for removing the visual obtrusion produced by haptic devices in MR scenes. These optical solutions are based on a retro-reflective paint and a head-mounted projector (Inami et al., 2000) that render the desired background images on the obstructive elements.

Our solution is a novel technique, described in Section 4, that can be interpreted as a computational camouflage approach. As mentioned in Section 1, in this paper we aim to test and evaluate the efficacy of this technique by means of user studies that focus on objective and subjective metrics, i.e., human performance in target acquisition tasks and subjective sense-of-presence questionnaires. It is worth pointing out that the scientific literature lacks user studies on this topic. Thus our work represents a first attempt to evaluate and measure the improvements in user perception that result from the implementation of an optical camouflage technique in VHMR environments.

3. Misalignment issue in VHMR environment

As stated in the previous sections, the core of our contribution addresses problems induced by the combination of visual and haptic display in the MR context, as VHMR introduces problems that are not present when visual and haptic displays are not co-located. For this reason, and taking into account that VHMR systems have only been investigated in recent years, at the moment there are only a few studies that address these kinds of problems, and even fewer final or efficient solutions. What is more, for some other kinds of problems the research is just at a very early stage.

This is the case of the misalignment issue, which complicates the correct integration of a virtual tool and the user's real hand in the MR scene because of the nature of the haptic interface. In particular, as shown in Fig. 1, commodity haptic devices suffer from mechanical limitations that may restrain the ability to render the virtual objects up to their full dynamic range. The figure is a composition of the VHMR environment as seen by the user with the real haptic device and the user's hand; the latter are rendered in transparency to allow readers to better comprehend the problem. Fig. 1a depicts a correct alignment between the virtual tool-hand and the real physical user's hand and the probe of the haptic device. Conversely, as shown in Fig. 1b, when the user continues pushing the virtual object and the haptic device reaches its limits due to mechanical collisions, an undesired misalignment of the virtual tool and the user's real hand is produced.

What we have observed in our studies is that, in a VHMR display with a commodity haptic device and a naive composition of the user's hand and the virtual tool, the misalignment issue affects stiffness perception. In particular, the stiffness perceived by a user while interacting with a virtual rigid probe will be affected by the visible displacement of the user's hand. In other words, despite a small displacement of the stiff object, the mechanical limitations of the device will lead to a larger displacement of the hand, and this will impair stiffness discrimination (Cosco et al., 2012). Starting from these observations, in order to overcome this problem we decided to develop an optical re-alignment of the user's hand. This solution was suggested to us by the scientific literature on oculo-manual coordination (Gauthier et al., 1988; Vercher et al., 1995), which concerns the human capability to simultaneously coordinate visual and tactile perception, and also by the dominance of vision over touch. A similar solution, applied in the field of pseudo-haptics, has been proposed to provide haptic-like sensations by dynamically offsetting the visual representation of the user's hand, creating a spatial conflict between the visual hand and the kinesthetic, or real, hand (Pusch et al., 2009).

In this paper we explore the influence of virtual tool misalignment on user perception and discuss the results of a test on stiffness discrimination in order to validate and verify the efficiency of our proposed solution. In particular, Sections 6.2 and 9 detail the subjective comparative test between aligned and misaligned handheld tools and the Weber fraction of stiffness perception with and without misalignment correction.

4. The proposed visuo-haptic mixed reality environment

VHMR systems can be achieved in several ways, and the most popular ones include workbenches with stereo projection systems (Brederson et al., 2008; Tarrin et al., 2003), mirror-based projection systems where the virtual image occludes the real scene (Stevenson et al., 1999), or see-through head-mounted displays (HMD) with head and device tracking (Bianchi et al., 2006).

The see-through capability can be accomplished using either an optical or a video see-through HMD. While an optical see-through system superimposes computer-generated imagery on the real-world view, usually through a slanted semi-transparent mirror, a video see-through system combines virtual objects with video feeds from cameras. The latter is able to fully cover the real world with digital images, and is therefore more adequate for performing the computational camouflage of the haptic device. For this reason we have adopted a video see-through HMD in all our tests.

In detail, as shown in Fig. 2, the hardware adopted was:

- a video see-through HMD: a Z800 3D visor from eMagin;
- a haptic device: the Phantom Omni from Sensable Technologies. The Phantom has a handle interface with an armature that provides up to six degrees of freedom to a single point, in input only and not in output;
- two external Flea2-08S2C cameras from Point Grey;
- a PC with an i7 CPU and 8 GB of RAM.

For the implementation of the proposed techniques the system needs two types of input data:

1. static data (intrinsic camera parameters, the markers' position and geometry, background scene images and their extrinsic camera parameters, a rough 3D geometry model of the haptic device, and the transformation between the local reference system of the haptic device and the global reference system), which can be acquired in a preprocessing step;
2. dynamic data (the extrinsic camera parameters and the configuration of the haptic device in its local frame), which are acquired at every frame.

We adopted MATLAB's Calibration Toolbox for preprocessing and, at run time, ARToolKit's marker-based tracking for estimating the current camera poses. View-dependent texture mapping has been implemented on the GPU, with simple blending shaders for the computation of the output. The shaders have been written in NVIDIA's Cg shading language.
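
To make the data flow concrete, the following minimal sketch shows how the calibrated quantities are typically combined each frame: the static intrinsics K and the per-frame extrinsics [R|t] project a vertex of the device's rough 3D model into the current image, which is the basis for rendering the device mask in screen space. All numeric values and the function name are placeholders for illustration, not the authors' implementation.

```python
import numpy as np

# Static data (preprocessing): intrinsic camera matrix from calibration
K = np.array([[800.0,   0.0, 320.0],    # fx, skew, cx
              [  0.0, 800.0, 240.0],    # fy, cy
              [  0.0,   0.0,   1.0]])

# Dynamic data (every frame): extrinsics estimated by marker-based tracking
R = np.eye(3)                           # rotation, world -> camera
t = np.array([0.05, 0.0, 0.6])          # translation (camera ~0.6 m away)

def project(p_world):
    """Pinhole projection of a world-space point to pixel coordinates."""
    p_cam = R @ p_world + t
    u, v, w = K @ p_cam
    return np.array([u / w, v / w])

# A vertex of the haptic device's approximate geometry, already mapped from
# the device's local frame into the global (marker) frame:
print(project(np.array([0.02, -0.01, 0.0])))
```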

In particular, the solution proposed to overcome the obtrusion problem is a state-of-the-art simulation technique that falls within the optical camouflage field, because it performs an image-space removal of the haptic device from the context scene. It can also be regarded as an example of diminished reality (Zokai et al., 2003), an approach already followed by Bayart et al. (2008) to visually remove the haptic device from a VHMR display.

Fig. 1. Tool-hand alignment (a) and misalignment (b) with the physical probe of the haptic device.

Fig. 2. VHMR framework.


Bayart used only one fixed camera, which simplifies the visual removal of the device but does not allow co-location of haptic and visual stimuli. The main computational aspects of our solution make use of image-based rendering (IBR) techniques; in particular, we follow Buehler's unstructured lumigraph rendering (Buehler et al., 2001) with a strong focus on view-dependent texture mapping (Debevec et al., 1996).

The proposed novel technique can be summarized in four steps:

- An image of the scene is captured with a camera at run-time (Fig. 3a). During this step the abovementioned dynamic data are acquired.
- The region of the image occluded by the haptic device is identified (Fig. 3b). Given the approximate geometric model of the haptic device, the transformation from the global to the local reference frame of the device, and the current configuration of the device, it is possible to blend the approximate geometric model onto the actual haptic device. The extrinsic camera parameters for the current frame complete the definition of the modelview and projection matrices, and we can render the approximate geometry of the device onto screen space.
- The image is processed to erase the haptic device, and the erased region is re-painted using an image-based rendering (IBR) algorithm (Fig. 3c). The scene light field is considered to be known for a set of irregularly distributed rays, and we use view-dependent texture mapping to interpolate light field data from those rays. More precisely, we compute camera blend weights for a discrete set of vertices on the occluded region, mesh the vertices to define a camera blend field over the occluded region, and compute the final image by blending the results of view-dependent texture mapping (see the sketch below). We use two types of input data in a combined manner: a set of prerecorded images of the background scene together with their associated camera positions, and a very rough geometric approximation of the background.
- The virtual objects are composited into the scene and the final result is displayed to the user (Fig. 3d).

This solution allows view-dependent co-located display of visual and haptic feedback, and allows the user to see his/her own hand. For a detailed description of each step, see Cosco et al. (2012).
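
As an illustration of the blending step, the sketch below computes angle-based camera blend weights for one vertex of the occluded region, in the spirit of unstructured lumigraph rendering: prerecorded cameras whose ray to the vertex is angularly closest to the novel view's ray dominate. The camera positions, the value of k and the cutoff policy are assumptions for the example, not the exact weighting used by the authors.

```python
import numpy as np

def blend_weights(vertex, novel_cam, source_cams, k=2):
    """Angle-based blending: source cameras whose ray to `vertex` is closest
    in angle to the novel view's ray get the largest weights; the angle of
    the (k+1)-th best camera acts as a cutoff so weights fall to zero
    smoothly (requires len(source_cams) > k)."""
    def ray(origin):
        d = vertex - origin
        return d / np.linalg.norm(d)
    r_novel = ray(novel_cam)
    ang = np.array([np.arccos(np.clip(ray(c) @ r_novel, -1.0, 1.0))
                    for c in source_cams])
    order = np.argsort(ang)
    cutoff = ang[order[k]]                    # angle of the (k+1)-th best camera
    w = np.zeros(len(source_cams))
    w[order[:k]] = 1.0 - ang[order[:k]] / max(cutoff, 1e-9)
    return w / w.sum()

# Toy usage: one occluded-region vertex, three prerecorded background cameras
vertex = np.array([0.0, 0.0, 1.0])
cams = [np.array([0.3, 0.0, 0.0]),
        np.array([-0.1, 0.1, 0.0]),
        np.array([0.6, 0.5, 0.0])]
print(blend_weights(vertex, novel_cam=np.array([0.0, 0.05, 0.0]), source_cams=cams))
```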

Regarding the misalignment issue, as stated in Section 3, we propose to compensate for the misalignment by redrawing the user's hand with an artificial displacement. To this end we have developed a state-of-the-art simulation technique, extensively detailed in Cosco et al. (2012), that allows a dynamic manipulation of the user's perception and action during the interaction with multimodal augmented real environments. In particular, we adopted a constrained-dynamics algorithm for solving deformation and contact of rigid and deformable objects (Otaduy et al., 2009), and a multi-rate haptic rendering algorithm for manipulating rigid handles (Garre and Otaduy, 2009). To enable contact between real and virtual objects, for each real object we simply modeled a virtual counterpart in our simulation algorithm. Currently, our solution is limited to interaction with static real objects. Nevertheless, articles reporting other VHMR solutions with dynamic virtual objects and static real ones can be found in the literature (Aleotti et al., 2010).

In order to compose the final render of the scene (Fig. 4c), we first perform a semantic labeling to identify the distinct regions in screen space (Fig. 4a), i.e., haptic device, user's hand and grasping volume. For the misalignment correction it is necessary to compute a hand region mask (Fig. 4b) that allows the user's hand to be recognized by means of a color detection method. This region is transformed in image space and the resulting void is repainted with our IBR approach. By letting the user wear a monochrome glove, we increase the quality of hand segmentation over pure skin-color detection.
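
A minimal sketch of this image-space re-alignment is given below, assuming the glove is segmented by a simple color threshold and the correction displacement is already known in pixels; in the real system the displacement comes from the simulation state, and the vacated pixels are repainted with the IBR background. Names, thresholds and the toy frame are illustrative.

```python
import numpy as np

def realign_hand(frame, lower, upper, dx, dy):
    """Detect the glove by color, translate its pixels by (dx, dy), and return
    the composited image plus the vacated region to hand to the IBR repaint."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)  # hand region mask
    out, hole = frame.copy(), mask.copy()
    ys, xs = np.nonzero(mask)
    ty, tx = ys + dy, xs + dx                                    # shifted hand pixels
    ok = (ty >= 0) & (ty < frame.shape[0]) & (tx >= 0) & (tx < frame.shape[1])
    out[ty[ok], tx[ok]] = frame[ys[ok], xs[ok]]
    hole[ty[ok], tx[ok]] = False         # still covered by the shifted hand
    return out, hole                     # `hole` is repainted with the IBR background

# Toy usage: an 8x8 frame with a 2x2 green blob standing in for the glove
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[2:4, 2:4] = (0, 200, 0)
shifted, hole = realign_hand(img, lower=(0, 150, 0), upper=(60, 255, 60), dx=2, dy=0)
print(hole.astype(int))
```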

5. Procedures and subjects

As stated in Section 1, our primary goal was to verify the effectiveness of our techniques by investigating and analyzing the effects of device obtrusion and tool-hand misalignment on the realism of the interaction in a VHMR environment.

Fig. 3. Image of a VHMR scene where the haptic device produces visual obtrusion of the background (a); visual removal of the haptic device (b); background completion based on IBR (c); and final composite scene (d).

Fig. 4. Current camera view of the scene (a); hand region mask (b); composition of the hand obtained by repainting the hand mask using a correction displacement (c).


To achieve this objective, we carried out user testing campaigns specifically designed to evaluate perceptive factors. In particular, the main goal of the testing is:

- to clarify the impact of visual obtrusion and misalignment correction on user perception in the mixed scene, and hence on his/her performance during interaction while completing a simple visuo-haptic task.

Since VHMR environments should allow a seamless manipulation of both real and virtual objects, we decided to focus our experiments on manipulation metaphors where the user holds a virtual tool to interact with the MR content. For a natural interaction the user should be able to see his/her own hand holding the virtual tool, and all the objects, both virtual and real, should satisfy physical laws, both visually and haptically.

The following figure (Fig. 5) depicts the approach. Each phase will be extensively detailed in Sections 6–9.

We adopted similar testing procedures for the evaluation of the computational camouflage technique (Test A) and of the compensative misalignment technique (Test B). In particular, the proposed approach (Fig. 5) establishes that participants have to undergo a preliminary test before carrying out the experimental session.

In both the preliminary and experimental sessions, participants began the experiment by completing a background questionnaire (phases 1 and 5). Tutors provided the participants with a brief demonstration of how to wear the HMD and use the AR devices (usually lasting about 3 min), but made no mention of the haptic device (participants were not aware of its existence). This caution was taken in order to avoid any bias in the comparison of the different scenarios or any influence on the user's perception while performing tasks.

It is important to highlight that the subdivision of the procedure into two different sessions, i.e., preliminary and experimental, was designed to allow us, during the experimental test design (phase 4), to elaborate and infer from the data gathered in phase 3 the additional information necessary for a correct planning of the experimental phases and for the selection of the most appropriate performance measures (Boehm-Davis and Holt, 2004).

5.1. Subjects

For the preliminary test we selected 30 volunteers, 18 male and 12 female, with ages ranging from 20 to 32 (mean 23, standard deviation 4). The subjects were separated into two groups (G1 and G2), homogeneous in visual acuity, age and gender, in order to perform the comparative studies for Tests A and B separately.

For the experimental test we recruited 30 different people with ages ranging from 20 to 36 (mean 24, standard deviation 5). These subjects were also separated into two homogeneous groups (G3 and G4), and each group was assigned a specific test, i.e., Test A or Test B.

Fig. 5. Procedure for the evaluation of the proposed techniques.


Therefore the first group performed tasks in both scenarios 1 and 2, while the second group was involved in scenarios 3 and 4.

All of the subjects involved in the preliminary test were engineering undergraduate students, while for the experimental test 10 of them were graduate students. All of the 60 subjects were right-handed and completely naive to MR systems and haptic devices. The number of participants was chosen according to the most influential articles on the topic of test sample size (Lewis, 1994; Virzi, 1992).

6. Preliminary studies

The preliminary studies aim to verify and query an initial hypothesis formulated for each problem, i.e., the visual obtrusion and misalignment issues. In particular, our hypotheses were:

- for Test A: "visual obtrusion removal improves the perceived naturalness of interaction";
- for Test B: "misalignment correction improves the perceived naturalness of interaction".

In order to quantify the impact and the validity of both the stated hypotheses, we prepared two different comparative tests where subjects were asked to execute some trials comparing two scenarios before filling in a questionnaire. These tests aimed to investigate the way in which the proposed visual obtrusion removal and misalignment correction techniques improve the realism of the interaction in an MR environment and affect subjective perceptions.

In the first phase, i.e., phase 2 in Fig. 5, the subjects of one group compared obstructed (scenario 1) and unobstructed (scenario 2) MR environments, whereas those of the other group were engaged in an experiment focusing on the impact of the tool-hand misalignment correction, which consists of the comparison between scenarios 3 and 4.

The participants did not have to perform specific tasks, nor was there a time limit. The applications used, both for Test A and Test B, automatically switched the camouflage (from scenario 1 to scenario 2) or the misalignment correction technique (from scenario 3 to scenario 4) on and off every 20 s.

In particular, for Test A, users handled a virtual stick, superimposed on the real physical probe of the haptic device, and played with a virtual ball, pushing it from one virtual pocket to another. The two scenarios, i.e., scenarios 1 and 2, differ only in the visual presence or camouflage of the haptic device. For Test B, users handled a virtual stick to move and touch a virtual prism.

Thus, for the sake of clarity, there were two groups of 15 persons each, comparing two different scenarios:

- for Test A: an MR environment with visual obstruction and an MR environment with the computational camouflage technique;
- for Test B: an MR environment without misalignment correction and an MR environment with the misalignment correction technique.

Before performing the comparative test, the subjects were informed about the goal of the test through a printed copy of the following instructions, translated into their native language:

"Try to find any differences. You are going to be immersed in a Virtual Reality environment where you will be able to compare two different interaction modalities, Scenario 1 and Scenario 2 (or Scenario 3 and Scenario 4).

These modalities will be alternately enabled every 20 s, but you will be free to ask us to anticipate the switch whenever you want. During the whole experiment, a superimposed text (on the left for Scenario 1 or Scenario 3, on the right for Scenario 2 or Scenario 4) will remind you which modality you are using.

Please note that after the test you will be given a questionnaire about your sensations. In particular, we kindly invite you to stay focused on finding any kind of difference between the environments. Keep in mind that the test will end when you are sure to have found them."

Note that we asked each subject to focus on their perceptions in order to find any difference between the two alternative scenarios. However, it is worth noting that no details at all about the techniques used were mentioned, so as not to influence their perceptions. After the subjects confirmed they were sufficiently concentrated, they were conducted into the experimental environment and completed their testing experience without time limits. As mentioned before, once phase 2 was concluded, the preliminary studies terminated with a satisfaction questionnaire (Table 1) aimed at evaluating their experience from a subjective point of view.

6.1. Results of the preliminary study of Test A

The following chart (Fig. 6) shows the results of the subjective user questionnaire after exposure to Scenario 1 and Scenario 2.

Among the three different options (Scenario 1, Scenario 2 and no preference) there is a statistically significant difference, as determined by a one-way ANOVA (F(2,6) = 66.545, p < 0.001). This means that, from a subjective point of view, the participants perceived the two scenarios differently. In particular, they considered the MR environment with computational camouflage more immersive and natural. Moreover, the visual presence of the haptic device requires more concentration in order to interact with the virtual objects.
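
For reproducibility, the following snippet illustrates the structure of such a test with SciPy; the per-question answer counts are invented placeholders, and only the 3-options by 3-questions layout, which yields the reported F(2,6) degrees of freedom, mirrors the paper.

```python
from scipy.stats import f_oneway

# Illustrative counts of answers to q.1-q.3 for each option (not the real data)
scenario_1    = [1, 2, 1]
scenario_2    = [12, 11, 13]
no_preference = [2, 2, 1]

F, p = f_oneway(scenario_1, scenario_2, no_preference)
print(f"F(2,6) = {F:.3f}, p = {p:.4f}")   # 3 groups, 9 observations -> df (2, 6)
```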

The results of the preliminary test on the visual obtrusion issue confirm and strengthen the initial hypothesis: visual obtrusion removal improves the perceived naturalness of interaction. Furthermore, the results show that when the haptic device is visible the users recognize it as a real physical object that, in most applications, can affect their interaction.

Apart from the tests, we also interviewed each participant in order to better understand their personal opinions and, furthermore, to gain information and suggestions that proved highly important in the design of the following test.

Table 1
Subjective questionnaire used to compare two variants of our Visuo-Haptic Mixed Reality environment.

     Questions                                                                                              Answer type
q.1  Which is the Scenario more immersive?                                                                  Multi-choice
q.2  Which is the Scenario where you've experienced more difficulties when performing the assigned tasks?  Multi-choice
q.3  Which is the Scenario more natural?                                                                    Multi-choice
q.4  Describe all differences you perceived during experiments.                                             Open text


Most of the participants affirmed that in Scenario 2, where the haptic device was not visible, "the interaction with the (virtual) objects is easier and their behavior is more natural"; a few of the users believed that, in the same scenario, the "haptic perceptions are more realistic". A frequent opinion among the subjects, and an unexpected one for us, was that interaction with the haptic device is more difficult when the device is visible. This opinion is probably due to the fact that the visual obtrusion of the haptic device creates areas where the user cannot see the virtual objects and only has tactile feedback of their presence. In such a situation, a person finds it more difficult to sense the position and shape of the objects, because human haptic resolution is worse than that of sight, a well-known phenomenon in the haptics literature (Klatzky et al., 1987). This limitation of the haptic sense may have repercussions on the identification and interpretation of sensory information that could affect the usability of the haptic device. Indeed, the prevalent opinions about the haptic device were that, with the computational camouflage, "the tool, necessary to interact with the objects, is easier to use and its feedbacks are faster", while with the visual obtrusion "there are some areas where the interaction with the virtual objects is really tricky".

6.2. Results of the preliminary study of Test B

The preliminary study on the misalignment issue is based on an experiment on somatosensory perception. The experiment consists of a haptic object exploration procedure (Lederman and Klatzky, 2009, 1987) which aims to identify and recognize one or more properties of an object. This kind of procedure can be described as an "act to perceive", as named by Wolfe (2007). In this specific case the preliminary study aims to gather the subjective opinions of the participants who will take part in the experiment, in order to strengthen the effectiveness of the procedure adopted in the experimental session (Section 7).

The following diagram (Fig. 7) shows the results of the subjective questionnaire.

The results of a one-way ANOVA failed to reveal a significant difference (F(2,6) = 7.58; p = 0.023) among the three choices (Scenario 3, Scenario 4 and no preference). This means that the subjects did not perceive a significant difference between the two scenarios and, as a consequence, that the implementation of the misalignment correction technique is not able to produce a conscious improvement of the naturalness.

Also for this group, after filling in the subjective questionnaire, we interviewed each participant. In this case, even if most users were not able to distinguish at a conscious level the subtle differences in their perception, a few of them reported differences in force perception, in friction, or even in the weight of the objects.

Considering that the technique under consideration pertains to vision, and that we obtained some alteration in the haptic domain, we could reasonably look for an explanation in the field of multisensory visuo-haptic perception. For this kind of perception, the final interpretation of the sensory information comes out of a weighted average that assigns a major value to the vision component because, as is well known, vision dominates over touch.

In detail, during the test there are different kinds of stimuli that contribute to the final user perception. On the one hand, we have the haptic sensory activations. The pressure exerted by the user on the surfaces of the objects causes reactive forces that stimulate the tactile mechanoreceptors, which allow the stiffness and flexibility of the object to be perceived. At the same time, the deformation of the object's surface induced by the user interaction, and hence the haptic feedback, stimulates the kinesthetic receptors that allow the user to locate the object's position and quantify its deformation. On the other hand, vision plays a dominant role, which can distort the interpretation of the haptic sensations, especially if competing information reaches the brain from different sensory channels. Even if the forces rendered by the device are exactly the same, vision has the power to alter the perception. In fact, even if there is no physical contact at all, by means of vision it is possible to perceive the deformation of a flat surface and, more generally, all the geometric characteristics of objects (Klatzky et al., 1993). Differently from those characteristics of objects that can be recognized by haptic perception alone (e.g., temperature, roughness, etc.), stiffness is one of the major multisensory percepts that combine vision and touch. Therefore, we discarded the previous hypothesis in favor of the following one: misalignment correction improves stiffness perception.

In conclusion, the preliminary test encouraged us to focus the experimental session on gathering more detailed information and verifying whether, and to what extent, misalignment correction techniques may change the users' stiffness perception.

7. Experimental test design

As mentioned earlier, the design of the experimental test was carried out downstream of the first stage and only after analyzing all the gathered information.

In particular, the preliminary study on visual obtrusion qualitatively attested an improvement in the naturalness of the interaction in VHMR environments when the haptic device is camouflaged. As a consequence, we designed the experiment to assess to what extent this subjectively better interaction may impact user performance. In particular, we executed tests analyzing human performance in target acquisition tasks while interacting with virtual objects by means of a haptic device; more specifically, we measured the time for completing a simple visuo-haptic interactive task in scenarios 1 and 2.

In a similar way, we designed experimental Test B after studying the gathered preliminary results.

Fig. 6. Subjective questionnaire results of preliminary study of Test A.


In this case, the measure of the time or of the errors needed to complete a task is not significant (a one-way ANOVA failed to reveal a significant difference between the two groups in scenarios 3 and 4): the misalignment correction, in fact, does not improve the perceived naturalness of the interaction in the VHMR setup but affects only the perception of the virtual objects' stiffness.

On the basis of these considerations, we decided to objectively evaluate the misalignment correction technique by investigating stiffness perception, starting from two initial hypotheses. In detail, we hypothesize that:

1. in a VHMR display with a commodity haptic device and a naive composition of the user's hand and the virtual tool, the stiffness perceived by a user while interacting with a stiff virtual object will be affected by the visible displacement of the user's hand. In other words, despite a small displacement of the stiff object, the mechanical limitations of the device will lead to a larger displacement of the hand, and this will impair stiffness discrimination;
2. our misalignment correction technique will reduce the visible displacement of the hand when interacting with a stiff virtual object, thereby improving stiffness discrimination.

Starting from these hypotheses, we designed an experimental test able to quantify the effects of the proposed misalignment correction by measuring and comparing the stiffness discrimination ability of each user with and without misalignment correction. In the following sections, we describe the implementation and results of both experimental tests.

In order to minimize the influence of preconceived ideas due to the interpretation of the sensory information acquired in the first scenario, and in order to minimize skill transfer between the two scenarios, preventive measures were adopted:

1. an interval of at least 15 min between the two scenarios for each user;
2. reversing the order of execution of the two scenarios between two subsequent users.

8. Test to measure the influence of visual obtrusion on the interaction in an MR context

The following figure (Fig. 8) shows the scenarios adopted for evaluating the impact of our unobstructed display on user performance, with our computational camouflage technique (Fig. 8b) and without it (Fig. 8a). The target acquisition task was very simple: the subjects had to move the virtual red ball between the blue and the green pockets using a virtual stick.

In particular, each participant was asked to move the ball back and forth between the two pockets, as quickly as possible, for a total of 6 times. Once the ball is detected to be inside a pocket, the system emits a sound, and the user may move the ball to the other pocket. Before starting the experiment, all participants spent 5 min interacting with a similar VHMR scenario to become familiar with the interaction metaphor.

After exposure to the MR environment the participants completed a questionnaire composed, for the most part, of questions taken from the standardized presence questionnaires discussed by Azuma (1997) and, for the remaining part, of questions focusing on contextual factors related to the specific experiment. Other presence surveys could also be adopted for this purpose, such as the SUS (Slater and Usoh, 1993) and WS (Witmer and Singer, 1998) questionnaires.

8.1. Results and discussions

The following diagram shows the results of the presence questionnaire submitted to the participants.

The results shown in Fig. 9 have been submitted to paired t-test analysis, whose outcomes are grouped in Table 2.

Fig. 7. Subjective questionnaire results of preliminary study of Test B.

Fig. 8. Scenario 1 with visual obtrusion (a); Scenario 2 with computational camouflage (b).


What emerges is that, with the exception of question 8, there is no statistically significant difference in the sense of presence between the two scenarios. For question 8, a paired t-test analysis (t(14) = 2.39, p = 0.03) reveals a statistically reliable difference, with people reporting a sensation of faster operation with the camouflaged haptic device.
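
The test structure for question 8 can be sketched as follows; the fifteen paired ratings are synthetic placeholders, and only the pairing (each subject rated both scenarios, giving df = 14) follows the paper.

```python
import numpy as np
from scipy.stats import ttest_rel

obstructed  = np.array([4, 5, 3, 4, 4, 5, 3, 4, 5, 4, 3, 4, 5, 4, 4])  # Scenario 1
camouflaged = np.array([5, 5, 4, 5, 4, 6, 4, 5, 5, 5, 4, 5, 6, 4, 5])  # Scenario 2

t, p = ttest_rel(camouflaged, obstructed)   # paired test: df = n - 1 = 14
print(f"t(14) = {t:.2f}, p = {p:.3f}")
```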

The similarity of the sense-of-presence results between the two scenarios could be justified by the fact that, as some researchers affirm, during visual perception the brain reconstructs images on the basis of a limited quantity of sensory information. In fact, the brain tends to screen out those stimuli that are not important for the accomplishment of the required task. This selective mechanism is related to the limits of peripheral visual acuity (Posner, 1980; Tsotsos, 1990); indeed, a person cannot focus his/her attention on all the details of an image at the same time. Therefore, since user attention was focused on the three virtual objects during the whole experiment, all the other objects in the scene were necessarily ignored. This limitation is imposed by the processing mechanism of the human brain, which can elaborate only a limited amount of the information in the retinal image. In brief, the brain pays attention to some stimuli while ignoring others.

Based on these considerations, in our experiment the images of the haptic device, and hence its visual perception, do not have a fundamental function in the correct execution of the tasks and, as confirmed by the tests, they are less important compared to other visual stimuli.

A secondary explanation of the questionnaire's results is that the sense of immersion in the developed VHMR environment is high, and the implementation of the virtual camouflage technique alone is not enough for a detectable improvement of the sense of "being there". Indeed, some users from the group with computational camouflage mentioned that they could perceive the presence of some type of device, but they could not describe the nature or characteristics of the apparatus.

In contrast with the results that emerged from the subjective measurements, the analysis of human performance in target acquisition tasks demonstrates notable differences between the scenarios.

The following graphs (Fig. 10) show the average task completion times, separately for moving the ball left to right and vice versa. In particular, they show the average time to complete the 6 subsequent trials of the task, as specified in the previous section, independently for the group that suffered visual obtrusion (in red) and for the group that used computational camouflage (in green).

Starting from the graphs shown in Fig. 10, we conducted four different ANOVA analyses comparing the average time registered by each group for each trial in the four conditions (scenario 1 left-right, scenario 1 right-left, scenario 2 left-right, scenario 2 right-left). For each analysis, we observe a statistically significant difference (p < 0.001) between the mean times of the six trials.

Further analysis (a Tukey post-hoc test on the one-way ANOVA results) revealed that only the time to complete the first trial is significantly higher, with a statistically significant difference when compared to the other trials, whereas there are no statistically significant differences when the second trial is compared with the others, and so on (Table 3). The post-hoc test clearly demonstrates the presence of a training effect that affects the first trial of each group.
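
A hedged sketch of this analysis chain with synthetic data (a slow first trial and stable later trials, mimicking the pattern in Table 3) is shown below, using statsmodels' Tukey HSD implementation:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Synthetic completion times: 15 subjects x 6 trials, trial 1 slower (training)
times = np.concatenate([rng.normal(14, 2, 15)] +
                       [rng.normal(9, 2, 15) for _ in range(5)])
trial = np.repeat(np.arange(1, 7), 15)

print(pairwise_tukeyhsd(times, trial))   # trial 1 vs 2-6 significant, rest not
```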

To complete the analysis of the results, we carried out two further analyses: one comparing the average times of the two groups, and one comparing the average times of the two exercises, "left to right" and "right to left".

For the first case, an independent-samples t-test reveals a statistically reliable difference between the mean task completion times of the participants in Scenario 1 and Scenario 2.

Fig. 9. Presence questionnaire questions (left) and their results (right).

Table 2
Paired-samples t-tests of the presence questionnaire's results.

Questions  q1    q2    q3    q4    q5    q6    q7    q8    q9    q10   q11
Sig.       0.10  0.44  0.75  0.42  0.16  0.67  0.90  0.03  0.80  0.82  0.46


In detail, the mean completion times were: Scenario 1, left to right M = 10.7 s, s = 5.1; right to left M = 8.9 s, s = 4.3; Scenario 2, left to right M = 7.6 s, s = 3.4; right to left M = 6.9 s, s = 2.8. In particular:

- G1 vs G2, left to right: t(148) = 4.45, p < 0.001;
- G1 vs G2, right to left: t(148) = 3.34, p < 0.001;

so we can affirm that there is a reliable significant difference between Scenario 1 (without computational camouflage) and Scenario 2 (with computational camouflage).

For the second case, paired-samples t-tests show that for Scenario 2 there is insufficient evidence (t(74) = 1.39, p = 0.166) to conclude that the paths "left to right" and "right to left" are different. On the other hand, there is a statistically significant difference (t(74) = 3.69, p < 0.001) between the two paths performed by users in Scenario 1. We consider that this difference is due to the relative position of the haptic device, the virtual objects and the user's hand. In fact, when the user moves the virtual ball from left to right, the tool and the user's own hand obstruct the view of the virtual object (see Fig. 8).

The apparent contradiction arising from the results obtained with the presence questionnaire (no difference between the two scenarios) and with the measured task completion times (the difference exists and is very significant) is easily explained if we analyze the path length. A rough analysis of the trajectories (Fig. 11) clearly indicates that, when carrying out the test with computational camouflage, users stay closer to a straight path (red lines), while when carrying out the test with visual obtrusion, users tend to take a curved path (blue lines) around the haptic device.

It is important to note that, following a straight path, the device never reaches its mechanical limits and the handle never collides with the body of the device, whereas the virtual stick, being longer than the real handle, visually collides with the body of the device. Thus, although a real contact is impossible, the physical presence of the haptic device influences user behavior: even if the users do not consciously perceive the difference between scenarios 1 and 2 (see questionnaire), they deflect their trajectories as if the device were part of the mixed scene, even though no collision of the virtual object with the device was possible. During all the experiments, this behavior was adopted even after accidental visual collisions of the ball with the top of the device.

In conclusion, the results clearly indicate that the visual obtrusion produced by a haptic device has a negative impact on task performance, and our computational camouflage approach succeeds in improving it. Users who see the device may unconsciously consider it part of the scene and may not be able to ignore it while carrying out their tasks. With computational camouflage, on the other hand, users naturally ignore the haptic device.

9. Experimental test on the evaluation of the misalignment correction

To test our hypotheses (see Section 7), we designed a user study based on a stiffness discrimination procedure following a protocol similar to that in Karadogan et al. (2010). The goal of the stiffness discrimination is to provide a measure of the Weber Fraction (WF) of the stiffness discrimination of each user, with and without the misalignment correction.

In particular, for each scenario under analysis, each participant must repeatedly touch two virtual deformable objects using a virtual stick (Fig. 12), and then select the stiffer one on every trial. Randomly for every trial, one of the two objects has a fixed nominal stiffness, whereas the stiffness of the other object is automatically adjusted, so that when the convergence criterion is reached the ultimate stiffness difference is stored as the Just Noticeable Difference (JND).

Refer to Karadogan et al. (2010) for the exact stiffness adaptation algorithm; in essence, it is based on two concurrent rules. The WALD rule reduces the stiffness difference when the user makes several correct selections in a row and increases it when the user makes several wrong selections. The PEST rule adjusts the extent of each variation whenever the WALD rule is met.

The convergence criterion was fixed at reaching the seventh deeper level of PEST.

Fig. 10. Average completion times of the tasks performed left to right (left) and right to left (right). (For interpretation of the references to color in this figure, the reader is referred to the web version of this article.)

Table 3
Tukey post-hoc test results.

            Left to right                 Right to left
            (I) trial  (J) trial  Sig.    (I) trial  (J) trial  Sig.
Scenario 1  1          2          0.007   1          2          0.014
                       3          0.001              3          0.014
                       4          <0.001             4          0.003
                       5          0.001              5          0.001
                       6          0.001              6          <0.001
Scenario 2  1          2          <0.001  1          2          <0.001
                       3          <0.001             3          <0.001
                       4          <0.001             4          <0.001
                       5          <0.001             5          <0.001
                       6          <0.001             6          <0.001


For our test cases, starting from an initial value of 1200 N/m for the stiffness difference and an initial step of 200 N/m, the convergence criterion was reached when the PEST rule made the step size fall below 3.125 N/m (= 200/2^6). Upon convergence, the stiffness difference between the two objects constitutes the JND.
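
The following runnable sketch condenses the staircase just described: a WALD-like run rule sets the direction of the stiffness-difference adjustment, and a PEST-like rule halves the step at every direction reversal until the step drops below 3.125 N/m. The run length of 3, the simulated observer, and the simplified PEST (halving only) are assumptions; refer to Karadogan et al. (2010) for the exact rules.

```python
import random

NOMINAL = 1400.0                      # N/m, nominal stiffness of the reference prism
STEP0, FLOOR = 200.0, 200.0 / 2**6    # initial step and 3.125 N/m threshold

def staircase(p_correct_at, seed=1):
    rng = random.Random(seed)
    diff, step, direction = 1200.0, STEP0, -1   # start from a large difference
    run, last = 0, None
    while step >= FLOOR:
        correct = rng.random() < p_correct_at(diff)   # simulated user's answer
        run = run + 1 if correct == last else 1
        last = correct
        if run == 3:                          # WALD-like rule: 3 identical answers
            new_dir = -1 if correct else +1   # correct -> harder, wrong -> easier
            if new_dir != direction:
                step /= 2                     # PEST-like rule: halve step on reversal
            direction, run = new_dir, 0
            diff = max(0.0, diff + direction * step)
    return diff                               # JND upon convergence

# Observer whose accuracy decays toward chance as the difference shrinks
jnd = staircase(lambda d: min(0.99, 0.5 + d / 1200.0))
print(f"JND = {jnd:.1f} N/m, WF = {jnd / NOMINAL:.2f}")
```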

We simulated the deformable objects using a linear co-rotational elastic model discretized with finite elements (Muller and Gross, 2004), and we selected the nominal Young's modulus such that the prism has a nominal linear stiffness of 1400 N/m at the top.

Based on this stiffness discrimination experiment, we designed a user study where, as depicted in Fig. 12, users were instructed to evaluate the stiffness of two virtual flexible prisms by means of a virtual rigid probe augmented on the physical stick of the haptic device. This evaluation was performed by the users applying pressure on the flat surfaces of the virtual objects along their normal direction. In particular, the metrics adopted in the user study were the abovementioned JND and the Weber Fraction (WF), which is defined as "the ratio of the minimum difference that a person can discriminate to the standard intensity of the stimulus in a sensory modality" (De Gersem, 2005; Tan et al., 1995; Jones and Hunter, 1990).
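
Restating the cited definition in symbols (not an additional formula from the paper), with ΔI the JND and I the standard stimulus intensity:

```latex
\mathrm{WF} \;=\; \frac{\Delta I}{I} \;=\; \frac{\mathrm{JND}}{k_{\mathrm{nominal}}},
\qquad k_{\mathrm{nominal}} = 1400~\mathrm{N/m} \text{ in this study.}
```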

As mentioned in Section 5, all participants performed the experiment both with (Scenario 4) and without misalignment correction (Scenario 3), as specified in points 1 and 2 of Section 7. Prior to the experiment, all participants spent 5 min interacting with a simple VHMR scenario to become familiar with the interaction metaphor.

9.1. Results and discussions

The histogram in Fig. 13 shows the WF values of each user while performing the adaptive two-alternative stiffness choice procedure in Scenario 3 (without misalignment correction) and Scenario 4 (with misalignment correction).

A paired-samples t-test reveals a statistically significant decrease (t(9) = -4.86; p = 0.001) of the average experimental WF from 0.42 ± 0.09 to 0.33 ± 0.09 thanks to the adoption of the misalignment correction technique, and a post-hoc power analysis confirms a sufficiently large test group (1 - β = 0.98). These results show that subjects performing the task with the proposed technique can reliably detect a 33% change in stiffness; without misalignment correction, the WF rises to 42%, which corresponds to a lower stiffness discrimination ability.
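The reported comparison is a standard paired-samples t-test on the per-user WF values; the sketch below reproduces the analysis pipeline with placeholder data (the arrays are illustrative, not the study's measurements):

    # Paired-samples t-test on per-user Weber fractions, as in the analysis
    # above. The arrays are placeholders, NOT the study's data.
    import numpy as np
    from scipy import stats

    wf_scenario3 = np.array([0.55, 0.38, 0.47, 0.36, 0.52, 0.33, 0.41, 0.44, 0.39, 0.35])
    wf_scenario4 = np.array([0.42, 0.28, 0.35, 0.30, 0.44, 0.25, 0.31, 0.36, 0.29, 0.30])

    # The same ten users experienced both scenarios, hence a dependent test.
    t_stat, p_value = stats.ttest_rel(wf_scenario3, wf_scenario4)
    print(f"t({len(wf_scenario4) - 1}) = {t_stat:.2f}, p = {p_value:.4f}")

    # Cohen's d for paired data, the effect size a post-hoc power analysis
    # (such as the reported 1 - beta = 0.98) would start from.
    d = wf_scenario3 - wf_scenario4
    print(f"Cohen's d = {d.mean() / d.std(ddof=1):.2f}")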

These values, detected in our experiments, are comparable with those shown by Howell et al. (2008). In the area of haptics, a number of studies report different Weber Fractions for stiffness: between 0.08 and 0.12 (De Gersem, 2005), 0.22 (Tan et al., 1995), 0.23 (Jones and Hunter, 1990) and, indeed, 0.40 (Howell et al., 2008). These differences, found in the literature, are due to various factors, such as the presence of tactile and/or kinesthetic feedback, visual feedback, the stiffness value of the surfaces, and the speed of exploration (Karadogan et al., 2010).

Rather than reporting the results of every participant, some considerations can be drawn from the graphs in Fig. 14, which report the outcomes of a single user; owing to their similarity with the results of the other participants, these outcomes can be considered representative. In particular, the first graph plots the evolution across trials of the stiffness difference between the two prisms (Fig. 14a), while the second plots the time of choice on each trial (Fig. 14b).

Regarding the first graph, there is a statistically significant difference between the two scenarios, as determined by a paired t-test (t(43) = 15.22; p < 0.0005). This means that compensating the misalignment error reduces the differential threshold (JND), i.e., the smallest increase in a stimulus that can be perceived. Taking into account the results of all the participants, we observed a significant reduction of the average JND from 609 ± 140 N/m to 496 ± 172 N/m, as demonstrated by a dependent t-test (t(9) = 3.1; p = 0.013). Furthermore, the adoption of the technique allowed most of the participants (67%) to reach convergence in fewer iterations.

The second graph (Fig. 14b) shows the time the user needed to make a decision on each trial in Scenario 3 and Scenario 4. A dependent t-test with t(43) = 3.31 and p = 0.02 demonstrates a reduction of the average time required for each trial from 13.6 ± 6.8 s to 10.2 ± 6.7 s.

These differences suggest that subjects interacting in the VHMR environment without misalignment correction need more time to reach a decision about stiffness discrimination, which makes it even more difficult for them to concentrate on the task owing to frustration and tiredness.

Thanks to these analyses, we can affirm that tool-hand misalignments may have a negative impact on stiffness discrimination, and that the proposed technique significantly decreases JND and WF values, which corresponds to a more natural and realistic interpretation of sensory information. As stated in Section 3, in fact, our solution consists in an optical displacement of the user's hand that has repercussions on oculo-manual coordination during multisensory perception. We deliberately introduced this perceptual inconsistency in order to take advantage of the prominence of vision over touch. In the presence of a multimodal sensory interaction, two different situations can occur: the various sensory inputs either compete with each other or merge into a unique global percept. Generally, when our different senses receive correlated information about the same external objects or events and this information is combined in our brains to yield a multimodal percept, as in the case of the proposed experiment, the result is a crossmodal integration (Spence et al., 2000; Welch and Warren, 1986). Several studies demonstrate that, in the presence of a crossmodal integration, one sensory modality dominates over the others (Rock and Victor, 1964). This priority is assigned as a function of the type of property of the external world that we want to inspect, such as roughness or surface density (Lederman et al., 1986). But the dominance of one sensory modality over the others in creating a percept is not the rule. In fact, the information coming from the different sensory modalities has to be integrated in order to form a coherent multisensory percept. This integration does not weigh the different modalities equally; each one contributes with a different weight to the final perception. In particular, their relative weights are strictly related to the quality of the information (Ernst and Banks, 2002). In other words, even if there is a dominant sensory modality, there is always a small but consistent influence of the other modalities on the integrated percept, and discrepancies are always resolved in favor of the more precise or more appropriate modality. In the specific case of the proposed experiment, we observe the phenomenon of visual dominance over touch, known in the literature as "visual capture". The spatial resolution issue further strengthens the evidence of a visual capture effect in our experiment: tactile spatial resolution (the minimum distance between two points that humans can perceive is 1 mm (Loomis, 1981)) is coarser than visual spatial resolution (Craig and Johnson, 2000).
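Ernst and Banks (2002) make this weighting precise: in a maximum-likelihood integrator, each modality's weight is inversely proportional to the variance of its estimate. A minimal numeric sketch (with illustrative values only) shows how a low-variance visual cue comes to dominate the fused percept:

    # Maximum-likelihood cue combination (after Ernst and Banks, 2002):
    # each cue is weighted by its inverse variance. Values are illustrative.
    import numpy as np

    def mle_combine(estimates, variances):
        inv_var = 1.0 / np.asarray(variances, dtype=float)
        weights = inv_var / inv_var.sum()
        fused = float(np.dot(weights, np.asarray(estimates, dtype=float)))
        fused_variance = 1.0 / inv_var.sum()   # never worse than the best cue
        return fused, fused_variance, weights

    # Vision as the more reliable (lower-variance) cue dominates the percept:
    # the 'visual capture' phenomenon discussed above.
    percept, var, w = mle_combine([10.0, 12.0], [0.5, 2.0])  # visual, haptic
    print(w)        # [0.8, 0.2] -> vision carries 80% of the weight
    print(percept)  # 10.4, pulled toward the visual estimate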

Fig. 12. A user performing the stiffness evaluation task on the two virtual prisms.

Fig. 11. Path length graphs. (For interpretation of the references to color in this figure, the reader is referred to the web version of this article.)


Fig. 14. Stiffness difference between the constant and the variable stimuli (a) and time of choice (b).

Fig. 13. Weber fractions measured for each user in Scenarios 3 and 4.

In conclusion, it is possible to assert that, even though the subjective data gathered in the preliminary study do not demonstrate any advantage of the proposed technique, the objective data show an improvement in the naturalness and realism of the interaction in the VHMR environment.

10. Conclusions

VHMR is an emerging technology that enables a variety of promising applications in different fields, some of which are already being applied with success in medical practice. These technologies are therefore set to play an important role, and engineers, technologists and neuroscientists must cooperate in order to realize ever more capable haptic and interaction devices that overcome the current limitations. While this technological progress takes place, equal importance must be given to research that continues to investigate novel and efficient solutions and techniques to overcome or bypass the above-mentioned limitations.

Our research has focused on the investigation of two typical problems of VHMR environments caused by the haptic interaction devices currently adopted. It also deals with the experimentation and testing of two different techniques that have been specifically developed to reduce the influence of these problems on user interaction with VHMR systems. In particular, the mechanical limitations and the physical presence of haptic devices compromise user perception from both a visual and a kinesthetic point of view, and reduce the naturalness and realism of the multimodal virtual experience.

First, we have analyzed the visual obtrusion in VHMR systems and tested a novel computational camouflage solution that enables an unobstructed immersion using an AR-HMD and a bulky commodity haptic device. Data acquired in the test show that the visual obtrusion produced by a haptic device has a negative impact on the user's performance in a MR environment, confirming the reports of the preliminary study. A comparative analysis of VHMR environments with and without computational camouflage showed that the adoption of the camouflage improves the user's performance in terms of task completion time and path length; conversely, without the camouflage we measured a significant increase in task completion times and in path lengths, caused by the visual and kinesthetic perception of the haptic device. In conclusion, these results demonstrate that the proposed computational camouflage approach succeeds at improving the user's performance in a VHMR environment and increases the naturalness of the interaction.

Second, we have explored and tested the influence of virtual tool misalignment on user performance with the adoption of a simple technique, developed according to basic notions about user perception, that compensates for misalignment by redrawing the user's hand with an artificial displacement. The results objectively show that visual cues affect the human perception of mechanical stiffness in MR environments and, when inconsistent, compromise the naturalness of the overall perception, leading to a wrong interpretation of the simulated metaphors. In particular, in the experiment on the discrimination of the stiffness of two virtual prisms, we observed lower average JND and Weber Fraction values when subjects performed the experiment with misalignment correction, where lower JND and WF values correspond to a higher stiffness discrimination ability. Hence, we conclude that our technique counterbalances the sensory inconsistency that comes from the virtual tool misalignment, providing a more efficient and natural interaction in the MR scene.

References

Aleotti, J., Denaro, F., Caselli, S., 2010. Object manipulation in visuo-haptic augmented reality with physics-based animation. In: Proceedings of the IEEE RO-MAN, pp. 38–43.

Asai, K., Kobayashi, H., Takase, N., 2008. Palm-on haptic environment in augmented reality. In: Proceedings of the International Conference on Human Computer Interaction (IASTED-HCI 08), pp. 68–73.

Aoyama, H., Kimishima, Y., 2009. Mixed reality system for evaluating designability and operability of information appliances. Int. J. Interact. Des. Manuf. 3 (3), 157–164.

Azuma, R.T., 1997. A survey of augmented reality. Presence: Teleoperators Virtual Environ. 6 (4), 355–385.

Bayart, B., Didier, J.Y., Kheddar, A., 2008. Force feedback virtual painting on real objects: a paradigm of augmented reality haptics. In: Proceedings of Eurohaptics, pp. 776–785.

Barbieri, L., Angilica, A., Bruno, F., Muzzupappa, M., 2012. Mixed prototyping with configurable physical archetype for usability evaluation of product interfaces. Comput. Ind.

Bau, O., Poupyrev, I., 2012. REVEL: tactile feedback technology for augmented reality. ACM Trans. Graph. (TOG) 31 (4), 89.

Bianchi, G., Jung, C., Knoerlein, B., Szekely, G., Harders, M., 2006. High fidelity visuo-haptic interaction with virtual objects in multi-modal AR systems. In: Proceedings of ISMAR.

Boehm-Davis, D.A., Holt, R.W., 2004. The science of human performance: methods and metrics. Adv. Hum. Perform. Cognit. Eng. Res. 5, 157–193.

Bordegoni, M., Ferrise, F., Ambrogio, M., Caruso, F., Bruno, F., 2010. Data exchange and multi-layered architecture for a collaborative design process in virtual environments. Int. J. Interact. Des. Manuf. (IJIDeM) 4 (2), 137–148.

Brederson, J.D., Ikits, M., Johnson, C.R., Hansen, C.D., 2008. The Visual Haptic Workbench. In: Proceedings of the PHANToM User Group Workshop.

Bruno, F., Angilica, A., Cosco, F., Muzzupappa, M., 2013. Reliable behaviour simulation of product interface in mixed reality. Eng. Comput., 1–13.

Buehler, C., Bosse, M., McMillan, L., Gortler, S.J., Cohen, M.F., 2001. Unstructured lumigraph rendering. In: Proceedings of ACM SIGGRAPH.

Cosco, F.I., Garre, C., Bruno, F., Muzzupappa, M., Otaduy, M.A., 2012. Visuo-haptic mixed reality with unobstructed tool-hand integration. IEEE Trans. Vis. Comput. Graph. 99, 159–172.

Cosco, F.I., Garre, C., Bruno, F., Muzzupappa, M., Otaduy, M.A., 2009. Augmented touch without visual obtrusion. In: Proceedings of the 8th IEEE/ACM International Symposium on Mixed and Augmented Reality, ISMAR 2009, Orlando, FL, pp. 99–102.

Craig, J.C., Johnson, K.O., 2000. The two-point threshold: not a measure of tactile spatial resolution. Curr. Dir. Psychol. Sci. 9 (1), 29–32.

de Araújo, B.R., Guerreiro, T., Fonseca, M.J., Jorge, J.A., Pereira, J.M., Bordegoni, M., Ferrise, F., Covarrubias, M., Antolini, M., 2010. An haptic-based immersive environment for shape analysis and modelling. J. Real-Time Image Process. 5 (2), 73–90.

Debevec, P., Taylor, C., Malik, J., 1996. Modeling and rendering architecture from photographs. In: Proceedings of ACM SIGGRAPH.

De Gersem, G., 2005. Kinesthetic Feedback and Enhanced Sensitivity in Robotic Endoscopic Telesurgery (Ph.D. thesis). Katholieke Universiteit Leuven, Belgium.

Ernst, M.O., Banks, M.S., 2002. Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415 (6870), 429–433.

Garre, C., Otaduy, M.A., 2009. Haptic rendering of complex deformations through handle-space force linearization. In: Proceedings of the World Haptics Conference.

Gauthier, G.M., Vercher, J.-L., Mussa Ivaldi, F., Marchetti, E., 1988. Oculo-manual tracking of visual targets: control learning, coordination control and coordination model. Exp. Brain Res. 73 (1), 127–137.

Harders, M., Bianchi, G., Knoerlein, B., 2007. Multimodal augmented reality in medicine. In: Universal Access in Human-Computer Interaction (Ambient Interaction). Springer, Berlin Heidelberg, pp. 652–658.

Howell, J.N., Williams II, R.L., Conatser Jr., R.R., Burns, J.M., Eland, D.C., 2008. Training for palpatory diagnosis on the virtual haptic back: performance improvement and user evaluations. J. Am. Osteopathic Assoc. 108, 29–36.

Inami, M., Kawakami, N., Tachi, S., 2003. Optical camouflage using retro-reflective projection technology. In: Proceedings of the 2nd IEEE/ACM International Symposium on Mixed and Augmented Reality. IEEE Computer Society, p. 348.


Inami, M., Kawakami, N., Sekiguchi, D., Yanagida, Y., Maeda, T., Tachi, S., 2000. Visuo-haptic display using head-mounted projector. In: Proceedings of IEEE Virtual Reality, pp. 233–240.

Ishii, M., Sato, M., 1994. A 3D spatial interface device using tensed strings. Presence 3 (1), 81–86.

Jones, L.A., Hunter, I.W., 1990. A perceptual analysis of stiffness. Exp. Brain Res. 79, 150–156.

Karadogan, E., Williams, R., Howell, J., Conatser Jr., R., 2010. A stiffness discrimination experiment including analysis of palpation forces and velocities. Simul. Healthc. 5 (5), 279.

Klatzky, R., Loomis, J., Lederman, S.J., Wake, H., Fujita, N., 1993. Haptic identification of objects and their depictions. Percept. Psychophys. 54, 170–178.

Klatzky, R., Lederman, S.J., Reed, C., 1987. There's more to touch than meets the eye: the salience of object attributes for haptics with and without vision. J. Exp. Psychol.: Gen. 116 (4), 356–369.

Knoerlein, B., Szekely, G., Harders, M., 2007. Visuo-haptic collaborative augmented reality ping-pong. In: Proceedings of the International Conference on Advances in Computer Entertainment Technology, ACM, pp. 91–94.

Lederman, S.J., Klatzky, R.L., 2009. Haptic perception: a tutorial. Atten. Percept. Psychophys. 71 (7), 1439–1459.

Lederman, S.J., Klatzky, R.L., 1987. Hand movements: a window into haptic object recognition. Cogn. Psychol. 19 (3), 342–368.

Lederman, S.J., Thorne, G., Jones, B., 1986. The perception of texture by vision and touch: multidimensionality and intersensory integration. J. Exp. Psychol.: Hum. Percept. Perform. 12, 169–180.

Lewis, J.R., 1994. Sample sizes for usability studies: additional considerations. Hum. Factors 36, 368–378.

Loomis, J.M., 1981. Tactile pattern perception. Perception 10 (1), 5–27.

Miles, H.C., Pop, S.R., Watt, S.J., Lawrence, G.P., John, N.W., 2012. A review of virtual environments for training in ball sports. Comput. Graph. 36 (6), 714–726.

Millet, G., Lécuyer, A., Burkhardt, J.M., Haliyo, S., Régnier, S., 2013. Haptics and graphic analogies for the understanding of atomic force microscopy. Int. J. Hum.-Comput. Stud.

Muller, M., Gross, M., 2004. Interactive virtual materials. In: Proceedings of Graphics Interface 2004. Canadian Human-Computer Communications Society, pp. 239–246.

Ortega, M., Coquillart, S., 2005. Prop-based haptic interaction with co-location and immersion: an automotive application. In: IEEE International Workshop on Haptic Audio Visual Environments and their Applications, IEEE, 6 pp.

Otaduy, M.A., Tamstorf, R., Steinemann, D., Gross, M., 2009. Implicit contact handling for deformable objects. Computer Graphics Forum. Blackwell Publishing Ltd., pp. 559–568.

Posner, M.I., 1980. Orienting of attention. Q. J. Exp. Psychol. 32 (1), 3–25.

Pusch, A., Steinicke, F., 2012. Manipulative augmented virtuality for modulating human perception and action. CyberTherapy & Rehabilitation Magazine.

Pusch, A., Martin, O., Coquillart, S., 2009. HEMP—hand-displacement-based pseudo-haptics: a study of a force field application and a behavioural analysis. Int. J. Hum.-Comput. Stud. 67 (3), 256–268.

Rock, I., Victor, J., 1964. Vision and touch: an experimentally created conflict between the two senses. Science 143, 594–596.

Slater, M., Usoh, M., 1993. Presence in immersive virtual environments. In: Proceedings of the IEEE Virtual Reality Annual International Symposium (VRAIS), September 18–22, Seattle, Washington, pp. 90–96.

Spence, C., Pavani, F., Driver, J., 2000. Crossmodal links between vision and touch in covert endogenous spatial attention. J. Exp. Psychol.: Hum. Percept. Perform. 26, 1298–1319.

Stevenson, D., Smith, K., Mclaughlin, J., Gunn, C., Veldkamp, J., Dixon, M., 1999. Haptic workbench: a multisensory virtual environment. In: Electronic Imaging '99. International Society for Optics and Photonics, pp. 356–366.

Tan, H.Z., Durlach, N.I., Beauregard, G.L., Srinivasan, M.A., 1995. Manual discrimination of compliance using active pinch grasp: the roles of force and work cues. Percept. Psychophys. 57, 495–510.

Tarrin, N., Coquillart, S., Hasegawa, S., Bouguila, L., Sato, M., 2003. The stringed haptic workbench: a new haptic workbench solution. Computer Graphics Forum. Blackwell Publishing, Inc., pp. 583–589.

Tsotsos, J.K., 1990. Analyzing vision at the complexity level. Behav. Brain Sci. 13, 423–469.

Vercher, J.-L., Quaccia, D., Gauthier, G.M., 1995. Oculo-manual coordination control: respective role of visual and non-visual information in ocular tracking of self-moved targets. Exp. Brain Res. 103 (2), 311–322.

Virzi, R.A., 1992. Refining the test phase of usability evaluation: how many subjects is enough? Hum. Factors: J. Hum. Factors Ergon. Soc. 34 (4), 457–468.

Welch, R.B., Warren, D.H., 1986. Intersensory interactions. In: Boff, K.R., Kaufman, L., Thomas, J.P. (Eds.), Handbook of Perception and Human Performance, vol. I: Sensory Processes and Perception. Wiley, New York, pp. 25-1–25-36.

Witmer, B.G., Singer, M.J., 1998. Measuring presence in virtual environments: a presence questionnaire. Presence: Teleoperators Virtual Environ. 7 (3), 225–240.

Wolfe, J.M., 2007. Guided Search 4.0: current progress with a model of visual search. In: Gray, W. (Ed.), Integrated Models of Cognitive Systems. Oxford University Press, New York, pp. 99–119.

Zokai, S., Esteve, J., Genc, Y., Navab, N., 2003. Multiview paraperspective projection model for diminished reality. In: Proceedings of the Second IEEE and ACM International Symposium on Mixed and Augmented Reality, IEEE, pp. 217–226.
