Privacy challenges and methods for virtual classrooms in Second Life Grid and OpenSimulator

Andreas Vilela, Márcio Cardoso, Daniel Martins
Dep. Engenharias, Escola de Ciências e Tecnologia
UTAD-Universidade de Trás-os-Montes e Alto Douro
Vila Real, Portugal
[email protected], [email protected], [email protected]

Arnaldo Santos, Lúcia Moreira
Portugal Telecom Inovação
Aveiro, Portugal
[email protected], [email protected]

Hugo Paredes, Paulo Martins, Leonel Morgado
GECAD – Grupo de Investigação em Engenharia do Conhecimento e Apoio à Decisão
Vila Real, Portugal
[email protected], [email protected], [email protected]

Abstract—Mass adoption of virtual world platforms for education and training implies efficient management of computational resources. In Second Life Grid and OpenSimulator, commonly used for this purpose, a key resource is the number of servers required to support educational spaces. Educational activities can take place at different altitudes over the same virtual land, for different classes. In this way, a single virtual world server can sustain several different educational spaces/classes, reducing the number of servers needed to make available different classrooms or other educational spaces. One issue whose importance is emphasized in such conditions is that of class privacy, bearing in mind that most privacy-management features of these platforms are land-based, not space-based. In this paper, we provide an overview of the issues to consider when planning privacy in these platforms and the methodologies that can be developed and implemented to ensure it at an adequate level, including the extra privacy possible in OpenSimulator relative to Second Life Grid.

Keywords-privacy, access control, virtual learning, virtual classroom, Second Life, OpenSimulator

I. INTRODUCTION

Typically, on-line training is provided in asynchronous mode, with little interactivity between trainees or with the trainer, even though there has been an increase in the use of virtual classrooms, usually text-based, audio-based or video-supported conferences, with or without a supporting shared medium, using software such as WebEx or Elluminate; these have had limited success in terms of interaction and engagement [1, pp. 3-4].

In contrast, educational use of virtual worlds for synchronous learning has achieved successful levels of interaction and engagement, as well as enlarging the scope of educational activities doable on-line, as attested by recent special issues of the British Journal of Educational Technology (http://www3.interscience.wiley.com/journal/122324711/issue) and the Association for Learning Technology Journal (http://www.informaworld.com/smpp/title~db=all~content=g906960349~tab=toc).

Although there are many 3D virtual worlds and virtual world platforms, Second Life has been the most used for adult-oriented education and training. Here we distinguish between the technological platform, Second Life Grid (SLG), and the virtual world it supports, Second Life (SL). SL has its specific features, such as economy, social dynamics, and pricing models, but from a privacy perspective a key issue is that it is hosted on servers located in facilities of SL’s hosting company, Linden Lab – that is, the servers are accessible to all Internet users. However, the technological platform SLG can be and has been used to implement alternative – private – virtual worlds. Information on such private worlds is quite limited, mostly coming from news sources (e.g., [3;13]). Also, since the communication protocol between SLG virtual world servers and the client software running on users’ computers is open, many platform-related concepts are shared with an alternative platform, called OpenSimulator, which implemented most of the SL server functionality based on the operation of the public protocol [8]. This platform is both open source and freely available for installation on any computer; it can therefore be installed on an organization’s own servers. Thus, we will also analyse OpenSimulator here.

SLG/OpenSimulator virtual worlds share many common features, such as being based on areas of 256 m x 256 m of virtual space, enabling users to control their viewpoint or “camera” independently of their avatar’s movement, basing media access and voice privacy on virtual area subdivisions, and enabling both physics-enabled and physics-free elements (e.g., those that stand on the ground or fall, and those that hover). The underlying implementations of these platforms are different, but they are typically transparent from the client software’s point of view (for instance, SLG servers store some information in a file server whereas OpenSimulator uses a database for the same purpose, but the client software is unaware of this [11]).

These common features present some challenges for organizations that wish to use SLG/OpenSimulator for large-scale education and training. In particular, if several different training groups are to use the virtual space, virtual classrooms may need to be created at different altitudes over the same patch of virtual land, in order to maximize the use of computational resources, since each server manages a limited amount of land, as mentioned above. In such situations, the most obvious privacy measures available in these platforms are not sufficient to ensure privacy. In this article we describe the various aspects that must be taken into account when dealing with privacy management of training groups/classes in SLG/OpenSimulator, and provide a list of possible solutions.

II. TECHNICAL ENVIRONMENT OF VIRTUAL CLASSROOMS IN SLG/OPENSIMULATOR

The concept of a virtual space is in itself unrestricted: one may idealize it as a simulation of reality, or as alternative realities, in the domain of fiction, unbound only by imagination. However, virtual worlds are based on physical computer systems and software platforms, which enable different features and place various constraints, for engineering, social-political or arbitrary reasons [7], like the ability or inability to teleport directly from one location to another, or the ability/inability to roam freely. The various features and constraints of virtual world platforms are the technical environment one must bear in mind.

Virtual worlds based on the SLG/OpenSimulator platforms are structured around virtual land regions, also called simulators or “sims”, typically representing 256 m x 256 m of virtual land and up to 4096 m of altitude of virtual space in SLG (10000 m in OpenSimulator). Each region is supported by a server, which can have its own technical specifications and configuration, from low-level issues such as hardware configuration and optimization, to higher-level issues such as sea altitude, how the time of day changes, or which users can subdivide the region into smaller parcels, which in turn can have even higher-level settings, such as a user access list, detailed user permissions (e.g., ability/inability to create objects), etc. Furthermore, regions can be adjacent to one another, in a bidimensional map organization known as a grid, forming larger areas of virtual land (in SL, some of these are known as mainland or continents). A grid can also hold non-adjacent regions, surrounded by non-traversable sea (usually called islands, although their representation is freely decided by the region manager: it could be simply traversable sea or an archipelago, not necessarily an actual island).

This underlying structure means that an organization wishing to use SLG/OpenSimulator virtual worlds regularly for education/training purposes must manage one or several regions of 256 m x 256 m. While a region this size may be adequate for large groups of participants, for conferences or lectures, it is far larger than the area used by a training group of 10 to 20 trainees in a classroom situation. Even considering that a group may use this entire area for a complex role-playing activity or for jointly developing activities involving 3D modelling, one must not forget that the server manages all virtual space up to 4096 m of altitude – in many cases, plain unused space. If we consider a training plan for bank clerks, for instance, where altitude is not of the essence, we could reserve 100 m of altitude for each group of trainees, assign the full 256 m x 256 m area at each 100 m level, and thus use a single server for 41 training groups (the last one having 4 fewer meters of headroom: a “mere” 96 m), thus 410 to 820 users. Obviously, these numbers are for illustration purposes only: we could have smaller or larger class sizes, subdivide each 256 m x 256 m horizontal area even further, or use more or less than 100 m of altitude between levels, and reach different results.

A different consideration is whether a server would be able to handle high numbers of simultaneous avatars: e.g., in SL virtual world regions, the default (and recommended) avatar limit is only 40, albeit configurable by the user [4], with a limit of 100 not being unheard of. Another consideration is whether the users’ computers will be able to handle the likely increase in visual complexity, although this is more easily managed using virtual architecture approaches such as solid objects that cause occlusion (a restriction likely to lessen as technology evolves). However, if one considers typical distance training situations, not all users need to be online at the same time. We are focusing on situations where each training group will need to maintain its virtual training space between training sessions, but not necessarily be online at the same time as all other training groups. In this typical case, server loads of less than 40 avatars can be enforced by using training schedules, and the full space of a server can be optimized for the educational content of the various training groups.

This optimization may be further enhanced: if access is scheduled, it’s conceivable that virtual content can be stored and redeployed between scheduled sessions, thus further increasing the number of training groups that a single SLG/OpenSimulator region can hold.

For instance, suppose a training programme of 2000 trainees only requires a single 1-hour virtual world session per week for each 20-person group (100 groups in all). With 2 groups online at the same time, keeping within the 40-avatar default limit, a total of 2000/(20*2) = 50 training periods/hours would suffice. Considering that these could be scheduled between 9 AM and 5 PM (8 hours/day), then 6 days and 2 hours would be enough to schedule all 100 training groups. If there were only 41 spaces available, as in the former example, this would imply the use of a storage-and-retrieval system to allow the use of the virtual space by all groups.

There are already several mechanisms providing the underlying functionality of such storage-and-retrieval systems in SLG/OpenSimulator, like various holodecks (e.g., [12]) or Amazon S3 backups (e.g., [10]). Our team is also actively developing such a storage-and-retrieval system, but that is not the focus of this paper. What we wish to put forward is that this situation raises specific challenges in terms of the privacy of training groups. Indeed, in many business training situations it may be undesirable for a training group to have access to the events taking place within another group, and not only for pedagogic or assessment reasons: we can envisage a situation where the virtual space is being provided not to a single company’s training groups, but rather to various companies, as a virtual space provider service; in such a case, privacy becomes an even more pressing concern.

The basic requirement, whose fulfilment will be discussed in the coming sections, is that privacy control must take place dynamically both in time (i.e., related to which training group is using the space) and in space (i.e., related to the volume and geometry of the training space used by each group). SLG/OpenSimulator offer a per-parcel access control method and voice privacy option, which is based on the geography of the virtual land, but this does not solve the issue of having different groups of users, with different access privileges, at varying altitudes and possibly different spatial geometries at each altitude. Nor does it address other types of privacy issues, which we discuss further ahead.

III. PRIVACY CHALLENGES IN SLG/OPENSIMULATOR

Privacy comes in two forms: avoiding eavesdropping by external parties, and being able to enjoy the training session without disruptions caused by external parties. From this viewpoint, to have total privacy in the SLG/OpenSimulator platforms, it is necessary to bear in mind a set of elements: avatar presence, voice communication, text chat, camera controls, and object/particle creation. In some cases, privacy can be ensured programmatically and transparently; in others, programmatically but with an impact on the user’s experience; and in yet other cases management decisions may be required.

A. Avatar presence

The main elements to control in order to ensure privacy are users’ avatars. An avatar’s position determines most operational features of SLG/OpenSimulator impacting privacy, as described in the following sections. Furthermore, the mere presence of an undesired avatar in a training session may be reason for discomfort or other complications, like increased graphical rendering load at other users’ computers.

Avatars may move around a SLG/OpenSimulator virtual world in four different ways: by walking or running on a surface; by flying; by teleporting to specific coordinates; and by being dragged or pushed by scripts in objects.

As is the case with several settings mentioned in the coming sections, three of these (flying, teleporting, running scripts) may be deactivated in SLG/OpenSimulator parcels, but not separately for different altitudes, and so alternative methods of preventing avatar access to a training room – or indeed, to its vicinity – must be implemented, as detailed in section IV.

B. Voice communication

In SLG/OpenSimulator, there are three types of voice communication: the spatial voice channel, group voice chat, and one-to-one voice chat. The spatial voice channel enables a user to talk and be heard by all avatars nearby (this is actually more complex, as we’ll explain shortly), and is the focus of the privacy issues. Group voice chat is a private conversation between group members, and thus privacy can be ensured as long as group membership is automated, based on the training sessions’ scheduling. One-to-one voice chat is a direct conversation between individual users, which can even fall outside the realm of the training session, and is therefore not a privacy issue of virtual worlds specifically (it could be a security issue, but that is beyond the scope of this paper).

In SLG, the listening distance for spatial voice is 60 meters from the listening position, with the volume having a distance fall-off for spatial effect, but which can be overridden. However, this listening position can be set to operate from the camera position, not the avatar’s position, and so even with default camera settings the listening distance can be 110 meters [5]; or up to 512 meters, if one elects to disable camera constraints, a mere user-selectable setting [2], related to the draw distance (see section on camera control, below), and thus not enforced by the communication protocol.

In OpenSimulator, voice can be implemented via autonomous modules, and thus have different specifications in each server – but this also simplifies implementation of privacy, as we’ll explain ahead.

C. Text chat

The original and still common way to communicate in SLG/OpenSimulator is through text messages. These can be one-to-one or one-to-many private conversations, known as instant messaging, which being private in nature do not concern us at the moment; or group messages, with privacy implementable by automating group membership based on the training sessions’ scheduling; or public text chat, which is where our privacy concern lies.

Text messages can be “spoken” or “heard” by avatars and scripted objects; these messages are broadcast through channels, designated by the “speaker”, with messages sent on channel 0 being “public” (visually presented to human users on the screen), and messages sent on any other channel only “listened” to by scripted objects – one must be aware that they are also being broadcast, and not encrypted or protected in any way. In SLG, messages are heard only up to 20 m from the location of the “speaker” (object or avatar), with two exceptions: users can “shout”, and make themselves heard up to 96 m; and objects may issue a specific script command (llRegionSay) in order to issue a message that can be “heard” in an entire region, albeit not on channel 0, and thus not disrupting conversations, but potentially usable for eavesdropping [6]. OpenSimulator is identical, except that these distances are configurable for each server.
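
To make the channel mechanism concrete, the following minimal LSL sketch shows how a scripted object could capture public chat spoken near it and rebroadcast it region-wide on a hidden channel, which is exactly the eavesdropping risk discussed above; it is an illustration only, and the relay channel number is an arbitrary example, not a value prescribed by the platforms.

    // Illustrative eavesdropping sketch; the channel number is arbitrary.
    integer RELAY_CHANNEL = -987654;

    default
    {
        state_entry()
        {
            // Hear public chat (channel 0) spoken within roughly 20 m of this object.
            llListen(0, "", NULL_KEY, "");
        }

        listen(integer channel, string name, key id, string message)
        {
            // Relay what was heard to the whole region on a hidden channel;
            // llRegionSay() cannot use channel 0, so human users see nothing,
            // but any script listening on RELAY_CHANNEL anywhere in the region receives it.
            llRegionSay(RELAY_CHANNEL, name + ": " + message);
        }
    }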

D. Objects/particles

In SLG/OpenSimulator, users may create two different types of elements: objects and particles. An object is a set of geometric shapes, known as primitives or prims. It can contain scripts, and is a persistent asset of the virtual world. A particle is a short-lived visual element, produced by an object-based script, but is not an asset of the virtual world: it is a mere visual feature that is rendered by the graphics engine using physical features such as rotation, speed, colour, decay, etc. They are used for visual effects such as sparks, water droplets, etc.

Object creation and script execution can be allowed or denied on a parcel basis, but once again this does not solve our situation, where training rooms can be located at varying altitudes, not just at ground level. A forefront issue with object creation is that it can be used to disrupt training sessions, either by objects being obtrusive/inappropriate or by troublesome scripted behaviours (for instance, an object that “speaks” constantly can render text chat virtually useless). While objects are traceable to a specific creator and owner, social engineering techniques may be used to render such simple tracing mechanisms useless: for instance, a user may be tricked into creating an object and providing it to another user, who then adds the scripted behaviours and returns it to the original user, thus short-circuiting the tracing.

Another issue with object creation is that it can be used to eavesdrop on training sessions, by communicating the content of text chat, reporting the arrangement of objects, logging the positioning of avatars, etc. This communication can be done not only to other objects or users, but indeed to servers external to the virtual world, using the HTTP protocol or e-mail.

Finally, objects may be created in two ways: by requesting their creation from the server ad hoc, depending on the parcel’s permissions; or by attaching them to customize one’s avatar, regardless of parcel permissions. We’ve discussed the issues with ad hoc creation above, but one must emphasize that attached objects can execute scripts, just like any other object.

Particles need an object to be created, but being entirely visual do not contain scripts and cannot be used for eavesdropping; however, they can be produced with any graphic appearance or texture, and if this is done in enormous numbers, it is an effective disruption strategy. Individual users can deactivate particle rendering on their local software clients, but may not be aware of how to do it or the software may slow down to a halt trying to render all particles, before the user has the opportunity to deactivate the rendering.
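
As an illustration of how little is required, the following minimal LSL sketch makes the object containing it emit a continuous stream of particles; all parameter values are arbitrary examples, and rendering of the resulting particles happens entirely on each user’s client, as noted above.

    // Minimal particle emitter; parameter values are arbitrary examples.
    default
    {
        state_entry()
        {
            llParticleSystem([
                PSYS_PART_FLAGS, PSYS_PART_EMISSIVE_MASK,   // self-lit particles
                PSYS_PART_START_COLOR, <1.0, 1.0, 1.0>,     // particle colour
                PSYS_PART_START_SCALE, <0.5, 0.5, 0.0>,     // particle size
                PSYS_PART_MAX_AGE, 5.0,                     // each particle lives 5 s
                PSYS_SRC_BURST_RATE, 0.1,                   // a new burst every 0.1 s
                PSYS_SRC_BURST_PART_COUNT, 50               // 50 particles per burst
            ]);
        }
    }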

E. Camera control

While SLG/OpenSimulator worlds can be experienced from a first-person perspective, the common situation is that of a third-person perspective, with the default camera position being a few feet behind and above one’s avatar. The camera will show all objects and terrain within the maximum viewing distance or draw distance, that is, how far a visual element can be and still be rendered on the user’s screen, which each user may configure as a distance from the avatar’s position. It can be set between 64 and 512 meters (that is, between a quarter of the horizontal region size and twice the horizontal size of a region). This does not ensure that all objects within this range will be rendered or even sent to the software client at all, due to SLG/OpenSimulator servers’ optimization strategy of not sending all objects to the client. For example, objects smaller than 1 m that are more than 100 m away are not sent by SLG servers [11, p. 73]; but larger objects are, and these are potential privacy issues (e.g., virtual slide presenters, typically larger than 1 m). Draw distance in itself would not be a major problem: a simple solid wall would block the sight. But in SLG/OpenSimulator client software, users control the camera’s position by pressing Alt while clicking/moving the mouse. A user with default settings may think that camera movements are limited to a somewhat close range in terms of vertical offset and horizontal distance, but by disabling camera constraints [2] a user can in effect navigate the camera at will and zoom in on any element within the draw distance. This does not increase the draw distance, which is measured from the avatar’s position, not the camera’s; but it allows a user to “see” beyond walls simply by moving the camera position beyond the wall.

IV. PRIVACY-SUPPORTING METHODOLOGIES

Due to the nature of how SLG/OpenSimulator operates (see section III), the only way to ensure the privacy of a training session, without compromising functionality, is to implement a single session per region, ensure that such a region does not have contiguous regions in the same grid, disable script execution for objects not belonging to the region’s owner, disallow object creation, and mandate that users log in only with prescribed avatars, created and managed by the training manager (to ensure that users are using avatars that do not have scripted attachments). And even so, this only suffices in a region that is part of a private SLG/OpenSimulator grid, under tightly controlled conditions; in SL, any avatar – including the ones assigned by the training manager – can receive “offers” from others, and so a user may be tricked into accepting an “offer” and attaching it to his/her avatar in a non-visible manner (for instance, it can have a default attaching position inside the skull or torso; or be a personal display attachment, known as a “heads-up display”, visible only to himself/herself).

However, in many cases some functional shortcomings or privacy risks may be acceptable. For instance, if the training is taking place remotely, but using trainees’ personal home computers, not computers in a controlled business environment, then all the social engineering challenges mentioned above are of the same class as those that could happen on traditional Web-based e-learning platforms (e.g., keyloggers, trojans, viruses), and so arguably acceptable in situations where Web-based platforms are deemed adequate.

We propose that in many cases the security challenges can be lessened by appropriate programmatic and management measures, which, albeit causing some functional shortcomings, when combined with an acceptable level of privacy risk allow better use of resources by letting multiple training sessions take place within the same region.

A. Avatar presence

There are two methods available to control avatar access to a training area. The first is to teleport an undesired avatar away from training areas. This method can be implemented by using a script inside an object responsible for controlling access, issuing one of two functions: llTeleportAgentHome and llEjectFromLand [6]. This object/script will work as a sensor and determine, at any time, if an avatar is present whose access is not allowed. If such an avatar is found, he/she will be teleported to his/her home location or pushed upwards and then away from the area, respectively. Therefore, we must think about what the “home location” of an avatar can be. It is a user-defined setting that can be any place on land owned by the user, owned by a group the user belongs to, or on land designated by grid administrators as infohubs, such as welcoming areas for new users. By default, it is the welcome area where the user first accessed the virtual world.
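
A minimal LSL sketch of such a sensor object follows. The allow-list names, scan range and scan rate are arbitrary examples, and the sketch assumes the object is owned by the same account (or group) that owns the land, since llTeleportAgentHome generally only affects avatars over land owned by the object’s owner.

    // Sensor object that expels avatars not on an allow-list (example values only).
    list  ALLOWED    = ["Trainer Resident", "Trainee01 Resident"];  // example names
    float SCAN_RANGE = 30.0;   // metres around the sensor object
    float SCAN_RATE  = 5.0;    // seconds between scans

    default
    {
        state_entry()
        {
            // Repeatedly look for avatars (AGENT) in a full sphere (PI arc).
            llSensorRepeat("", NULL_KEY, AGENT, SCAN_RANGE, PI, SCAN_RATE);
        }

        sensor(integer detected)
        {
            integer i;
            for (i = 0; i < detected; ++i)
            {
                if (llListFindList(ALLOWED, [llDetectedName(i)]) == -1)
                {
                    // Send the intruder to his/her home location...
                    llTeleportAgentHome(llDetectedKey(i));
                    // ...or, alternatively, eject him/her from the parcel:
                    // llEjectFromLand(llDetectedKey(i));
                }
            }
        }
    }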

Both functions have their shortcomings, especially in SLG: llTeleportAgentHome can be circumvented if a user belongs to the group owning the land and decides to set his/her home location to the very place from which he/she is being teleported. This means that particular care must be taken not to use the same group for land ownership and other purposes, i.e., only administrative users should be assigned to the group that owns the land where training is taking place. This is an acceptable solution, but the main shortcoming is that Linden Lab has announced that support for this function would likely be removed from SL scripting sometime in the future.

Figure 1. Avatar at corner of region from which she was expelled with llEjectFromLand()

So, in SL one should also consider using llEjectFromLand. This function does expel an avatar from the training area, with the main shortcoming of not doing it reliably: it simply places the avatar at the nearest region. This will work if a region is surrounded by other regions. However, if there are none (as in the case of Figure 1, which displays what can happen on an isolated region), the avatar is simply placed at the nearest corner of that region – still inside it, and perfectly able to follow nearby text and voice chat, move the camera, be seen by others, etc. – and the process will likely repeat itself: since the avatar is still within the region, the script running llEjectFromLand will probably try to eject the avatar again and again. One should bear in mind that an avatar may be intruding unwillingly or without ill intent: one could simply be strolling around, and not take lightly being suddenly teleported to a possibly faraway location (the home location) or being repeatedly ejected into a region corner.

A smoother solution can be achieved in OpenSimulator, by ensuring that an avatar is expelled gracefully from the private training area into a specific reception area. For instance, Figure 2 shows a map with such an area marked: this ground level can be used for overall reception of visitors or training participants, and the area marked on the map can be used specifically for presenting information for visitors/participants that have found themselves there after being expelled from a training room to which they should not have access.

This is not possible in SLG, because it requires a change to the server operation, and SLG’s code is proprietary. But since OpenSimulator’s source code is open, such a change can readily be made. To do this, one needs to run a customized OpenSimulator server version in regions where such behaviour is desired. That customized version is created by recompiling the server source code, after editing the section controlling llTeleportAgentHome, so that it produces a teleport to a specific location in the region – the reception area – rather than to avatars’ home locations (obviously, the coordinates of that location can be hardwired in the code or stored in a configuration database table on a convenient server). This is a trivial change, done in the file ~\OpenSim\Framework\UserProfileData.cs.

Figure 2. Reception location for avatars on the ground level of a region; training rooms are high above.

The second method to control avatar access to a training area is to push the avatar away in a specific direction, using the function llPushObject [6]. This function is also executed by a script running in a sensor object, as described above for llTeleportAgentHome and llEjectFromLand, and applies a specific force to an avatar, its strength and direction set by input parameters; the outcome therefore depends on the avatar’s speed, direction, and mode of movement (Figure 3, Figure 4, Figure 5, and Figure 6).

Figure 3. Avatar entering a private room (walking)

Figure 4. Where it ended after being pushed (while walking)

Figure 5. Avatar entering a private room (flying)

Figure 6. Where it ended after being pushed (while flying)
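
A minimal LSL sketch of this push-based variant follows; the scan range and push strength are arbitrary example values, and, as discussed next, the end position of the pushed avatar cannot be precisely controlled.

    // Sensor object that pushes detected avatars away from itself (example values only).
    float SCAN_RANGE    = 20.0;     // metres
    float PUSH_STRENGTH = 2000.0;   // actual effect depends on avatar physics

    default
    {
        state_entry()
        {
            llSensorRepeat("", NULL_KEY, AGENT, SCAN_RANGE, PI, 2.0);
        }

        sensor(integer detected)
        {
            integer i;
            for (i = 0; i < detected; ++i)
            {
                // Push away from this object, mostly horizontally, with a slight upward component.
                vector dir = llVecNorm(llDetectedPos(i) - llGetPos());
                llPushObject(llDetectedKey(i), PUSH_STRENGTH * (dir + <0.0, 0.0, 0.2>), ZERO_VECTOR, FALSE);
            }
        }
    }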

The major issue with this approach is that it is not possible to determine the outcome exactly, because it is the result of combining the push force with the movement physics of the avatar being pushed. One may get data on an avatar’s position and speed and adjust the push force accordingly, but by the time the push takes place, it is likely that the actual avatar position and speed (amount and direction) will differ from the calculated ones (Figure 7 & Figure 8). This could potentially be exploited to cause a push into the room where privacy is desired or, in the case of users without ill intent, cause disorientation or discomfort to the user being pushed. One cannot employ a “reception area” approach as in the teleport case discussed earlier, because with a push one cannot determine the exact end location of the pushed avatar.

B. Voice communication

As mentioned in the previous section, the focus of privacy issues is the spatial voice channel (also known as “public voice chat”), which potentially allows users to eavesdrop on public conversations up to 512 meters away.

A privacy measure of SLG is that public voice chat can be limited to a parcel, i.e. a subdivision of a region. This allows training sessions conducted in voice to be entirely private, as long as a specific parcel is created specifically for the training area, and there are only a few training sessions per parcel (at varying altitudes, under the same constraints as described ahead in the section on camera control, since voice can be associated with the camera position). A consequence is that field trips or training in large, shared areas cannot be conducted with voice privacy unless group voice is used (implying that a group must exist for each class). (Unlike parcel restrictions on avatar entry, which are only enforced up to an altitude of 60 meters or so, parcel restrictions on voice eavesdropping are independent of altitude.)

In OpenSimulator, voice can be implemented via autonomous modules, and thus privacy can be implemented by dynamically adapting voice switching behaviour. For instance, a trainer wishing to ensure private communication between herself and trainees, during a training session in OpenSimulator, may select a privacy option by acting upon a virtual world object, and that object can respond by contacting a Web service installed at the voice switching server, which, in turn, can deactivate all voice routing between the group of trainees and other users, or just the routing of the spatial voice channel involving the trainees.
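
The following LSL sketch illustrates such an in-world control object. The Web service URL and its parameters are hypothetical, standing for whatever interface a particular OpenSimulator installation’s voice-switching module exposes; they are not part of the platforms themselves.

    // In-world control object: the trainer (object owner) touches it to request voice privacy.
    string VOICE_SERVICE_URL = "https://voice.example.org/privacy";  // hypothetical endpoint

    default
    {
        touch_start(integer total_number)
        {
            // Only react when the object's owner (here, the trainer) touches it.
            if (llDetectedKey(0) == llGetOwner())
            {
                llHTTPRequest(VOICE_SERVICE_URL,
                              [HTTP_METHOD, "POST",
                               HTTP_MIMETYPE, "application/x-www-form-urlencoded"],
                              "action=isolate&group=training-room-3");  // hypothetical parameters
            }
        }

        http_response(key request_id, integer status, list metadata, string body)
        {
            if (status == 200)
                llOwnerSay("Voice privacy enabled for this training room.");
            else
                llOwnerSay("Voice switching service returned status " + (string)status);
        }
    }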

Figure 7. Results: avatar end location after push while walking.

Figure 8. Results: avatar end location after push while flying.

C. Text chat

As explained in section III, regular text chat can be “heard” over 20 m, or 96 m if “shouted” (issued with Ctrl+Enter instead of simply Enter). Since these distances are measured from the avatar’s position, not the camera’s, spacing training areas 20 m apart effectively prevents users (and scripted objects) from eavesdropping. This does not prevent a user/scripted object from disrupting a training session by shouting; for that, a distance of 96 m is required. However, this means that for training sessions where text chat privacy is required, but not camera privacy, several sessions can be held at the same altitude in the same region, closer or further apart depending on the likelihood of disruptive chat (likely in public grids; unlikely in private grids).

There is also the issue of eavesdropping chat done by objects, with scripts that capture chat and then relay it using llRegionSay, sending e-mail, contacting Web servers, or other communication options. We’ll address that in subsection D of the current section, immediately below.

D. Objects/particles

The issues surrounding object and particle creation were discussed in the matching subsection of section III. To avoid them, there are basically two options: region/parcel administrators can turn off object creation completely or allow it only to a group of power users.

Avatar attachments aren’t affected, so these could still be a method of disruption by inadequate appearance – but in the context of training sessions, if we employ avatar presence control, as described earlier, all avatars are identified as trainees/students, and this allows other, management-based, approaches to this problem.

Object scripts pose a more serious problem, since they may operate stealthily to relay communications, avatar presence, etc. Particles, for instance, can only be created by running an object script. To avoid this, region/parcel administrators can disallow script execution completely or limit it to a group of power users.
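
As a small aid to administrators, the following LSL sketch (illustrative only) reads the flags of the parcel it sits on and warns its owner if object creation or the running of outside scripts is still allowed there.

    // Parcel checker: reports whether the parcel under this object restricts creation and scripts.
    default
    {
        touch_start(integer total_number)
        {
            integer flags = llGetParcelFlags(llGetPos());
            if (flags & PARCEL_FLAG_ALLOW_CREATE_OBJECTS)
                llOwnerSay("Warning: anyone may create objects on this parcel.");
            if (flags & PARCEL_FLAG_ALLOW_SCRIPTS)
                llOwnerSay("Warning: scripts in anyone's objects will run on this parcel.");
            if (!(flags & PARCEL_FLAG_ALLOW_CREATE_OBJECTS) && !(flags & PARCEL_FLAG_ALLOW_SCRIPTS))
                llOwnerSay("Parcel restricts object creation and outside scripts.");
        }
    }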

While for some training scenarios these limitations on the creation of objects and the running of scripts suffice, in many they will not. For instance, supposing an educational activity requires a trainee to model something, the simplest method is to provide the trainee with basic modelling components and let him/her create at will; with object creation disabled, the training organization needs to provide a scripted “library” of objects (owned by administrators) with features for duplication and use by trainees – reinventing the wheel, as it were. Scripts are even more critical: if we want a trainee to “wear” a costume or tool in order to practice a procedure, it’s likely that scripts are required to ensure that such elements provide the required behaviours, such as interaction between tools and scenario objects.

Having trainees be part of the group of users allowed to execute scripts or create objects may be acceptable in most cases, but it opens the possibility of eavesdropping by “silent” scripts (i.e., scripts that provide no indication that they are running, except to their creators), or of disruption by chance, by triggering of conditions, or by remote control. If we consider the basic scenario of this paper, where groups of trainees can be taking part in training sessions at different altitudes, allowing a group to execute scripts can result in compromising the privacy of all other groups.

Therefore, training sessions taking place over the same parcel must have similar privacy requirements. Creation of objects without scripts is more of a potential disruption than a privacy compromise, so the key issue is with scripts. If privacy is required alongside the use of scripts by trainees, the recommended approach is to develop such training sessions in the context of a private network/server: while execution of unintended scripts cannot be prevented, the ability of those scripts to communicate with external parties for information or commands is curtailed.

Finally, we may consider the option to engage in an arms race of monitoring and response: objects trying to submit information to avatars elsewhere in a region may do so using region-wide messages, which can be detected by monitoring applications; but the sending of e-mail or HTTP requests to Web servers cannot be detected on SL servers, only on OpenSimulator servers installed on a network under the control of the training organization. Still, scripts could resort to other covert techniques (like storing information in an object’s contents until it is eventually worn in a public location), and such an arms-race approach seems woefully inadequate from a strategic point of view (even if adequate under specific tactical or operational circumstances).
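
The following LSL sketch illustrates both the possibility and the weakness of such monitoring: a listening script only hears channels it explicitly registers for, and the platforms limit the number of simultaneous listens per script, so covering the full channel space is impractical. The channel range below is an arbitrary example.

    // Deliberately limited monitor: watches a small, arbitrary range of hidden channels.
    integer FIRST_CHANNEL     = -1000;  // example starting channel
    integer CHANNELS_TO_WATCH = 50;     // example count, within per-script listen limits

    default
    {
        state_entry()
        {
            integer i;
            for (i = 0; i < CHANNELS_TO_WATCH; ++i)
            {
                llListen(FIRST_CHANNEL - i, "", NULL_KEY, "");
            }
        }

        listen(integer channel, string name, key id, string message)
        {
            llOwnerSay("Traffic on channel " + (string)channel + " from " + name + ": " + message);
        }
    }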

E. Camera control

Given that anything within 512 meters of an avatar is potentially visible, virtual classrooms should be separated by a greater distance. Horizontally, this is not possible in a single region, since a region is at most 256 x 256 meters. Therefore, there can only be one classroom per horizontal level in each region, and regions cannot be adjacent on the grid, nor separated by a single empty region space (256 m), but rather by two empty region spaces. A superficial hands-on analysis may seem to indicate that a single empty region is enough, because when trying to “look” across open space in SL, usually nothing is sent to the client application. However, this is just an optimization strategy of client-server communication, not a reliable behaviour. For instance, all it takes is for a user to teleport to locations on both sides of the open space, so that graphic elements from both sides are transmitted to the local computer: from then on, those elements are visible to the user across the open space and can be used for camera fly-overs.

Regions cannot be spaced vertically, since SLG/OpenSimulator technology does not support it; however, there can be several classrooms arranged vertically in a single region with privacy from camera controls, because we can take advantage of SLG’s 4096 vertical meters and OpenSimulator’s 10000 vertical meters: setting classrooms 512 m of altitude apart, we get 9 virtual classrooms with camera independence in SLG and 19 in OpenSimulator. However, since each must occupy at least a couple of meters of altitude, the actual maximum for SLG is 8 (maximum height per classroom, if all are identical: 512/8 = 64 meters). Regarding the 19 virtual classrooms possible in OpenSimulator, the maximum height per classroom, if all are identical, is (10000 - 18 x 512)/19 = 41 meters, as shown in Figure 9.

Figure 9. Virtual classrooms separated from each other vertically.

V. CONCLUSIONS

This article described the considerations and methodologies to bear in mind to achieve adequate privacy of training sessions in 3D virtual classrooms in SLG and OpenSimulator. The differences between these platforms have been taken into consideration in descriptions and methodologies.

Privacy of training sessions requires control of avatar presence, voice chat, text chat, objects & particles, and camera movements. In a social, open world such as SL, which places user-to-user communication and user customizations at higher priority than local contexts, privacy cannot be simultaneously achieved along all these dimensions – not even using private SL regions; a SLG or OpenSimulator installation in a private network is needed for that level of privacy – to the point of having a private virtual world per training group, to avoid privacy issues between members of different training groups using a single server.

This runs against the basic goal of resource optimization, hence privacy compromises must be made (although, obviously, each specific case of application requires a decision on which compromises are adequate and which aren’t). Absolute text chat secrecy and absolute avoidance of visual disturbances may be unfeasible, but it is quite possible to create situations where such breaches can be detected and are not aided by avatar presence, hence rendering them, if not preventable, at least correctable when they occur. And avatar presence privacy, visual privacy, and voice privacy can be achieved even while pursuing the basic goal of optimizing server usage by having several training spaces per server, at various altitudes.

The quality level of available solutions is higher in OpenSimulator, due to its open-source nature, as was explained in the sections on avatar presence and voice privacy. SLG, by its closed-source nature, does not lend itself to similar approaches, although, if Linden Lab so desired, the features we mention (teleport home destination and dynamic voice switching configuration) could be made available reasonably quickly.

In providing this paper, we hope to have assisted readers interested in conducting large-scale deployments of virtual worlds for teaching and training over SLG and OpenSimulator platforms.

ACKNOWLEDGMENT

This research is developed under grant “MULTIS”, by Portugal Telecom Inovação, as part of its “Plano de Inovação”.

REFERENCES

[1] Clark, R. C., & Kwinn, A. (2007). The New Virtual Classroom: Evidence-Based Guidelines for Synchronous e-Learning. San Francisco, USA: Pfeiffer.

[2] Johnson, K. (2008). Making the Second Life camera go further. In KerryJ’s blog, retrieved on August 10, 2009, from http://blogs.educationau.edu.au/kjohnson/2008/01/15/making-the-second-life-camera-go-further/

[3] Linden, A. (2009). Announcing Second Life "Behind-the-Firewall" Product on Nov 4th, blog post retrieved on November 15th, 2009, from https://blogs.secondlife.com/community/workinginworld/blog/2009/10/28/announcing-second-life-behind-the-firewall-product-on-nov-4th

[4] Linden Research (2009a). Region Performance Improvement Guide. Retrieved August 10, 2009, from the Second Life® Wiki website at http://wiki.secondlife.com/wiki/Region_Performance_Improvement_Guide

[5] Linden Research (2009b). How far does my voice carry?. Retrieved on August 10, 2009, from the Second Life® Wiki website at http://wiki.secondlife.com/wiki/How_far_does_my_voice_carry%3F

[6] Linden Research (2009c). LSL Portal. Retrieved on August 10, 2009, from the Second Life® Wiki website at http://wiki.secondlife.com/wiki/LSL_Portal

[7] Murphy, C. (2005). Arbitrary Limitations in Second Life. SpinMass. Retrieved July 29, 2009, from http://spinmass.blogspot.com/2005/11/arbitrary-limitations-in-second-life.html

[8] OpenSimulator (2009). Main Page – OpenSim. Retrieved July 21, 2009 from OpenSimulator website: http://opensimulator.org/wiki/Main_Page

[9] PT Inovação (2009). Formare | soluções globais de eLearning e bLearning. Retrieved on August 10, 2009, from http://www.formare.pt/

[10] RIT Software (2009). OpenSim Amazon Readme. Retrieved on August 10, 2009, from the RIT Software website at http://www.ritsoftware.com/Portals/6/opensim/Readme.txt

[11] Sequeira, L. (2009). Mechanisms of three-dimensional content transfer between the OpenSimulator and Second Life Grid platforms. Master dissertation. Vila Real, Portugal: UTAD.

[12] Urban, R., Marty, P. F., & Twidale, M. B. (2007). A Second Life for Your Museum: 3D Multi-User Virtual Environments and Museums. In Trant, J. & Bearman, D. (Eds.), Museums and the Web 2007: Proceedings. Toronto: Archives & Museum Informatics. Retrieved August 10, 2009, from http://www.archimuse.com/mw2007/papers/urban/urban.html

[13] Virtual World News (2008). IBM Takes Second Life Behind Firewalls. Retrieved August 10th, 2009, from Virtual World News website: http://www.virtualworldsnews.com/2008/04/ibm-takes-secon.html