Webized 3D Experience by HTML5 Annotation in 3D Web

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. Web3D '15, June 18 - 21, 2015, HERAKLION, Greece © 2015 ACM. ISBN 978-1-4503-3647-5/15/06…$15.00 DOI: http://dx.doi.org/10.1145/2775292.2775301

Webized 3D Experience by HTML5 Annotation in 3D Web Daeil Seo* Byounghyun Yoo† Heedong Ko‡

Korea Institute of Science and Technology / University of Science and Technology

Figure 1: Web annotation example of the 3D user experience with a 3D object using HTML elements: (a) text and video streaming annotation on 3D planet objects in the solar system, and (b) changing the camera perspective of the solar system

Abstract

With the development of 3D Web technologies, 3D objects are now handled as embedded objects on web pages without plug-ins. Although declarative 3D objects are physically integrated into web pages, 3D objects and HTML elements are still separated from the perspective of the 3D layout context, and an annotation method is lacking. Thus, it is scarcely possible to add meaningful annotations related to target 3D objects using existing web resources. In addition, people often lose the relationship between the target and the related annotation objects in a 3D context due to the separation of the content layouts in different 3D contexts. In this paper, we propose a webizing method for annotating user experiences with 3D objects in a 3D Web environment. The relationship between the 3D target object and the annotation object is declared by means of web annotations, and the related objects are rendered with a common 3D layout context and camera perspective. We present typical cases of 3D scenes with web annotations on the 3D Web using a prototype implementation to verify the usefulness of our approach.

CR Categories: I.3.3 [Computer Graphics]: Three-Dimensional Graphics and Realism—Display Algorithms; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Virtual Reality

Keywords: webizing, 3D Web, user experience, annotation, HTML5

* e-mail: [email protected]
† Corresponding author. e-mail: [email protected]
‡ e-mail: [email protected]

1 Introduction

With the ongoing development of 3D Web technologies, creating and manipulating 3D objects in standard web browsers without plug-ins is now possible, as in traditional standalone native applications. 3D objects are handled as embedded objects on web pages, like images, video, and scalable vector graphics (SVG). There are two approaches to integrating 3D objects into a web document. The first is an imperative solution such as WebGL [Parisi 2012] or Three.js [Dirksen 2013], which creates 3D objects through procedural APIs. The second is a declarative 3D method such as X3DOM [Behr et al. 2009] or XML3D [Sons et al. 2010], which adds 3D objects to scene graphs as part of the HTML document. A declarative approach integrates with web technologies such as cascading style sheets (CSS) for styling and the document object model (DOM) for manipulation. However, although declarative 3D objects are physically integrated into web pages, the 3D objects still have a 3D layout separate from that of the other HTML elements on the web page, and the DOM elements in the HTML document have a separate rendering context.
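As an illustrative sketch (not from the paper), a declarative 3D object can sit directly in the HTML DOM; element and attribute names below follow X3DOM conventions:

```html
<!-- Illustrative sketch: a declarative X3DOM-style sphere embedded in HTML.
     The <x3d> element is part of the DOM, so it can be styled with CSS and
     manipulated with ordinary DOM scripting. -->
<body>
  <x3d width="400px" height="300px">
    <scene>
      <shape>
        <appearance>
          <material diffuseColor="0.2 0.4 1"></material>
        </appearance>
        <sphere radius="1"></sphere>
      </shape>
    </scene>
  </x3d>
</body>
```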

Despite the increasing number of user experiences related to the 3D Web, methods to add annotations to 3D objects are limited in current declarative 3D Web integration models [Behr et al. 2009; Sons et al. 2010]. For example, when a user adds a comment (i.e., an annotation) to a 3D object (i.e., a target) on the Web with the intention of modifying the 3D object, the user needs to create an additional 3D object for the annotation as part of the 3D scene where the selected target 3D object resides. In this case, the user has limited means of adding a comment using the existing web resources supported by the current 3D Web integration models. Furthermore, it is not easy to share user-generated content such as annotations or comments in a 3D context on the Web. On the other hand, if the user adds a comment to a web page in which 3D objects are embedded, as is widely done on social network sites, the user has the advantage of using existing media and application library resources on the Web. In such a case, however, it is difficult to determine the relationship between the target 3D object and the annotation because they have separate 3D layouts on the web page.

People create and share content easily in the web environment, and numerous resources already exist on the Web. Web technologies support CSS Transforms for the 2D and 3D transformation of HTML DOM objects [Fraser et al. 2013]. DOM objects have 2D or 3D positions in the HTML layout, but they are overlaid on the screen because the viewpoint of the user's perspective on the HTML document is fixed. In addition, CSS3 is ill-suited to creating 3D objects within the spatial reference model of current web pages.
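For instance, CSS Transforms can position an ordinary HTML element in 3D (a hypothetical fragment; note that the page's viewing perspective itself stays fixed, which is the limitation described above):

```html
<!-- Hypothetical fragment: a 3D-transformed DOM element. The element gets a
     3D position, but the viewer's perspective over the page does not change. -->
<div style="perspective: 600px;">
  <p style="transform: translate3d(40px, 0, 120px) rotateY(30deg);">
    An annotation positioned with CSS 3D transforms
  </p>
</div>
```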

In this paper, we propose a webizing method that integrates the 3D layout of a scene graph, using declarative 3D assets to describe 3D objects, with that of the web resources describing user annotations of those 3D objects. All DOM elements (i.e., both 3D objects and HTML elements) have a 3D position and transform on the 3D Web. The webizing method uses the open annotation data model [Sanderson et al. 2013] to define the relationship between 3D objects and HTML elements so that they share the rendering context in the 3D space and layout. Any HTML element can refer to a target 3D object through the web annotation model, and the user experience is embedded into the 3D Web. Consequently, user experiences such as annotations and comments on 3D Web content are easily created and shared. In this paper, "3D experience" refers to 3D assets and their annotations within a 3D context on the 3D Web.

The remainder of this paper is structured as follows. We review related work in Section 2, after which we introduce the proposed webizing method for integrating 3D objects and HTML elements on the 3D Web in Section 3. We then explain the prototype implementation in Section 4. We present experimental results that verify the proposed method and discuss lessons learned in Section 5. Finally, we conclude with a summary and an outlook on future work in Section 6.

2 Related Work

3D content has been increasingly employed in web applications in various areas such as design and manufacturing [Beier 2000], virtual clothing [Chittaro and Corvaglia 2003], and virtual and augmented reality museum exhibitions [Wojciechowski et al. 2004]. Hence, rendering frameworks and run-time architectures for 3D objects on the Web have emerged, such as FreeWRL [Stewart 1998] and a few others [X3D 2005].

Berners-Lee [1998] noted that webizing is a means of bootstrapping the Web with a large amount of legacy information. Because creating 3D interactive content on the HTML5 platform is complex, 3D graphics should leverage web technologies. Jianping and Jie [2010] compared the principles of several models for creating 3D content that runs in a web browser without special plug-ins. 3D Web technologies such as X3DOM and XML3D leverage existing web technologies to render 3D objects in web browsers and employ modern web technologies such as CSS3, Ajax, and DOM scripting, much as the SVG integration model does for 2D graphics on the Web [Andronikos et al. 2014]. X3DOM [Behr et al. 2009; Behr et al. 2010] is a JavaScript-based open-source framework for directly integrating X3D nodes into HTML5 DOM content. XML3D [Sons et al. 2010] is an extension to HTML5 that describes interactive 3D content as part of a web page using the DOM. Jankowski et al. [2013] presented declarative 3D principles, current approaches, and research agendas. Declarative 3D approaches use web technologies such as CSS for styling and JavaScript for interaction. However, 3D assets remain a missing major media type in current web standards, and existing 3D Web integration approaches lack a method for annotating the user experience directly onto 3D objects using web technologies.

In order to connect user annotations to 3D objects, Jankowski and Decker [2013] introduced a dual-mode user interface with different integration-level modes (a hypertext mode and a 3D mode), between which the user can switch at any time. In the hypertext mode, a 3D scene is embedded in hypertext and the user performs simple hypertext-based interactions. In the more immersive 3D mode, the hypertextual annotations are placed within the 3D scene. Gatto and Pittarello [2014] proposed a system for creating stories based on a repository of annotated 3D worlds. Flotynski and Walczak [2013] proposed a semantic markup schema for the 3D Web, and Ahn et al. [2013] introduced a system that annotates 3D objects with sensor web information in an augmented reality world. Although these previous approaches include annotated user experiences in 3D worlds, they are limited with regard to sharing the 3D layout context for rendering or using existing web resources.

With regard to web technologies, the open annotation data model specifies an interoperable framework for creating associations between related resources using annotations, without requiring changes to the original content on the Web [Sanderson et al. 2013]. Web annotations of content are now emerging as first-class objects on the Web [Ciccarese et al. 2013]. However, the annotation model on the Web has only been discussed with regard to relationships between content, without considering the visualization and layout of the related content. Specifically, the annotation model lacks the visual integration of 3D objects and annotations in a 3D spatial context. In this paper, the proposed method uses a web annotation model to declare the relationship between the user experience and 3D objects, rendering them based on this relationship so that they share the layout in a 3D context.

Evans et al. [2014] classified existing browser-based 3D rendering approaches by their level of declarative behavior, comparing criteria such as an inbuilt scene graph, a customisable pipeline, and a standards basis. However, previous works lack an annotation model for the 3D Web. Table 1 compares these previous approaches with the proposed method with respect to web annotation support. The proposed method provides web annotations to declare the relationship between a 3D target object and HTML annotation elements so that they share the 3D layout context on the 3D Web using web technologies.

Table 1. Comparison of approaches to browser-based 3D rendering

  Approach                              Web annotation*
  ----------------------------------------------------
  X3D [Evans et al. 2014]               No
  X3DOM                                 No
  XML3D                                 No
  CSS Transform                         No
  Three.js                              No
  Proposed method                       Yes

  *Web annotation: declaration of a relationship between a 3D target object and an HTML annotation element.

3 Webizing 3D Experience

3.1 Integration of HTML and 3D Web

Declarative 3D objects are embedded in HTML DOM elements but have a separate layout in a 3D context in a web environment. Figure 2 shows examples of 3D Web integration in which a 3D scene is integrated into an HTML document [Jankowski and Decker 2013]. In Figure 2(a), a 3D scene is embedded into an HTML document as just another media type on the web page; annotation text and arrows for 3D objects are located inside the 3D scene. Figure 2(b) shows another example, in which annotated HTML elements have user interfaces (UIs) separate from the 3D scene. However, the perspective of the HTML UI is not transformed along with the changing perspective of the 3D scene, because the layout of the HTML UI is fixed as the 2D layout of page media. Thus, it is difficult to determine the relationship between target 3D objects and HTML annotations.


Figure 2: Integration of the 3D Web: (a) 3D object on the Web and (b) a 3D object with HTML annotations [Jankowski and Decker 2013]

The Declarative 3D for the Web Architecture W3C Community Group (Dec3D) proposed feasible technical solutions to easily add interactive high-level declarative 3D objects to the HTML DOM [W3C Community Group 2013], such as X3DOM [Behr et al. 2009] and XML3D [Sons et al. 2010]. Declarative 3D is part of the HTML document and is capable of being integrated with the DOM. It also features a scene graph and has a high level of platform interoperability [Evans et al. 2014]. The declarative 3D approach provides annotation methods. In the 3D-Scanned CH Model with Metadata[1], an annotation is another 3D object contained in the 3D scene. A user clicks on the 3D object to annotate it, but the annotation object carries only position information; to create a meaningful annotation, the user must modify the 3D scene. Another example is Component Explorer[2], which uses an image to describe a component of a CAD model of a scene. To modify an annotation of the scene, the user must create a new image media resource, which is inconvenient. Previous declarative 3D and HTML integration approaches use CSS Paged Media [Grant et al. 2013] on the Web, as shown in Figure 3(a). HTML elements and 3D objects have separate layouts on a 2D web page. On the other hand, the proposed method uses CSS Place Media [Ahn et al. 2014], which supports the rendering of an HTML document as a 3D volumetric medium, as shown in Figure 3(b).

Figure 4 compares the transforms of the previous integration method of declarative 3D and HTML, shown in Figure 3(a), and the webizing 3D method, shown in Figure 3(b). The previous method uses the X3D transform for 3D objects and the CSS 3D transform for HTML elements; however, the two transforms are performed separately from each other, as shown in Figure 4(a). To determine the relationship between the 3D object and the annotation object, the camera perspective and the 3D layout context should be shared, as shown in Figure 4(b). The target 3D object and the annotated object then share the position origin and are transformed together.


Figure 3: Comparison of previous and proposed approaches: (a) declarative 3D and HTML integration uses CSS Paged Media on the 3D Web and (b) Webizing 3D uses CSS Place Media on the 3D Web


Figure 4: Transform comparison of previous and proposed approaches: (a) integration of declarative 3D and HTML and (b) proposed Webizing 3D

The 3D Web should leverage existing web technologies and media resources to describe experiences with 3D scenes and objects, so that user experiences on the 3D Web can be created and shared easily. HTML elements describing user experiences and 3D objects should share their layouts in a 3D context so that a scene graph with annotations renders appropriately. When the camera perspective of the scene changes, 3D objects and HTML elements are transformed by declarative 3D transforms and CSS 3D transforms, respectively, within the same 3D layout. Figure 5 shows an overview of the proposed webizing method for annotating user experiences on 3D objects. When a user declares a web annotation on a 3D target object using an HTML element, the 3D object and the web annotation are related in the scene graph. The annotation HTML element inherits the transform origin of the 3D target object so as to share the layout in the 3D context. HTML elements without web annotations in an HTML document keep their separate position and layout information. The user experience denoted by HTML elements can be shared and searched by existing web technologies and can be rendered in the shared layout of the 3D context; i.e., it can be augmented on 3D objects in the scene, as shown in Figure 5.
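The inheritance of the transform origin can be sketched as a simple composition: the annotation's world position is the target object's origin plus the annotation's own translate offset. The helper below is hypothetical (the actual renderer would compose full transform matrices, including rotation and scale), and the coordinate values are illustrative assumptions:

```javascript
// Hypothetical sketch: an annotation inherits its 3D target's origin,
// then applies its own translate offset. The real renderer would compose
// full 4x4 transform matrices (rotation and scale included).
function annotationWorldPosition(targetOrigin, translate) {
  return targetOrigin.map((v, i) => v + translate[i]);
}

// e.g. a target at [60, 0, 100] with an annotation translate of [21, 0, 50]
// places the annotation at [81, 0, 150].
const pos = annotationWorldPosition([60, 0, 100], [21, 0, 50]);
```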

To determine the relationship between a 3D object and a web annotation element, 3D assets and HTML elements are integrated as HTML elements into the 3D Web to share the rendering layout in a 3D context, rather than 3D objects being physically integrated into the Web as just another media type on the web page. In this paper, we use X3D to declare 3D objects in HTML documents so as to integrate 3D objects with web annotations on the 3D Web. X3D is an ISO standard [X3D 2004] that describes the abstract functional behavior of time-based, interactive 3D multimedia information. This declarative 3D approach is well suited to integrating 3D content with web technologies and allows 3D content to be indexed. In addition, it is easy to declare the web annotation relationship because 3D objects in an HTML document are selectable by a CSS selector [Çelik et al. 2011], as are other DOM elements.

Figure 5: Overview of webizing 3D experience with annotation

3.2 Semantic Annotation on 3D Web

In order to annotate the user experience with 3D objects on the 3D Web, a method to declare the relationships between targets and annotations is necessary. In this paper, we propose webizing experience annotation on the 3D Web based on the open annotation data model [Sanderson et al. 2013]. The proposed method uses a semantic annotation schema based on Schema.org [Google et al. 2011], a common set of schemas for structured data markup on web documents, such as the Resource Description Framework in attributes (RDFa) [Herman et al. 2013] and JSON-LD [Sporny et al. 2014], created by major web search providers. The proposed method defines a new schema known as AnnotationObject for annotations on a 3D target object, as given in Table 2; it is a more specific type of MediaObject. A MediaObject is an image, video, or audio object embedded in a web page. An AnnotationObject is a media object that annotates the user experience on a 3D target object on the 3D Web.

Table 2. Webizing annotation schema for the 3D Web experience

  Property     Range     Description
  -----------------------------------------------------------------
  target       URL       3D target object's DEF attribute value of
                         the annotation
  translate    Doubles   Defines a translation
  rotate       Doubles   Defines a rotation (degrees)
  scale        Doubles   Defines a scale transformation
  contentURL   URL       URL of an external web page to annotate on
                         the 3D target object

To use a MediaObject for annotation, additional properties are required. The target property of an AnnotationObject refers to an identifier of the target object, which is the DEF attribute of an X3D object. Transform properties such as translate, rotate, and scale are based on the definitions of W3C CSS Transforms [Fraser et al. 2013]. The origin of an AnnotationObject in a 3D layout on a web page is determined by the origin of the 3D target object. The properties of the AnnotationObject shown in Table 2 are declared in the HTML document, and each webized annotation is described by a div tag, uniquely identified by its id attribute. The contentURL property is an optional property used to annotate an existing web page on a 3D target object.

Figure 6 depicts an example of a webized experience with 3D objects described by X3D and RDFa in an HTML document. An X3D object has a sphere and a 3D position in a shared layout on the 3D Web. The object is identified by the DEF attribute in the X3D declaration, and the identifier 'earth' of the 3D target object is used by the web annotation to refer to the target object. The web annotation element in Figure 6 is identified by the value 'earth_annotation' of its id attribute, and it refers, via the target property, to a shape that has the 'earth' value for the DEF attribute in the X3D scene. The transform properties (i.e., translate, rotate, and scale) based on CSS Transforms are described by the property attribute in div tags. The origin of the AnnotationObject is determined relative to the target object. The annotation position is '81, 0, 150' as a result of both the X3D transforms and the CSS transforms. HTML elements in the p tag of the 'earth_annotation' annotation are rendered on the 3D target object. The annotation has a text description and an embedded online video streaming application, in this example YouTube. The rendering result is shown in Figure 1(a). Although the X3D objects and HTML annotation elements are declared separately, the scene graph renderer, as will be explained in Section 4, builds a single integrated scene graph according to the relationship between targets and annotations denoted by RDFa. If the X3D document is separate from the HTML document and is imported by an inline node, personalized annotation of the 3D objects in the same scene is also possible.

Figure 6: Example of an HTML document of a webized 3D experience annotated with X3D
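A minimal sketch of a webized annotation document along these lines (the identifiers 'earth' and 'earth_annotation' and the property names come from the text and Table 2; all other values, positions, and URLs are illustrative assumptions):

```html
<!-- Illustrative sketch only; not the paper's actual Figure 6 listing. -->
<x3d>
  <scene>
    <transform translation="60 0 100">
      <!-- the 3D target object, identified by its DEF attribute -->
      <shape DEF="earth">
        <appearance><imageTexture url="earth.jpg"></imageTexture></appearance>
        <sphere radius="1"></sphere>
      </shape>
    </transform>
  </scene>
</x3d>

<!-- web annotation referring to the target via the RDFa target property;
     the target shape is selectable like any DOM element,
     e.g. document.querySelector('shape[DEF="earth"]') -->
<div id="earth_annotation" vocab="http://schema.org/" typeof="AnnotationObject">
  <span property="target" content="earth"></span>
  <span property="translate" content="21 0 50"></span>
  <p>
    The Earth is the third planet from the Sun.
    <iframe src="https://www.youtube.com/embed/..."></iframe>
  </p>
</div>
```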

4 Prototype Implementation

We implemented a prototype system using the proposed web annotation method. Figure 7 shows an overview of the prototype system architecture used to render an X3D object and an HTML document on the 3D Web. The prototype implementation consists of two parts: the content store provider and the scene graph renderer. The content store provider manages 3D assets (e.g., X3D scenes) and HTML documents. The scene graph renderer builds a 3D scene graph from X3D and HTML documents and renders it on the 3D Web. The scene graph renderer is implemented on top of Three.js [Dirksen 2013] to render and transform 3D objects. In addition, the scene graph renderer has a module that interprets X3D documents and determines the positions of annotation elements in HTML documents based on the relationships between AnnotationObjects and X3D objects.

Figure 7: Overview of the prototype system architecture

The X3D document, a container of virtual objects on the 3D Web, is either separate from or embedded into an HTML document holding the annotation objects. The content provider sends X3D and HTML documents in response to an HTTP GET request from a web browser. The scene graph renderer, implemented in JavaScript, processes the response documents. The renderer builds an HTML DOM tree for 3D objects as part of the HTML document and extracts the annotation relationship from the web annotation denoted by RDFa in the HTML document. The relationship information is stored in the annotation DOM, as shown in Figure 7. The scene graph integrator builds an integrated 3D scene graph to render 3D and annotation objects with a shared 3D layout context. Upon changes in the camera perspective, the integrated 3D scene graph is rendered by each renderer, in this case an X3D renderer and a CSS 3D renderer. The X3D renderer draws X3D objects on the 3D Web and transforms the 3D objects according to changes in the attributes of the 3D objects' DOM elements. HTML elements for web annotations are rendered and transformed by the CSS 3D renderer. The renderers use a shared camera perspective to maintain the 3D layout context on the 3D Web.

5 Experimental Results and Discussion

5.1 Experimental Results

We applied the implemented system to various uses of a 3D scene with web annotations on the 3D Web to verify the usefulness of our approach. Figure 1 depicts a solar system example of the webized 3D experience with web annotations using the prototype implementation shown in Figure 7. This example uses web annotations to provide information about the solar system and its planets to educate students. An X3D scene declares the planets of the solar system, with the sun located at the center of the scene. The planets, 3D sphere objects in the 3D scene, have image textures. The circles around the sun indicate each planet's orbit of revolution. Each annotation refers to a planet, as shown in Figure 6, and the annotations are described by HTML elements. When a user selects a planet, the web annotation related to the selected planet is shown in the 3D scene. The annotation object contains text with overview information and a video streaming application related to the planet. A user can change the position, orientation, or perspective of the camera from Figure 1(a) to Figure 1(b) by interacting with the scene on the 3D Web. A 3D object and an annotated object are synchronized in the scene because the annotation is linked to the target both in the 3D layout context and in the context of the annotated content. Figure 8 shows the separate rendering results of the 3D objects and annotated objects in the scene of Figure 1(b). The X3D renderer draws 3D objects as shown in Figure 8(a), but it is difficult to use existing sophisticated web resources and libraries to describe the scene within the capability of an existing X3D renderer. In contrast, the CSS 3D renderer transforms the HTML elements of web annotations, as shown in Figure 8(b). The scene graph renderer explained in Figure 7 transforms and draws 3D and annotated objects on the 3D Web by sharing the 3D layout context of Figures 8(a) and (b).

Another example of a webized 3D experience with web annotation is shown in Figure 9. The 3D model is an architectural CAD model of a house. The client and architect exchange their opinions about the 3D model on the 3D Web. To add an opinion about the model using a web annotation, a user clicks a button as shown in Figure 9(a) and chooses a target 3D object on the 3D Web as shown in Figure 9(b). A TinyMCE WYSIWYG editor

Figure 8: Separate rendering results of the 3D Web: (a) 3D object rendering and (b) web annotation rendering on the 3D Web


Figure 9: User experience annotation on a 3D model

Figure 10: Transforming a web annotation of the user experience on a 3D model

Figure 11: Comparison of HTML element interaction on the 3D Web: (a) X3D MovieTexture example[3] and (b) webized annotation on the 3D Web using Video, YouTube, and the video tag of HTML

[Moxiecode Systems AB 2003] for declaring a web annotation about the target object then appears on the 3D Web, as shown in Figure 9(c). Metadata of the semantic annotation, based on the schema given in Section 3.2, refers to the target 3D object via the target property of the AnnotationObject. When the user finishes editing the annotation content, the annotation object is attached to the 3D target object as shown in Figure 9(d). The user can choose an annotation URL option to annotate external web pages or web service applications on the target object. Figure 9(e) shows a map service application example indicating the location where the house will be constructed.

To change the transform properties of the annotation object, the user clicks a transform button as shown in Figure 9(f) and Figure 10(a), and then chooses an annotation object to transform as shown in Figure 10(b). The prototype implementation provides a transformation UI, as shown in Figure 10(c). Using the transform UI, the user changes the translate, rotate, and scale properties of the annotation object in the scene. This shows that our proposed approach is suitable for various applications that annotate the user experience on the 3D Web with current standard web technologies.

5.2 Discussion

To verify the usefulness of the proposed system, we discuss the benefits of the webized 3D experience by HTML annotation on the 3D Web compared to those of the previous approaches discussed in Section 2 and Section 3.1. The proposed webizing method is based on declarative 3D for 3D objects and HTML for annotation objects, and it inherits the advantages of both: 3D objects are easy to declare on the 3D Web, and annotation objects can draw without restriction on the sophisticated media resources and libraries that already exist on the Web.

Interaction with HTML elements on the 3D Web: To use existing multimedia resources on the Web, X3D provides components such as ImageTexture and MovieTexture. The MovieTexture example [3] renders multiple video sources on the 3D Web through X3DOM, as shown in Figure 11(a). However, this approach only supports rendering the video in the scene; when a user wants to interact with the content of a 3D object, the content provider must add interaction components such as sensor nodes or buttons to control the video in the 3D scene. With our method, on the other hand, the content provider needs no additional effort to provide interaction controls and modes, because the web service application related to the video stream, as well as the video content itself, is available in the webized 3D scene as shown in Figure 11(b). Because the proposed webizing method supports web annotations on a 3D object, the content provider can use any existing web service or library for providing web content (e.g., video streaming from Vimeo or YouTube) as well as the video tag of an HTML document. Thus, the proposed webized 3D experience by HTML5 annotation leverages existing web technologies and media resources to create and share user experiences on the 3D Web.
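The contrast can be illustrated with a short sketch: attaching a plain HTML video element as an annotation brings the browser's native playback controls along for free, with no X3D sensor nodes. The function name, selector, and URL below are illustrative assumptions, not part of the prototype's actual API.

```javascript
// Illustrative sketch: annotate a 3D target with an HTML video element.
// The browser supplies play/pause/seek controls, so no extra interaction
// components need to be authored in the 3D scene.
function attachVideoAnnotation(doc, targetSelector, srcUrl) {
  const video = doc.createElement("video");
  video.src = srcUrl;    // could equally be a Vimeo/YouTube embed
  video.controls = true; // browser-native playback UI for free
  const target = doc.querySelector(targetSelector);
  target.appendChild(video); // rendered in the target's 3D layout context
  return video;
}
```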

Rendering 3D objects with annotations on the 3D Web: Although declarative 3D objects are physically integrated into web pages, they remain separated from HTML elements from the perspective of the 3D layout of a web page, as shown in Figure 12(a). The declarative method provides only simple annotation mechanisms, such as adding an image or a 3D object to the scene, and it is difficult to determine the semantic relationship between 3D objects and annotation objects even when that relationship can be inferred visually. The proposed webizing method, in contrast, supports a 2D layout for annotation UI composition together with a 3D layout for 3D objects and annotation objects. The 3D objects and annotation objects share the camera perspective and the 3D rendering layout context of the scene, and a 2D layout element can be used on the scene simultaneously with the 3D layout of annotations to present overview information, as shown in Figure 12(b).
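The benefit of the shared context can be made concrete with a toy projection: when the target and its annotation are projected through the same camera, their on-screen relationship is preserved as depth changes. The pinhole model below is an illustrative assumption, not the paper's actual rendering pipeline.

```javascript
// Toy pinhole projection: a camera-space point [x, y, z] (z > 0 in front
// of the camera) maps to the image plane scaled by focalLength / z.
function project(point, focalLength) {
  const [x, y, z] = point;
  const s = focalLength / z; // perspective scale falls off with depth
  return [x * s, y * s];
}

// Target and annotation share one camera, so both scale identically with
// depth and the annotation stays visually attached to its target.
const target = project([1, 1, 2], 2);       // -> [1, 1]
const annotation = project([1.5, 1, 2], 2); // -> [1.5, 1]
```

With two independent cameras (one for the 3D scene, one for the annotation layer), the two projections would drift apart as either camera moves, which is exactly the separation problem in Figure 12(a).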

6 Conclusion

In this study, we proposed a method for webizing 3D experiences by web annotation to express user experiences on the 3D Web. Our contribution is that the proposed method uses a web annotation model to declare the relationship between a user experience and 3D objects, and renders them based on that relationship so that they share a layout and camera perspective in the 3D context. In addition, the webizing method has the advantage of using the existing sophisticated media and application library resources of current web technologies on the 3D Web. The prototype implementation, based on a semantic web annotation schema, demonstrates feasibility for various applications that annotate user experiences on the 3D Web.

Most existing 3D web applications assume a hard division of production for annotating user experiences, in the manner of Web 1.0. Our examples demonstrate that the webized annotation method instead lets users produce annotations themselves, in the manner of Web 2.0. This kind of digital prosumption has proven effective in dramatically expanding web content, and we expect that 3D content with annotations can expand likewise with our approach.

The proposed webizing method determines the relationship between 3D objects and annotation objects by semantic web annotation based on web technologies, and renders the webized 3D experience by HTML5 annotation on the 3D Web. Our prototype implementation is limited to sharing the context of the 3D layout and the context of the annotated content: the occlusion between 3D target objects and annotation objects in HTML is not resolved in this paper. This is due in part to the constrained capabilities of current web browsers and to the current implementation of sharing between the HTML rendering pipeline and the X3D rendering pipeline. We expect to resolve this issue by sharing the depth buffer between the HTML and X3D rendering pipelines.

Acknowledgement

This research is supported in part by the Korea Institute of Science and Technology (KIST) Institutional Program (Project No. 2V03820).

References

AHN, S., KO, H. AND YOO, B. 2014. Webizing mobile augmented reality content. New Review of Hypermedia and Multimedia 20, 79-100.

AHN, S., YOO, B. AND KO, H. 2013. A comparative study of 3D web integration models for the sensor web. In Proceedings of the International Conference on 3D Web Technology, San Sebastian, Spain, Jun 20-22 2013 ACM, 199-203.

ANDRONIKOS, N., BAH, T., BIRTLES, B., CONCOLATO, C., DAHLSTRÖM, E., LILLEY, C., MCCORMACK, C., SCHEPERS, D., SCHULZE, D., SCHWERDTFEGER, R., TAKAGI, S. AND WATT, J. 2014. Scalable Vector Graphics (SVG) 2. World Wide Web Consortium.

BEHR, J., ESCHLER, P., JUNG, Y. AND ZÖLLNER, M. 2009. X3DOM: a DOM-based HTML5/X3D integration model. In Proceedings of the International Conference on 3D Web Technology, Darmstadt, Germany, Jun 16-17 2009 ACM, 127-135.

BEHR, J., JUNG, Y., KEIL, J., DREVENSEK, T., ZOELLNER, M., ESCHLER, P. AND FELLNER, D. 2010. A scalable architecture for the HTML5/X3D integration model X3DOM. In Proceedings of the International Conference on Web 3D Technology, Los Angeles, California, USA, Jul 24-25 2010 ACM, 185-194.

BEIER, K.-P. 2000. Web-based virtual reality in design and manufacturing applications. In Proceedings of the International EuroConference on Computer Applications and Information Technology in the Maritime Industries, Potsdam, Germany, Mar 29-Apr 4 2000.

Figure 12: Comparison of rendering 3D objects with annotations on the 3D Web: (a) X3DOM example of a 3D-scanned CH model with metadata [1] and (b) webized annotation on the 3D Web with 2D and 3D layout annotation objects

BERNERS-LEE, T. 1998. Webizing existing systems. In World Wide Web Consortium, personal notes on: Design Issues - Architectural and Philosophical Points, last change date: March 9, 2010.

ÇELIK, T., ETEMAD, E.J., GLAZMAN, D., HICKSON, I., LINSS, P. AND WILLIAMS, J. 2011. Selectors Level 3. World Wide Web Consortium.

CHITTARO, L. AND CORVAGLIA, D. 2003. 3D virtual clothing: from garment design to web3d visualization and simulation. In Proceedings of the international conference on 3D Web technology, Saint Malo, France, Mar 9-12 2003 ACM, 73-84.

CICCARESE, P., SOILAND-REYES, S. AND CLARK, T. 2013. Web Annotation as a First-Class Object. Internet Computing, IEEE 17, 71-75.

DIRKSEN, J. 2013. Learning Three.js: The JavaScript 3D Library for WebGL. Packt Publishing.

EVANS, A., ROMEO, M., BAHREHMAND, A., AGENJO, J. AND BLAT, J. 2014. 3D graphics on the web: A survey. Computers & Graphics 41, 43-61.

FLOTYNSKI, J. AND WALCZAK, K. 2013. Microformat and Microdata schemas for interactive 3D web content. In Proceedings of the Federated Conference on Computer Science and Information Systems, Kraków, Poland, Sep 8-11 2013 IEEE, 549-556.

FRASER, S., JACKSON, D., O’CONNOR, E. AND SCHULZE, D. 2013. CSS Transforms Module Level 1. World Wide Web Consortium.

GATTO, I. AND PITTARELLO, F. 2014. Creating Web3D educational stories from crowdsourced annotations. Journal of Visual Languages & Computing 25, 808-817.

GOOGLE, YAHOO, MICROSOFT, YANDEX AND W3C, 2011. Schema.org [online]. http://schema.org [Accessed May 30 2014].

GRANT, M., ETEMAD, E.J., LIE, H.W. AND SAPIN, S. 2013. CSS Paged Media Module Level 3. World Wide Web Consortium.

HERMAN, I., ADIDA, B., SPORNY, M. AND BIRBECK, M. 2013. RDFa 1.1 Primer - Second Edition. World Wide Web Consortium.

JANKOWSKI, J. AND DECKER, S. 2013. On the design of a Dual-Mode User Interface for accessing 3D content on the World Wide Web. International Journal of Human-Computer Studies 71, 838-857.

JANKOWSKI, J., RESSLER, S., SONS, K., JUNG, Y., BEHR, J. AND SLUSALLEK, P. 2013. Declarative integration of interactive 3D graphics into the world-wide web: principles, current approaches, and research agenda. In Proceedings of the International Conference on 3D Web Technology, San Sebastian, Spain, Jun 20-22 2013 ACM, 39-45.

JIANPING, Y. AND JIE, Z. 2010. Towards HTML 5 and interactive 3D graphics. In Proceedings of the International Conference on Educational and Information Technology, Chongqing, China, Sep 17-19 2010 IEEE, V1-522-V1-527.

MOXIECODE SYSTEMS AB, 2003. TinyMCE Homepage [online]. http://www.tinymce.com [Accessed Feb 14 2015].

PARISI, T. 2012. WebGL: Up and Running. O'Reilly Media.

SANDERSON, R., CICCARESE, P. AND SOMPEL, H.V.D. 2013. Open Annotation Data Model. World Wide Web Consortium.

SONS, K., KLEIN, F., RUBINSTEIN, D., BYELOZYOROV, S. AND SLUSALLEK, P. 2010. XML3D: interactive 3D graphics for the web. In Proceedings of the International Conference on Web 3D Technology, Los Angeles, California, USA, Jul 24-25 2010 ACM, 175-184.

SPORNY, M., LONGLEY, D., KELLOGG, G., LANTHALER, M. AND LINDSTRÖM, N. 2014. JSON-LD 1.0. World Wide Web Consortium.

STEWART, J.A., 1998. FreeWRL [online]. http://freewrl.sourceforge.net [Accessed Mar 27 2015].

W3C COMMUNITY GROUP, 2013. Declarative 3D for the Web Architecture [online]. https://www.w3.org/community/declarative3d [Accessed Mar 26 2015].

WOJCIECHOWSKI, R., WALCZAK, K., WHITE, M. AND CELLARY, W. 2004. Building Virtual and Augmented Reality museum exhibitions. In Proceedings of the international conference on 3D Web technology, Monterey, California, USA, Apr 5-8 2004 ACM, 135-144.

X3D. 2004. ISO/IEC 19775:2004 Extensible 3D (X3D).

X3D. 2005. ISO/IEC 19776:2005 X3D encodings (XML and Classic VRML).

Notes

[1] http://examples.x3dom.org/v-must/Summerschool/index.html

[2] http://examples.x3dom.org/CAD_Explosion/index.html

[3] http://examples.x3dom.org/example/x3dom_video.xhtml
