Systematic Enablers for Advanced Product Design Support



Systematic Enablers for Advanced Product Design Support

Eliab Z. Opiyo, Imre Horváth, Wilhelm F. van der Vegte, Zoltán Rusák, Regine W. Vroom, Jouke C. Verlinden, Yu Song, Joris S. M. Vergeest, Bart H. M. Gerritsen

Delft University of Technology, Delft, the Netherlands

Contents

1.1 Introduction
1.2 Review of the State of the Art and Related Works
1.3 An Overview of Research Activities, Approaches, and Methods
1.4 A Concept for a Spatial Interactive Visualization Environment
1.5 Realization of Advanced Design Support Enablers
1.6 Ongoing Research
1.7 Summary and Conclusions
References

1.1 Introduction

Product design is a complex, multitask process that requires a wide range of expertise, knowledge, and designers' creativity. It includes several interrelated activities such as studying the feasibility of product concepts, developing design concepts, generating alternative product or system architectures, defining product or system interfaces, selecting materials and production processes, refining design concepts, defining part geometry, specifying tolerances, building and testing experimental prototypes, evaluating the usability and acceptance of design concepts, and analyzing aesthetics, ergonomics, reliability, performance, and cost—see, e.g., [1], [50], [54], and [56]. Effective methods and tools are needed to enable designers and engineers to perform these activities, to support creativity, and to ensure the quality and success of products. One of the main challenges is that the needs on the ground and the requirements for design support tools have been changing over the years, partly owing to the ever-changing nature and increasing complexity of products. Overall, there is a demand for new methods and tools that meet the needs of today's designers and engineers in developing present-day products, and that match today's technological advancements in computing, computer graphics, and communications.

Our research group has been involved in multiple research activities over the past five years, most of them focusing on creating novel theoretical solutions as well as on developing methods and computational algorithms to support designers and engineers in executing activities across the design interval. The strategy in developing new tools has been to explore the relevance and appropriateness of emerging technological solutions and to take advantage of technological advancements in various areas of computing, computer graphics, and communications. This has been done with a view to providing contextualized solutions to various problems in product design and to enabling designers and engineers to quickly and reliably externalize their innovative ideas when designing. The new theoretical and technological solutions presented in this chapter contribute to the efforts continuously being invested by many researchers around the globe to develop effective and efficient methods and tools that support designers and engineers in performing product development activities and improve the quality of products. While several factors can contribute to reducing the quality of products, as well as designers' or engineers' performance and productivity, the limited capabilities of the applied tools and methods are often the key factor that aggravates these problems. Accommodating new technological solutions in the design process is likely to change how designers and engineers go about their daily professional work routines. For instance, tools with more advanced functionalities (e.g., for enabling interactive 3-D visualization) and with enhanced capabilities (e.g., more processing power, graphics performance, and storage capacity) are likely to affect how designers and engineers accomplish tasks in the workplace, how they communicate and share information and product data, the way they collaborate, and how businesses streamline processes.

Overall, the product development landscape has been changing and evolving continuously over the years. For instance, driven by the desire to improve human well-being, new complex products with advanced functionalities are continuously emerging, and various technological solutions continue to advance. These trends of evolution are evident in all types of products. In general terms, new principles, methods, and tools are needed in the development of newly emerging products, which typically have new features. The developers of new design support enablers also need to take advantage of continuous advances in the areas of computing, computer graphics, and communications. They can benefit hugely from the ever-improving capabilities of computing devices, displays and visualization systems, and networks.

This chapter presents several novel contextualized solutions we developed to address specific advanced product design needs. The chapter starts by reviewing the state of the art and related works in the subsequent section and by giving an overview of how research activities in our research group have evolved over the years. It then briefly describes the approaches that have been adopted and used in the execution of the research activities. Afterward, it presents a spatial visualization concept, which was used as the basis for developing the pilot implementation of the environment that served as the experimentation platform for most of the developed novel theoretical and methodological solutions. Finally, several scientific research challenges that have been dealt with over the past five years are presented and discussed.

1.2 Review of the State of the Art and Related Works

Spatial interactive visualization and designing in 3-D workspaces have been the focus of many design and computer science researchers over the past several decades. Much effort has been directed to advancing the capabilities of 3-D visualization systems and to developing appropriate interaction methods and user interfaces. This section briefly gives an overview of the advances and related works on these fronts. Recent advances in computing and computer graphics have led to the creation of a large variety of 3-D visualization systems with a wide range of capabilities. As a result, the spatial 3-D display technology scene is currently characterized by a large variety of competing 3-D display concepts offering a wide variety of spatial visualization solutions. Various techniques (such as rotating or oscillating screens, lenses, or mirrors; emitting light; and projecting images using CRTs, light valves, or lasers) are being used to create spatial display systems with different configurations. There are three main types of 3-D displays: (a) stereoscopic displays (which use various methods to convey separate images to each eye to create a 3-D illusion), (b) autostereoscopic displays (which display 3-D images that are viewable without the need to wear 3-D glasses, goggles, helmets, or other stereo-view-enhancing devices), and (c) volumetric displays (which create 3-D images in three-dimensional physical space via emission, scattering, or relaying of illumination from well-defined regions in space). As for the configurations of 3-D visualization systems, the most common ones are: (a) dome-shaped volumetric 3-D displays (which use swept-volume techniques, i.e., mechanical mechanisms that sweep a 2-D image or envelope through a spatial volume at a frequency higher than the viewer's eye can resolve (see, e.g., [18]), and static-volume techniques (see, e.g., [33]) to represent images in 3-D space), (b) flat-screen 3-D displays (which, for example, create a projection volume composed of a physically deep stack of electrically switchable liquid-crystal scattering shutters, allowing viewers to see objects in three dimensions without viewing gear), (c) airborne 3-D displays (which create images that do not require any specific space or physical screen, using techniques such as optical and electro-holography—see [6], [21], [35]), and (d) immersive visualization systems (such as virtual reality (VR) or cave automatic virtual environment (CAVE) technologies, in which viewing gear such as head-mounted devices (HMDs) and active-stereo shutter glasses is used) [8], [11].

It should be noted that the configuration of a 3-D visualization system has a huge influence on its acceptability in various application areas, including engineering design. Some 3-D visualization technologies are already being applied successfully, e.g., in the entertainment and advertisement industries, and they are increasingly gaining acceptance in other application areas, including medicine, military training, and engineering. In engineering, attempts are being made to use these visualization technologies, e.g., to improve product design and development processes. There are, however, still many hardware-, software-, and user-interface-related challenges that must be overcome to make these technologies satisfy the needs of spatial engineering design applications. The major downside of the existing technologies, with regard to supporting design in spatial workspaces, is the lack of sufficient interactivity and suitable user interfaces. Some researchers argue that these technologies will remain inferior as serious spatial work environments unless they are equipped with proper user interfaces—see [2]. For designers and engineers, who typically work with volumetric data sets, the dream come true would be an interactive 3-D visualization technology that allows them to look into or onto 3-D virtual images of their design concepts from any direction, walk around them, interact with and manipulate the models, and see real 3-D information being displayed without using any viewing gear.
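For illustration only, the display taxonomy surveyed above can be summarized as a small data model. The class, attribute, and instance names below are our own hypothetical choices; they are not taken from the chapter or from any particular library:

```python
from dataclasses import dataclass
from enum import Enum, auto

class DisplayType(Enum):
    STEREOSCOPIC = auto()      # separate image per eye creates a 3-D illusion
    AUTOSTEREOSCOPIC = auto()  # 3-D viewing without glasses or other gear
    VOLUMETRIC = auto()        # images emitted/scattered from real 3-D space

class Configuration(Enum):
    DOME_SHAPED = auto()   # swept-volume or static-volume techniques
    FLAT_SCREEN = auto()   # e.g., stacks of switchable LC scattering shutters
    AIRBORNE = auto()      # e.g., optical and electro-holography
    IMMERSIVE = auto()     # VR/CAVE systems with HMDs or shutter glasses

@dataclass
class DisplaySystem:
    name: str
    display_type: DisplayType
    configuration: Configuration
    needs_viewing_gear: bool

# Example entries based on the survey above (attribute values are illustrative):
felix = DisplaySystem("Felix", DisplayType.VOLUMETRIC, Configuration.DOME_SHAPED, False)
cave = DisplaySystem("CAVE", DisplayType.STEREOSCOPIC, Configuration.IMMERSIVE, True)
```

A structured model like this makes the type/configuration distinction explicit and could, in principle, drive a criteria-based comparison of candidate displays.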

Numerous research attempts aiming to enhance the existing 3-D display technologies have been reported. Most of these attempts focus on certain visualization technologies and systems, and on certain application areas. The research directions have generally been twofold: (a) improving the interactivity of visualization systems, and (b) extending the capabilities of the devices. With regard to interactivity, there have been several multi-directional and multi-disciplinary research efforts directed to developing effective spatial interactive modeling and visualization methods based on natural modes of communication such as gestures, speech, and hand motions. For instance, Darken and Durost [13] describe a method to identify the dimensional characteristics of a task to ensure appropriate interaction in virtual environments. Raskar and Low [55] describe the notion of spatially augmented reality, in which users can interact naturally with images in virtual environments. Foley et al. [17] present a classification of interaction tasks that can be performed on VR systems. Balakrishnan et al. [4] explore the user interface design challenges for swept-volume volumetric displays (i.e., display systems that sweep periodically time-varying 2-D images through 3-D spatial volumes at frequencies higher than the human eye can resolve, e.g., the Felix display, http://www.felix3d.com, and the Perspecta display, http://www.actuality-systems.com). Dani and Gadh [12] report on an interaction paradigm for shape modeling in which combinations of spoken commands and hand motions are used to model shapes in a virtual reality environment. Bloomenthal et al. [7] present a prototype system named Sketch-N-Make, which employs a gesture interface that mimics familiar pencil-sketching practice to enable the user to create rough digital sketches. Peng [51] created a set of tools for users to instantiate and edit 3-D graphic elements that can be positioned directly onto the virtual model during 3-D concept design modeling, allowing early concept designs to be developed as if the designers were working directly on site. Eggli et al. [14] describe Quicksketch, a system on which solid objects can be created using gestural commands.

Furthermore, Grossman et al. [20] report on a technique for tracking the motion of the viewer's multiple fingers to provide direct gestural interaction with virtual objects through manipulations on and around a swept-volume volumetric display. Zeleznik et al. [69] describe a gesture-based interface for sketching in a 3-D scene. Kurtenbach et al. [31] present a system that allows the user to interact, for instance, by moving a digitizing stylus on the sensing grid formed on the dome enclosure surface of a 3-D volumetric display; this interaction affects the content of the volumetric display by mapping the positions and corresponding vectors of the stylus to a moving cursor within the 3-D display space. Varga [64] uses results from an experimental hand-motion detection system to argue that hand motions can play an important role in interaction and modeling, because hands offer the possibility to express 3-D shape information and gestural instructions concurrently. There has also been extensive research directed to developing interaction styles and user interfaces for VR systems (see, e.g., [13], [15], [34]) and to making them autostereoscopic, i.e., freeing viewers from HMDs and other viewing gear (see, e.g., [58]). Overall, the emerging VR interface solutions are typically multimodal, i.e., with stereo 3-D graphics, sound/voice input, and tactile/force-based interfaces, which provide a much broader link between the viewer and the VR application. This allows viewers to use different parts of their bodies (e.g., hands, voice, eyes) to interact with applications and to get feedback (visual, auditory, tactile, etc.). Apart from the above-mentioned interfaces, there are also other forms of interfaces and interfacing techniques, such as tangible user interfaces [28], multi-touch user interfaces [16], and body-based interfaces [29]. Some of these techniques have not yet been fully developed, but their potential for application in product visualization also needs to be explored.

In conclusion, it can be said that a wide range of issues regarding how to interact with virtual objects and how to navigate in a 3-D scene within the spatial workspace of a 3-D visualization system (including how fundamental tasks such as selecting, rotating, scaling, or moving an object in a 3-D scene could be accomplished; see, e.g., [4], [19]) have been investigated. It should be noted, however, that the nature of the problems dealt with in the works reported in this chapter is somewhat different. The idea was to explore how to enable designers and engineers to interact with virtual objects in the same way they do with physical objects. With this in mind, it was desirable to mimic the real natural environment, e.g., to use a visualization system that generates images that appear similar to real physical objects—e.g., images that are isolated or detached from the display system and that have a spatial representation (as opposed to, e.g., stereoscopic display images generated by focusing pairs of 2-D images on 2-D projection screens or film planes for the human brain to reconstruct). It was also desirable that multiple viewers be able to see the same 3-D image or scene from different perspectives. Images with these qualities were considered suitable for use, e.g., in product visualization—especially in activities that require spatial imagination, such as ergonomics review, product use simulation, and assembly verification. It is important to note that using these devices in the imagined way poses many unique interaction and user-interface design challenges, which we also attempted to address. These include the challenges of developing suitable interactive visualization mechanisms for managing the navigation and placement of virtual objects within spatial workspaces. The works described in this chapter build on the works discussed above, but focus more on engineering design application areas. The following section briefly describes our research activities over the past five years.


FIGURE 1.1: Areas of applications of our research.

1.3 An Overview of Research Activities, Approaches, and Methods

Our research activities and targets have been changing constantly over the years to match the pace of product and technology evolution. Attention has specifically been paid to developing new modeling, prototyping, and interaction techniques (see Figure 1.1). Much of our contribution has been to advancing design support systems by developing dedicated computational methods, algorithms, and tools to support modeling, prototyping, and interaction with virtual objects. From the application perspective, the idea has been to provide designers and engineers with effective means of executing design tasks (such as building experimental prototypes and testing or reviewing the aesthetics, performance, usability, and ergonomics of proposed product design concepts). Figure 1.2 shows the main research themes we dealt with over the past five years. It should be noted, however, that the foundational investigations started in the late 1990s with research efforts directed to addressing the challenges of supporting designers and engineers with effective methods and tools in the early stages of the design process [23]. These activities were organized under the umbrella of a research program named integrated concept advancement (ICA). Most of the methods and tools available at that time were intended to support activities in the detail design stage of the product design process. Methods and tools for supporting early design activities were lacking, and the focus was therefore on developing methods and tools to support designers and engineers in the conceptual phase of the design process. ICA was later succeeded by the advanced design support (ADS) research program, and the focus of research then shifted to developing advanced methods and tools to support designers and engineers in performing tasks that were not supported by conventional design support systems.

Recent advances in computing have made us refocus our research once again over the last five years on the themes shown in Figure 1.2.

FIGURE 1.2: Topics of scientific investigations in our research group over the last five years.

In general terms, the challenges have been threefold: (a) how to take advantage of the potential of emerging technological solutions to facilitate innovation; (b) how to handle the complexities of today's products and systems; and (c) how to develop principles for designing complex products, e.g., products that are functionally smart and self-evolutionary and that operate synergetically and ubiquitously in distributed environments. In this regard, the fundamental question has been how to support the development of products with the above-mentioned characteristics and how to offer support and services whenever and wherever required, e.g., how to enable various users (designers, members of the general public, etc.) to perform activities or receive services everywhere, anyhow, and at any time. Efforts to deal with the latter challenge have recently been extended to include exploring complex systems and products (e.g., cyber-physical systems (CPSs)) to understand their behaviors in order to create appropriate and effective principles for designing and developing them. In short, much of our recent research has specifically centered on (a) developing contextualized methods and tools to support modeling, prototyping, visualization, and interaction with virtual objects, with a view to facilitating innovation; (b) handling the complexities of developing products and systems that are omnipresent, smart, self-evolutionary, and operationally synergetic; and (c) developing approaches and principles for designing and developing complex products or systems.

One of the challenges we faced was how to ensure that the results and the conduct of the research are trustworthy. There was a need to come up with suitable research execution frameworks that would help us systematize our research activities, check the validity of our methods, and avoid, e.g., procedural biases that could have affected the repeatability of the results. The "research in design context" and "design inclusive research" methodologies (see, e.g., [24]) were therefore developed, and they have been the de facto methodologies applied to guide our research activities. These execution methodologies split research into multiple cycles of explorative and confirmative research tasks (see Figure 1.3), in which solutions are developed and refined repeatedly through cycles of explorative research (which involves aggregating knowledge, making or defining assumptions, and theorizing a problem solution) and confirmative research (which passes through three confirmation stages: justification, i.e., exploring the limits and trustworthiness of a theory; validation, i.e., assessing the applicability of the proposed solution; and consolidation, i.e., investigating how strongly and to what extent the conducted research fills the knowledge gap and whether the results can be generalized). The results (or outputs) of the research in cycle i serve as the inputs (or starting point) for the research in cycle i+1. Refer to [24] for more details of the "research in design context" and "design inclusive research" methodologies.

FIGURE 1.3: Research execution scheme. The outputs of the research in cycle i serve as the inputs (or starting point) for research in cycle i+1.
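The cyclic execution scheme of Figure 1.3 can be sketched as a simple loop in which each cycle runs an explorative phase followed by the three confirmative stages, with the output of cycle i feeding cycle i+1. This is only an illustrative skeleton; the phase functions are hypothetical placeholders of our own and do not reproduce the methodologies in [24]:

```python
# Placeholder phase functions; in practice each stands for substantial research work.
def explore(knowledge):     return knowledge + ["theory"]        # aggregate, assume, theorize
def justify(theory):        return theory + ["justified"]        # limits and trustworthiness
def validate(theory):       return theory + ["validated"]        # applicability of the solution
def consolidate(theory):    return theory + ["consolidated"]     # gap filling, generalization

def run_research(initial_knowledge, n_cycles):
    """Sketch of the explorative/confirmative research cycles of Figure 1.3."""
    state = initial_knowledge
    history = []
    for i in range(n_cycles):
        theory = explore(state)            # explorative research
        state = consolidate(validate(justify(theory)))  # confirmative research
        history.append(state)              # output of cycle i = input of cycle i+1
    return state, history
```

The essential point encoded here is the feedback: each cycle's consolidated output becomes the knowledge base for the next cycle's exploration.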

The following sections describe the various works done and present the results achieved over the past five years. In the next section, we describe a concept for an interactive spatial visualization environment, which was used as the basis for developing a pilot implementation of the visualization environment that served as the experimentation platform for most of the developed theoretical and methodological solutions.

1.4 A Concept for a Spatial Interactive Visualization Environment

In typical product development processes, designers and engineers are often confronted with complex and challenging tasks that require them to imagine and think spatially. Product concept evaluation, ergonomics review, assembly verification, and product use simulation are examples of tasks that require spatial imagination. An ideal interactive spatial product visualization enabler should therefore provide designers and engineers with means that promote and facilitate spatial thinking, visualization, and execution of tasks, just as they usually interact with physical prototypes in real 3-D scenes. Visual images are understood to be the most effective way to communicate both abstract and concrete information or ideas (see, e.g., [53]). To this end, it is important for the visual display units used in design and engineering visualization environments to display 3-D images that allow viewers to walk around the images as they do with physical prototypes. Designers and engineers typically work in teams; therefore, a spatial product visualization environment should also provide physically spatial images that allow effective collaboration and facilitate discussions. A spatial visualization environment with the above-mentioned desirable operational features would enable designers and engineers to explore design concepts, acquire knowledge, and accomplish tasks more effectively.

FIGURE 1.4: The proposed interactive 3-D visualization enabler reference scheme, showing the relationships among the interconnected sub-processes, the underlying operations, and the implied sequence of operations.

1.4.1 Reference Scheme and Equipment

The desirable characteristic features of an ideal spatial interactive product visualization enabler described above were used as the basis for creating a reference scheme of the spatial design support environment. This reference scheme was in turn used as the basis for building a prototype environment for demonstrating the applicability of various design enabling solutions. The proposed reference scheme (see Figure 1.4) consists of several advanced mechanisms for enabling and supporting modeling, interaction, visualization, and simulation. Figure 1.4 shows the relationships among the interconnected sub-processes and the underlying operations involved, as well as the implied sequencing of the operations. The key enablers needed are: (a) the virtual modeling interface, through which viewers communicate and interact with the displayed virtual objects; (b) the hand-motion detector, which detects hand-motion commands; (c) the hand-motion interpreter, which interprets the users' hand-motion points; (d) the geometric entity identifiers, which analyze the input data and identify data that relate to the geometry of the object; and (e) the procedural command identifiers, which analyze the input data and identify procedural commands. The others are: (f) the geometric data manager, which manages and processes geometric data and procedural commands; (g) the vague model constructor, which uses the input geometric data and procedural commands to generate a vague model of the object; (h) the object modeler, which transforms the vague model into a device-independent 3-D model; (i) the volumetric image generator, which post-processes the device-independent 3-D model generated by the object modeler to make it viewable on a specific 3-D visualization system; and (j) the behavior simulator, which provides tools for manipulating and simulating the data displayed on a truly 3-D volumetric display.
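As an illustration only, the chain of enablers (c)-(i) can be read as a processing pipeline from raw hand-motion input to a device-specific volumetric image. The sketch below uses simplified, hypothetical function names and data shapes of our own; it is not the actual implementation of the reference scheme:

```python
# Hypothetical, heavily simplified stand-ins for the enablers of Figure 1.4.
def interpret_hand_motions(points):
    # (c) Classify each sampled hand-motion point as shape data or a command.
    return [{"kind": "command" if p.get("gesture") else "geometry", "data": p}
            for p in points]

def construct_vague_model(geometry, commands):
    # (f)+(g) Manage geometric data and build a rough ("vague") shape model.
    return {"points": [g["data"] for g in geometry],
            "ops": [c["data"]["gesture"] for c in commands]}

def build_object_model(vague_model):
    # (h) Turn the vague model into a device-independent 3-D model (stub).
    return {"mesh": vague_model["points"], "history": vague_model["ops"]}

def process_hand_motion(points, render_for_display):
    """Illustrative data flow through enablers (c)-(i)."""
    tokens = interpret_hand_motions(points)
    geometry = [t for t in tokens if t["kind"] == "geometry"]   # (d)
    commands = [t for t in tokens if t["kind"] == "command"]    # (e)
    vague = construct_vague_model(geometry, commands)
    model = build_object_model(vague)
    return render_for_display(model)  # (i) post-process for a specific display
```

The point of the sketch is the separation of concerns: geometry and procedural commands are identified separately from the same input stream, merged into a vague model, and only made display-specific at the final rendering step.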

One of the main tasks was to explore and identify the equipment that could be used as the building blocks of the experimental spatial visualization enabler. We systematically evaluated various competing display technologies and selected the appropriate type of technology (see, e.g., [44]). This involved (a) formulating the visualization requirements for various conceptual design tasks, (b) analyzing the visualization and interaction needs and expectations of designers and engineers, and (c) investigating the relevance of various emerging 3-D visualization technologies and systems to spatial shape design. This was done to ensure that the display device of the visualization environment was selected based on formal criteria and through thorough evaluation of the available and emerging 3-D displays, rather than chosen based on highly visible attributes such as documentation or look and feel. Truly volumetric 3-D displays seemed to fulfill the spatial product visualization requirements better than other types of 3-D displays. The most appealing feature of these displays is that they are capable of displaying images that appear to occupy an actual volume of space. An evaluation of selected truly volumetric 3-D display systems was then carried out in a separate study [42], and it emerged that holographic displays fulfilled some of the key basic interactive spatial product visualization requirements. For instance, holographic displays generate 3-D images that viewers can walk around, just like walking around a physical prototype, in a wide field of view and without using viewing gear. Overall, they are somewhat capable of substituting for natural viewing. A holographic display was therefore selected and used as the visual display unit of the visualization environment. One of the most appealing features of these displays is that they generate 3-D images that appear to pop out of the flat screen; multiple viewers can therefore see 3-D images and scenes on these displays from different perspectives. They also support all depth cues, and they can reconstruct the same wavefront that a physical object would reflect [59]. A HoloVizio 128WD display device was subsequently selected and used as the main visual display unit of the experimental spatial product visualization environment (see http://www.holografika.com/). Figure 1.5 shows the main elements of the experimental truly 3-D visualization environment.

FIGURE 1.5: The elements of the experimental truly 3-D visualization environment, including a case-study holographic-display-based visual display unit.
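The formal, criteria-based display selection described in this subsection can be illustrated with a simple weighted-scoring sketch. The criteria are drawn from the requirements discussed above, but the weights and scores are invented for illustration and do not reproduce the actual evaluations in [44] or [42]:

```python
# Hypothetical weighted-scoring model for comparing 3-D display technologies.
CRITERIA_WEIGHTS = {
    "walk_around_viewing": 0.30,   # images viewable from any direction
    "no_viewing_gear": 0.25,       # no glasses/HMDs required
    "multi_viewer_support": 0.25,  # team members see the same scene
    "depth_cue_support": 0.20,     # depth cues reproduced faithfully
}

def weighted_score(scores):
    """Aggregate per-criterion scores (0-10) into a single weighted score."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

# Invented example scores, for illustration only:
candidates = {
    "holographic": {"walk_around_viewing": 8, "no_viewing_gear": 9,
                    "multi_viewer_support": 9, "depth_cue_support": 8},
    "stereoscopic": {"walk_around_viewing": 3, "no_viewing_gear": 2,
                     "multi_viewer_support": 4, "depth_cue_support": 5},
}
best = max(candidates, key=lambda name: weighted_score(candidates[name]))
```

A scheme of this kind makes the selection auditable: changing a weight or a score shows immediately how sensitive the final choice is to each requirement.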


FIGURE 1.6: Examples of images of some selected components of a product generated by a volumetric holographic display. The generated images appear to be suspended in midair.

In short, the main hardware devices that make up this visualization environment are: a HoloVizio

128WD display—the primary visual display unit, which displays 3-D virtual objects in a volume of space, see Figure 1.6 (it can be positioned vertically or horizontally, thus providing versatility of application); a set of six hand-motion and gesture-tracking cameras for tracking and detecting

hand motions; a traditional flat-screen LCD monitor, standard input devices (keyboard and mouse);

and a high performance computer. Apart from hardware devices, there are also several applications

that control the devices, manage and process information, and enable communication between the

devices. A mirror is mounted above the display to allow multiple viewers to experience 3-D views

within the otherwise limited viewing angle of 50 degrees. The display is connected to a high per-

formance computing system, which is also connected to a standard LCD display (the LCD display

serves as an auxiliary proxy output device on which non-graphical outputs and responses to users’

inputs via keyboard or mouse are displayed). The proposed reference scheme can be used as the

basis for developing visualization environments for viewing 3-D virtual objects in 3-D workspaces

as well as for presentation of design concepts. The new interfaces and the interaction techniques in-

troduced in the following sections are designed to support and facilitate interactive 3-D visualization

and manipulation of virtual models in spatial workspaces.

1.4.2 Interactive Visualization Process

The application scenario of the proposed 3-D interactive visualization enabler can be described as

follows. A group of stakeholders (e.g., designers, engineers, end-users, etc.) would, for instance,

join a product concept presentation session administered by a certain design team. The partici-

pants would stand around the visual display unit of the visualization environment, which has been

mounted horizontally as shown in Figure 1.5. Participants would be allowed to communicate and

exchange ideas freely while walking around the product model displayed on the display unit, and

to interactively visualize the product model in the workspace. It is anticipated that the designers

and engineers would use this visualization environment as a means of accomplishing some of the

activities that require space imagination such as 3-D concepts presentation and evaluation. The ap-

propriate interfaces are therefore needed to enable the designers and engineers to accomplish the

above-mentioned activities intuitively in 3-D workspace. The participants are expected to follow

the presentation and view 3-D models interactively (i.e., manipulate the models—select, pan, scale, or rotate them) while standing or walking around the display.
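The basic manipulation operations mentioned above (pan, scale, rotate) can be sketched on a simple point-based model. The following is an illustrative, self-contained Python sketch, not code from the environment described here; all names are invented:

```python
import math

def pan(points, dx, dy, dz):
    """Translate every model point by the offset (dx, dy, dz)."""
    return [(x + dx, y + dy, z + dz) for x, y, z in points]

def scale(points, factor):
    """Scale the model uniformly about the origin."""
    return [(x * factor, y * factor, z * factor) for x, y, z in points]

def rotate_z(points, angle_deg):
    """Rotate the model about the vertical (z) axis."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y, s * x + c * y, z) for x, y, z in points]

# A viewer walking around the display might trigger a rotation;
# a pinch gesture might trigger a scaling, and so on.
model = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
model = rotate_z(scale(pan(model, 1.0, 0.0, 0.0), 2.0), 90.0)
```

In a real system these operations would act on the display's scene graph rather than on raw point lists; the sketch only fixes the vocabulary of operations.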

Some preliminary investigations have been conducted as attempts to come up with suitable inter-

active visualization interfaces for spatial visualization of product models in 3-D workspaces [38],

[43]. Specifically, the idea was to develop multi-modal interfaces for interactive visualization of

3-D product models on truly 3-D volumetric displays and to investigate how the basic interactive


visualization operations such as selecting, rotating, or moving a 3-D virtual object in a 3-D scene can be

accomplished by using various interaction methods, including the natural modes of interaction such

as hand motions, gesture, and touch. The tasks involved were threefold, namely: (a) to explore and

identify the most suitable interface techniques or combinations of the most appropriate interaction

concepts; (b) to develop alternative interaction concepts and to select a suitable concept or combina-

tion of concepts for interactive 3-D visualization in 3-D workspaces; and (c) to evaluate the selected

combination of concepts (i.e., with a view to exploring how the selected interface techniques could

be applied in product visualization, discovering the problems that might be encountered in using the

selected concepts, and identifying activities that can possibly be supported). As elaborated in the

following sections, hand motions, gesture, and haptic modes of interaction were eventually selected

for further investigation.

In order to visualize a product model interactively in the proposed experimental 3-D visualiza-

tion environment as described above, a framework for interactive visualization of virtual models

generated by holographic display systems was created (Figure 1.7). There are three basic commu-

nication routes and interfacing methods which viewers can pursue to manage the navigation and

placement of virtual objects within the workspace of the display system. Apart from using the traditional graphical user interface (GUI) (which is designed to be hosted on and usable

via an auxiliary flat-screen display and operated by using a standard keyboard and mouse), view-

ers can interact with virtual models by using hand-motions commands or other natural interaction

means such as via haptic interfaces. The GUI and hand-motions interface have been implemented

and their applicability has been investigated—refer also to [43],[57]. The GUI was intended to be

a vehicle for transferring the appropriate traditional GUI-based interaction methods to volumetric

holographic displays. The expectation is that since holographic displays look just like flat-screen

displays in the way they are constructed, certain peripheral device technologies, including, e.g., the

input devices (such as mouse and keyboard) as well as other existing interaction methods used in

flat-screen-based user interface implementations could transfer well to volumetric holographic dis-

plays. The idea was also that the conventional methods and techniques that do not transfer well

would be replaced with more innovative and intuitive solutions.
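The three interfacing routes could, for example, be normalized into one command vocabulary so that GUI, hand-motion, and haptic events drive the same visualization operations. The following dispatcher is a hypothetical sketch; every name in it is invented for illustration:

```python
# Normalizes events from different interaction routes (GUI, hand motion,
# haptic) into a single set of visualization commands.
class InteractionDispatcher:
    def __init__(self):
        self._bindings = {}

    def bind(self, modality, event, command):
        """Map an (interaction route, event) pair to a command callable."""
        self._bindings[(modality, event)] = command

    def handle(self, modality, event):
        """Invoke the bound command, or return None for unbound events."""
        command = self._bindings.get((modality, event))
        return command() if command else None

state = {"angle": 0.0}

def rotate_left():
    state["angle"] -= 15.0
    return state["angle"]

dispatcher = InteractionDispatcher()
dispatcher.bind("gui", "rotate_left_button", rotate_left)  # keyboard/mouse route
dispatcher.bind("hand", "swipe_left", rotate_left)         # hand-motion route
dispatcher.handle("hand", "swipe_left")
```

The point of the design is that methods that "transfer well" from flat-screen interfaces and the more natural modes coexist behind the same command set, so a method that does not transfer well can be rebound without touching the visualization code.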

Further research is needed to determine how well the proposed interface methods work together,

as well as how effective each interface method is in supporting various interactive visualization

operations. Ideally, the selection of an interfacing method for the task at hand would depend on

various factors, including the time taken to perform a task using a given input device or method as

well as on the viewer’s own familiarity with the input device or method. An advisory mechanism

can be incorporated to provide the viewer with some recommendations or guidelines, e.g., regarding

the time or cost implications of the choices made.

1.4.3 Scientific Challenges and Areas of Improvement

Evaluation of the spatial visualization environment was conducted with a view to understanding the

challenges that users might face in using it [42]. It was found to be lacking, particularly in terms of

providing adequate, high-quality images and of supporting interactive visualization of virtual models

in 3-D workspaces. In regard to supporting product visualization, the selected visual display unit

(i.e., HoloVizio 128WD) had rather too narrow a field of view to adequately support collaborative

visualization and virtual model-based discussions involving large design teams. It was also lacking

a proper mounting mechanism (i.e., the question of how holographic visual display units should

be fixed or placed in workplaces still had to be answered). Three scenarios for mounting the dis-

play device were envisioned, namely: (a) mounting the HoloVizio 128WD display as a screen on

a wall or on a special mounting (i.e., just like flat panel displays are mounted), with all the image

writing mechanisms behind the wall; (b) suspending the HoloVizio 128WD display in midair, e.g.,

on upright bars; or (c) laying down the HoloVizio 128WD display on a workbench while hiding

the holographic image writing apparatus underneath. In the experimental system, the holographic


FIGURE 1.7: A framework for interaction with models in the proposed 3-D visualization envi-

ronment [43].

visual display unit was mounted on a workbench as shown in Figure 1.5. Overall, there were a wide

range of image quality related problems. The problems uncovered in using the experimental 3-D

visualization environment included poor image resolution, poor contrast and lighting of images, im-

age aberrations, rendering delay, distortion of images, background noise, and inability to represent

the geometry volumetrically.

As for interactivity, the main problems encountered include: lack of a proper user interface, slow response to users' actions, lack of intuitiveness, poor coordination of users' actions when manipulating

virtual objects, and inability to interact directly with virtual objects. Furthermore, some interactive

visualization operations could only be carried out via a text-based interface (i.e., with only typed

text commands used to express user actions) and viewers had to remember the commands. There

was therefore a real need to develop effective and efficient interfaces. The large amount of data in holographic models was another major shortcoming in using the HoloVizio 128WD display to visualize product models interactively. It should be noted that holographic displays generate spatial

images through a two-stage procedure that involves extensive computations performed to convert

3-D description of an object into holographic fringes, as well as optical processes in which light is

modulated by the holographic fringes. The computation process typically comprises many algorithmic procedures and involves numerous arithmetic operations across many computation steps. A

much larger quantity of image points is also required to construct a volume-filling image, see, e.g.,

[35]. This makes real-time completion of the computation processes impossible even on powerful

computational devices. We attempted to address the challenges of interactive visualization in spatial

environments as well as other challenges mentioned above.
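To illustrate why the first, computational stage is so expensive, the toy sketch below accumulates, for every hologram sample, a phase contribution from every object point, so the cost grows with the product of sample count and point count. This is a didactic simplification, not the HoloVizio pipeline; the wavelength and grid parameters are invented:

```python
import math

WAVELENGTH = 0.000633  # mm, a red laser line (assumed for illustration)

def fringe_pattern(object_points, grid, pitch):
    """Return a grid x grid table of fringe amplitudes on the z = 0 plane.

    Each hologram sample sums a cosine phase term from every object point,
    which is why the work scales as (samples) x (object points).
    """
    pattern = []
    for i in range(grid):
        row = []
        for j in range(grid):
            hx, hy = i * pitch, j * pitch
            amp = 0.0
            for (x, y, z) in object_points:
                r = math.sqrt((hx - x) ** 2 + (hy - y) ** 2 + z ** 2)
                amp += math.cos(2.0 * math.pi * r / WAVELENGTH)
            row.append(amp)
        pattern.append(row)
    return pattern

points = [(0.0, 0.0, 10.0), (0.5, 0.5, 12.0)]
fringes = fringe_pattern(points, grid=8, pitch=0.01)
```

Even this toy version performs grid² × points distance evaluations; a volume-filling image with millions of image points makes the real-time difficulty discussed above apparent.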


1.5 Realization of Advanced Design Support Enablers

Designing in 3-D workspaces requires appropriate and unique methods and tools that would enable

designers and engineers to perform tasks efficiently and conveniently. The problem is that most of

the available methods and tools have not been developed in the first place to support the designers

and engineers in performing design tasks in 3-D space. Our research efforts over the past five years

have included several attempts to develop advanced 3-D methods and tools that could be used to

enable designers and engineers to design products in 3-D space. Overall, each research effort has applied one or both of the research execution methodologies described in Section 1.3 in

the development of new solutions (i.e., in the form of theories, methods, and tools). As mentioned

earlier, attention has been paid to developing methods and tools for enabling 3-D modeling, 3-D

interactive visualization, and prototyping in the design interval. The research has been focused on

the following two broad challenges: (a) how to facilitate innovation in the design interval, and (b)

how to handle dynamic complexities of products or systems. We present the research conducted to

address these challenges in the following sections and subsections.

1.5.1 Innovation Facilitating Enablers

Research on the development of advanced design support enablers for facilitating innovation has

chiefly focused on developing techniques for (a) interactive manipulation and simulation of virtual

objects, (b) simulation of use processes, (c) interactive augmented prototyping, and (d) free-form

feature identification, recognition, and manipulation techniques. The following subsections describe

in detail the research conducted on these fronts.

Interactive Manipulation and Simulation

Real-time interactive visualization, manipulation, and simulation in a spatial workspace by using

interaction modalities such as hand motions, gestures, and speech is a difficult challenge that many

researchers have continuously been attempting to deal with in the past decades, see, e.g., [5], [37].

Little is still known, for instance, about the usability and applicability of each of these interaction

modalities in supporting design processes in virtual workspaces, or about how to manage complex

interaction that would, say, allow interaction at semantic levels or adaptation of a mode of interaction to specific local circumstances. Furthermore, it is also worth exploring what 3-D interaction techniques should be like

and how they could be implemented, for instance, to facilitate human-product, product-product, or

product-environment interactions.

In an attempt to support realistic and accurate simulation of human-product interaction, Rusak

[57] proposed a new method for interactive, real-time simulation of the grasping process. Real-

time simulation of the contact between the user and a virtual representation of the product was the

focus of this research, with attempts aimed at addressing challenges such as capturing the motion

of the user and transforming this into contact forces, simulating the contact and other physical

effects (such as deformation, motion, and gravity), and providing haptic and visual feedback. A

realistic grasping process is typically rather complex, as it characteristically involves multi-sensory

human perception. An accurate grasping simulation would typically require modeling of the entire

human sensory perception process. The goal in this work was therefore confined to developing a

simplified grasping simulation control mechanism (for demonstrating the proposed concepts and

methods) and means of (a) enabling the user to control a virtual hand model in a natural way,

(b) simulating interactively the contact phenomena for various use scenarios, (c) computing and

simulating contact forces on each individual finger during the grasping process to determine if


FIGURE 1.8: Virtual hand and object in a simulation space [57]. Fingers and complete kinematic

and dynamic hand models mimicking the actual movement of a real hand and the contact forces

applied on them (see also the corresponding video footage).

the forces are stable and the hand is accurately positioned (i.e., as required by the user), and (d)

minimizing the latency caused by the computations involved in the simulation of the processes

and user responses. Two key operational parameters, namely, the accuracy of finger position with

respect to the virtual product and the stability of the grasping movement, were important for the

success of the simulation operation. Using the proposed simulation approach involved creation

of static and dynamic hand models in spatial positions and orientations based on several marker

positions on the hand of a human user (these marker positions defined the coordinates of various

points on the user’s hand). This dynamic hand model followed the movement of the actual hand (on

which contact forces are applied). The proposed approach offers a proof of concept 3-D solution

for interactive manipulation and simulation of the grasping process. Further research is needed to

develop a grasping simulation enabler that can simulate the grasping process more accurately and

realistically.
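One of the checks listed in item (c), determining whether the per-finger contact forces are stable, can be illustrated as a static-equilibrium test. The force values and tolerance below are invented for illustration; the published simulation computes such forces from the dynamic hand model:

```python
# Check whether per-finger contact forces hold a grasped object in
# (approximate) static equilibrium against gravity.
def net_force(contact_forces, object_weight):
    """Sum the per-finger contact force vectors plus gravity on the object."""
    fx = sum(f[0] for f in contact_forces)
    fy = sum(f[1] for f in contact_forces)
    fz = sum(f[2] for f in contact_forces) - object_weight
    return (fx, fy, fz)

def is_stable(contact_forces, object_weight, tol=1e-3):
    """A grasp is (statically) stable if the net force is near zero."""
    return all(abs(c) <= tol for c in net_force(contact_forces, object_weight))

# Two opposing fingers plus a thumb supporting a 1 N object:
forces = [(0.5, 0.0, 0.4), (-0.5, 0.0, 0.4), (0.0, 0.0, 0.2)]
stable = is_stable(forces, object_weight=1.0)
```

A full stability test would also check net torque and friction-cone constraints; the sketch only shows where such a check sits in the simulation loop.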

As for the significance of the developed solutions, natural modes of interaction such as speech,

hand motions, and gestures are widely acclaimed for being most effective and intuitive modes of

communication in the design interval, see, e.g., [10]. The techniques developed to enable virtual

hand modeling and grasping simulation contribute to the efforts to advance interaction techniques,

and could be used as the basis for developing practical grasping manipulation and simulation tools.

Among the techniques that could be used are the developed finger models as well as the complete

hand models (i.e., kinematic and dynamic hand models which mimic the actual movement of a real

hand as well as the contact forces applied on them) for controlling hands in grasping simulation pro-

cesses, see, e.g., Figure 1.8. This study also provided knowledge on the ability of humans to control

a virtual hand model with their hands. Similarly, control mechanisms developed for enabling the

user to manipulate the virtual hand could also be applied as the basis for developing real-world prac-

tical tools. These include the motion control mechanisms of the virtual hand and grasping forces

based on basic kinematics and energy transfer principles (i.e., which enable the user of the system to

control the contact forces on the finger) and the mechanism that applies multi-body dynamics prin-

ciples to control the motion of the hand by using proportional-integral-derivative (PID) controllers.

Also, various proposed techniques for enabling, e.g., reconstruction of the motion of the hand based

on measured data could be transferred or adapted and reused. Obviously, achieving acceptable accuracy when moving the virtual hand in the 3-D simulation space, positioning fingers on the

grasped objects (including the accuracy of the positions of finger tips), and when controlling the


FIGURE 1.9: Resource-integrated simulation of user interaction with a product. The inherent

particle-based physics simulation models have been built by using the basic solid modeling instructions offered by Adams (http://www.mscsoftware.com/product/adams) (see also the corresponding

video footage).

grasping forces on the objects is a challenge of paramount importance.
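The proportional-integral-derivative control mentioned above can be illustrated with a minimal discrete PID controller driving a one-dimensional plant toward a target position. The gains, time step, and plant model are assumptions made for the sketch, not the published controller:

```python
# A minimal discrete PID controller, here driving a simple integrator
# plant (velocity = control signal) toward a target position.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured):
        """Return the control output for the current error."""
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
position = 0.0
for _ in range(2000):              # 20 s of simulated time
    u = pid.update(1.0, position)  # track target position 1.0
    position += u * pid.dt         # integrator plant: velocity = u
```

In the hand-motion context, a controller of this kind runs per joint or per degree of freedom, with the target pose supplied by the reconstructed marker data; the latency concern raised above translates into the choice of `dt` and gains.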

The expectation is that the technological solutions realized based on the developed interactive

simulation techniques can positively impact the product design practice. For instance, during the

design process, the designers are often required to anticipate, right from the early stages, how the

user and the product would possibly interact. Grasping is one of the most frequent interaction

actions in most products’ use environments. Realistic and accurate simulation of users’ grasping

actions on virtual products would have a huge potential to improve the designers’ experience with

design concepts—and this might translate directly to realization of high quality products.

Simulation of Use Processes

The main research issue in this investigation was that the available behavioral simulation approaches

presently used to support engineering design often fail to offer multifaceted simulations, and often

exclude simulation of human interactions and of use processes of products, see, e.g., [22], [49]. It

was argued that there was an apparent need to either enhance or replace the existing approaches in

order to address the existing limitations, including the inability to support multi-physics simulations,

inability to handle human perceptions, cognition, and actions, and inability to vary input data during

simulation. Therefore, van der Vegte [62] focused on this problem and proposed a novel approach

for simulation of human manipulative interactions with products with a view to supporting designers

and engineers in testing alternative product concepts by involving (virtual) users.

New concepts and theories needed to achieve the targeted system functionality (i.e., concepts

and theories for resource-integrated modeling and simulation, nucleus-based representation of hu-

mans and artifacts, state-machine-based representations of user actions, and embedded artifact con-

trol) were created. The elements of the theoretical solutions were deduced logically from the


hypotheses of the research (which were assumed to be true). Based on these concepts and the-

ories, a proof of concept simulation software system that allows a designer or an engineer to

model and/or specify humans, products, and use processes simultaneously, as well as for run-

ning of controlled simulations was developed, see, e.g., [63]. The particle-based physics simulation models were built by using the basic solid modeling instructions offered by Adams (see http://www.mscsoftware.com/product/adams), which is one of the widely used multi-body dynamics simulation solutions; the logical constructs for control were instantiated as state-flow charts

using the graphical interface offered by Simulink Stateflow, while the interface between control

and simulation was implemented in Simulink. The pilot implementations were subsequently used

to demonstrate the proposed simulation functionality and to validate the created theories and con-

cepts. The implemented functionality offers a 3-D solution for simulation of user actions and use

processes, see Figure 1.9. The artifact, the human's arm, the interactions, the procedural routines, and all other actions that the product is intended to perform are modeled, and this allows users to acquire

excellent ’hands-on’ experience regarding how the eventual product would behave or function in

practice. The application case studies demonstrated that the proposed approach allows virtual exploration of the usability of a product (i.e., it may eliminate the need for human subjects in usability investigations), offers a possibility to test variations of products in their use environments, and allows testing with variations of human characteristics (in both cases without making changes to the models or to the logical specifications). However, further research is needed to enhance realism in simulation

of human motor control and robustness of particle-based physics simulation.
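The state-machine-based representation of user actions can be illustrated with a small finite-state machine for a hypothetical reach, grasp, move, release use process. The Stateflow charts in the pilot implementation are of course richer; the states and events below are invented:

```python
# A use process encoded as a finite-state machine: states are phases of
# the interaction, events are user actions, and illegal events are rejected.
class UseProcess:
    TRANSITIONS = {
        ("idle", "reach"): "reaching",
        ("reaching", "contact"): "grasping",
        ("grasping", "lift"): "moving",
        ("moving", "release"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def trigger(self, event):
        """Advance the use process; reject events not allowed in this state."""
        nxt = self.TRANSITIONS.get((self.state, event))
        if nxt is None:
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        self.state = nxt
        return self.state

proc = UseProcess()
for event in ("reach", "contact", "lift", "release"):
    proc.trigger(event)
```

Varying the product or the human characteristics then amounts to swapping the physics models underneath while the logical specification (the transition table) stays fixed, which is the point made above about testing variations without changing logical specifications.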

Interactive Augmented Prototyping

Augmented reality (AR) technologies provide means of enhancing a person’s view and percep-

tion of the real world by adding layers of digital information such as videos, photos, GPS data, or

sounds on top of physical objects, see, e.g., [9], [30]. AR technologies can therefore deliver additional information that allows designers and engineers to experience and perceive their

product concepts better. Thus, they can be applied to support activities such as design concepts

evaluation and aesthetics review. One of the shortcomings of AR solutions is lack of interactivity,

in the sense that users cannot manipulate or modify the resulting augmented model in real time.

In our research, we attempted to address the issue of interactivity in AR design and prototyping

processes. The concept of interactive augmented prototyping (IAP) was developed [65]. IAP essen-

tially combines AR imaging with rapid prototyping (RP) technologies. One of the key hypotheses

put forward was that IAP can enhance the design process. Some studies have been conducted to

investigate how IAP compares with abstract prototyping (which is a prototyping concept that was

also developed in our research group—refer to Section 1.5.2; unlike IAP, abstract prototyping does

not entail using physical objects [48]). The results of the investigations suggest that IAP facilitates

more extensive brainstorming (of novel ideas and potential applications) than abstract prototyping,

but nonetheless it does not significantly help reveal more concrete details. Overall, both prototyp-

ing approaches provide valid feedback, with abstract prototyping helping to raise more issues on

acceptability of product concepts while IAP helps facilitate discussion on specific benefits and fea-

tures of product concepts. Several design activities can be supported by using the developed IAP

technique. These include activities such as presentation of product concepts, contexts analysis, and

sketching—activities which, apart from physical and augmented reality experiences, also require

strong interactivity.

Free-form Feature Identification, Recognition, and Manipulation for Shape Design

Langerak [32] dealt with the challenge of modifying shape features parametrically in shape design

processes. Specifically, the principal research issue was how a free-form feature recognition method

can be developed on the basis of a proper, complete, and sound definition of the free-form feature


concept, and how free-form features could be used as the basis for parameterizing free-form shapes

locally. The work resulted in the development of a theory and a methodology that attempts to ad-

dress the problem of free-form feature recognition, particularly when geometric data is treated as

feature data, thereby allowing users to modify shapes parametrically. Feature-based modeling op-

erations were identified, and computational models for these operations were subsequently created.

These included a computational model for feature recognition, for instantiation of an embedded

feature, and for transposition from feature space to modeling space. Based on the created theoret-

ical and methodological solutions, algorithms for defining shapes (i.e., computing the shape of a

compound feature and increasing the control point resolution), instantiation (i.e., concrete repre-

sentation of an abstract concept of a feature, and construction of a distance-based correspondence

function), and transposition of free-form features (i.e., transposing from feature space to modeling

space) as well as for template matching, feature identification, and curve-based feature recognition

were developed and used as the basis for implementation of proof of concept pilot implementations.

A generic toolbox for supporting free-form feature modeling (that consisted of the pilot implemen-

tations of the above-mentioned algorithms) was developed.
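The template-matching and distance-based correspondence ideas can be illustrated, in much simplified form, on a polyline: slide a small template of sample points along the curve and keep the offset with the smallest summed point distance. The published algorithms operate on free-form geometry, not polylines, so this sketch mimics the idea only in spirit:

```python
import math

def match_offset(template, curve):
    """Return the index at which the template best matches the curve.

    The cost of a candidate placement is the sum of point-to-point
    distances, a crude stand-in for a distance-based correspondence
    function between feature space and modeling space.
    """
    best_i, best_cost = 0, float("inf")
    for i in range(len(curve) - len(template) + 1):
        cost = sum(
            math.dist(template[k], curve[i + k]) for k in range(len(template))
        )
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

curve = [(0, 0), (1, 0), (2, 1), (3, 3), (4, 3), (5, 3)]
template = [(2, 1), (3, 3)]            # a previously extracted "feature"
index = match_offset(template, curve)  # best placement starts at index 2
```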

In a follow-up work, Song et al. [60], [61] proposed a general framework of a feature-based

free-form shape indexing system to help designers find desired shapes through partial shape match-

ing. According to this framework, local shape descriptors serve as footprints of free-form features

in a given 3-D model. Feature handles are extracted on different levels of shape descriptor scale

space and further clustered as principal and detail free-form features. The locations and topological

structures of those features are then recorded as the global index of the shape. This work also led to

creation of a novel approach of content-based text description, in which standard text descriptions of

the feature’s geometric and intrinsic properties are generated and associated through cross-indexing

of semantic attributes such as culture and age. Furthermore, Wiegers and Vergeest [68] have also

recently conducted several studies to investigate interaction in shape design operations. For in-

stance, some experiments have been conducted to identify which types of shape terms the designers

frequently used to express shape, and how effective different types of shape terms are. It became

apparent in this study that the effectiveness of shape communication depends not only on the type

of the shape terms that are used, but also on the characteristics of the shape of the object and on the

experience of the subject.

Tools to enable free-form feature recognition and to support free-form feature modeling can

be developed based on the achieved results and can be integrated into the traditional computer-

based design support systems, variously known as computer-aided design (CAD), computer-aided

design/computer-aided manufacturing (CAD/CAM) or computer-aided engineering (CAE) systems.

The generic toolbox developed for supporting free-form features modeling has been used to demon-

strate that the developed solutions can be a useful aid for designers and engineers. For instance, if

the CAD data or model used to produce a product is not available for some reason (e.g., misplaced

or lost) but the actual product is available, then the available product can be scanned to obtain digital

data of the product. Then, by using the developed free-form feature-based technique, the features

of the original object can be reconstructed. The proposed approach for cross-indexing of semantic

attributes can be used as a basis for developing a local or a web-based 3-D CAD model database

indexing and shape similarity analysis mechanism.
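The indexing idea can be caricatured as follows: each model is reduced to a small descriptor vector (here an invented four-bin histogram standing in for the local shape descriptors), and retrieval returns the model whose descriptor is nearest to the query's. Everything in this sketch is hypothetical:

```python
# A miniature shape index: models are stored by descriptor vector and
# queried by nearest descriptor under the L1 distance.
def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

class ShapeIndex:
    def __init__(self):
        self._index = {}

    def add(self, model_id, descriptor):
        """Register a model under its (precomputed) descriptor vector."""
        self._index[model_id] = descriptor

    def query(self, descriptor):
        """Return the model id whose descriptor is closest to the query."""
        return min(self._index, key=lambda m: l1(self._index[m], descriptor))

index = ShapeIndex()
index.add("bottle", (0.7, 0.2, 0.1, 0.0))
index.add("bracket", (0.1, 0.1, 0.3, 0.5))
index.query((0.6, 0.3, 0.1, 0.0))
```

A partial-matching system of the kind described above would additionally record where each feature footprint lies on the model, so that a query shape can match a region rather than the whole object.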

1.5.2 Handling of Dynamic Complexity

The research attempts to address the challenge of handling of dynamic complexities have centered

around four themes: (a) handling of large amounts of data, (b) interactive product visualization

across heterogeneous terminal devices, (c) creating an industrial design engineering Wiki, with

mechanisms for enabling the users to make additions to a common design solutions repository, and

(d) representing, prototyping, and pre-implementation testing of new solution ideas early on. The

following subsections describe our attempts to address these challenges.


FIGURE 1.10: Model data reduction procedure—for reducing the amounts of data in the product

model through model simplification and pre-processing, refer also to [46].

Handling of Large Amounts of Data

Large amounts of data in product models can cause delays, i.e., slow response to users' actions and

slow interaction speed in devices with limited computational capabilities. These problems, along

with the problem of poor display resolution, have historically prevented some prominent display

devices such as 3-D holographic displays from being applied universally as visual display units of

design support systems. In general terms, visualization of large amounts of data requires sophis-

ticated and efficient rendering algorithms that take into consideration the limitations of terminal

devices, user preferences, and task requirements. As an attempt to deal with the challenges arising

from large amounts of data in product models, we have proposed rigorous strategies for model sim-

plification and data reduction. Data reduction is achieved through a model simplification process

in which the task is specified, the visualization demands for the task at hand are identified, and

the model is subsequently tailored to meet the visualization demands as well as the computational

device constraints. In order to tailor product data or a virtual model for a particular task at hand,

an application-dependent model simplification and pre-processing procedure, as depicted in Figure

1.10, is proposed. It is designed to ensure efficient execution of the task while taking into account

the amount of data and device limitations. It ensures that only the data needed in accomplishing the

task at hand is rendered (i.e., computed and displayed). This pre-processing front-end procedure

helps reduce the amount of data in the product model. Basically, the pre-processing stage consists

of a set of front-end simplification and pre-processing operations in which, based on the knowledge

of the task at hand, an appropriate and less-complicated representation of the product data or scene

in the form of a 3-D holographic object or holographic scene is generated.
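One concrete front-end data-reduction method usable in such a pre-processing stage is clustering: down-sample a point-cloud representation with k-means and keep only the cluster centroids. The following is a plain-Python sketch under that assumption; a production pipeline would likely use a mesh-aware decimation method instead:

```python
import random

def kmeans_reduce(points, k, iterations=10, seed=0):
    """Reduce a point cloud to k representative centroids via k-means."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster.
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

cloud = [(x * 0.1, 0.0, 0.0) for x in range(100)]
reduced = kmeans_reduce(cloud, k=5)  # 100 points reduced to 5 representatives
```

The termination criterion discussed in this section (stop before the meaning of the visualized data is damaged) would here govern the choice of k: smaller k means less data but coarser geometry.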

The proposed model pre-processing algorithmic procedure (see also Figure 1.10) involves: (a)

specifying the design task at hand (which could be, e.g., design concept evaluation, aesthetics re-

view, etc.); (b) analyzing alternative ways of representing product data or model for the design task


at hand and selecting an appropriate way (could be, e.g., 2-D, 2.5-D, or 3-D representation of prod-

uct data depending on the nature of the design task); (c) choosing how to view the product data or

virtual model—this could be, for instance, representing and viewing a virtual object as a wireframe

model, as a point cloud model, or as a product model with a given number of meshes, and with

all details in the model—including color, material, or surface texture; (d) choosing and applying a

suitable simplification method and simplifying the model further (if necessary); and (e) repeating

step (d) until the desired level of model data reduction is achieved. The execution of step

(d) may typically involve selecting and applying a particular data reduction method or combinations

of methods, which could be, for instance, visual abstraction (i.e., not displaying some of the

virtual model details), data clustering (e.g., applying the k-means algorithm), de-featuring (i.e., sup-

pressing features adjudged to be irrelevant), or any simplification method deemed appropriate—to

further reduce data in the product model. A complete review and analysis of possible model data

reduction and simplification methods is available in [46]. Each model or product data simplification

method is unique and brings into the simplification and pre-processing process different advantages

and disadvantages. Theoretically, the pre-processing procedure should terminate when some dam-

age to the meaning of the visualized data starts to emerge, e.g., when there is significant loss of

image details, when there is noticeable distortion of image, or when the context of the content be-

gins to be different. In using the proposed high-level model data pre-processing and simplification

algorithmic procedure to process content for various dissimilar terminal devices, the user first needs

to specify the task or set of tasks in which the design team is involved. Then, having specified

the task, the user must specify the needs and requirements. These needs and requirements provide

the basis for deciding how to simplify the model. The applicability and significance of the pro-

posed model data pre-processing algorithmic procedure have been demonstrated by using practical

application examples, see, e.g., [41].
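The looped simplification in steps (d) and (e) can be sketched as follows. This is a minimal illustration, assuming a toy decimation operator and a size-based stopping rule; the names `decimate` and `preprocess` and the numeric thresholds are our assumptions, not part of the proposed procedure.

```python
# Minimal sketch of the iterative simplification loop (steps (d) and (e));
# function names, the halving ratio, and the thresholds are assumptions.

def decimate(vertices, keep_ratio=0.5):
    """Toy stand-in for a real simplification method (mesh decimation,
    clustering, de-featuring): keep every k-th vertex."""
    step = max(1, round(1 / keep_ratio))
    return vertices[::step]

def preprocess(vertices, target_size, min_size):
    """Repeat the simplification until the desired data reduction is
    reached, stopping before meaning-damaging loss (min_size floor)."""
    model = list(vertices)
    while len(model) > target_size and len(model) // 2 >= min_size:
        model = decimate(model)
    return model

simplified = preprocess(range(10_000), target_size=1_500, min_size=500)
```

In a real pipeline, `decimate` would be replaced by whichever reduction method step (d) selects, and the stopping test by a perceptual check on loss of image detail or context.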

Interactive Product Visualization across Heterogeneous Terminal Devices

It is widely acknowledged that advances in various areas of computing, communications, and mul-

timedia research have long been impacting many technological solutions [36], including terminal

devices. As a result, there are several types of terminal devices today in the market, including those

with miniature displays and limited computational capabilities such as handheld terminal devices.

We argued in this work that emerging technological solutions such as handheld devices can be

adopted and used in designing products. Therefore, one of our research tasks was to explore the pos-

sibility of using heterogeneous terminal devices (such as desktop computers, handheld devices like

smartphones and tablet PCs, and high-end terminal devices with 3-D displays) together in perform-

ing activities (such as product concepts evaluation and aesthetics review) remotely [45]. Adaptation

of contents for sharing across the devices used by the members of the design team is the principal

challenge that we took up. We first conducted a comprehensive literature review, which revealed the

potentials and shortcomings of the existing content adaptation strategies used elsewhere, and also

raised several questions for further research. The challenges that need to be addressed include (a)

lack of effective mechanisms for adapting contents (such as 3-D product models used in industrial

environments) in context, (b) guaranteeing the coherence of both the meaning and the context of the

content across heterogeneous terminal devices, and (c) meeting both resource constraints and task

requirements. The goal was therefore to address these challenges and to come up with a compre-

hensive adaptation mechanism. The principal requirement was that the adaptation process should

always take into account the task requirements and the specific needs and preferences of the users

(who in the context of this work were the designers and engineers).

Other desirable characteristic features included capability to cope with contents used in product

development environments (such as 3-D models and other forms of product data), and the ability to

address the constraints posed by the heterogeneity of terminal devices and networks during content

adaptation. An initial investigation—see [39], [45]—has led to the identification of the key features


FIGURE 1.11: Mechanisms and algorithms (a1, a2, ..., an) for handling device-instantaneous

needs combinations to ensure that information is conveyed in the desired context [39].

of an ideal content adaptation mechanism. These features have been used as the basis for assessing

the extent to which the existing content adaptation techniques meet the adaptation requirements in

product development. A concept and a generic framework for content adaptation in product devel-

opment environments have subsequently been proposed (Figure 1.11). The ongoing work includes

implementation of the adaptation mechanism, which is expected to consist of multiple adaptation

agents—to cope with the dynamic needs of various terminal devices and networks, as well as with

ever-changing user preferences and the adaptation needs of the task at hand. This is done with

a view to (a) ensuring consistency of the meaning and context of the content across heterogeneous

terminal devices, (b) taking into consideration the preferences of various users, and (c) ensuring

that the content adaptation mechanism would sufficiently enable the designer or engineer to perform

the intended task.
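As a rough illustration of such an adaptation step, the sketch below selects a representation and a reduction ratio per device. The device profiles, capability numbers, and the `adapt` function are hypothetical assumptions for illustration, not the adaptation agents of the proposed framework.

```python
# Hypothetical device profiles; real profiles would also cover display
# size, network bandwidth, and user preferences.
DEVICE_PROFILES = {
    "smartphone":  {"max_triangles": 50_000,    "supports_3d": True},
    "tablet":      {"max_triangles": 200_000,   "supports_3d": True},
    "workstation": {"max_triangles": 5_000_000, "supports_3d": True},
}

def adapt(triangle_count, device, task_needs_3d=True):
    """Pick a representation that fits both the device constraints and
    the task requirements, preserving the meaning of the content."""
    profile = DEVICE_PROFILES[device]
    if task_needs_3d and profile["supports_3d"]:
        # Reduce the model only as far as the device demands.
        ratio = min(1.0, profile["max_triangles"] / triangle_count)
        return {"repr": "3d-mesh", "keep_ratio": ratio}
    return {"repr": "2d-views", "keep_ratio": 1.0}
```

A multi-agent mechanism of the kind described above would run one such step per device–task combination, so the same source model yields a consistent but device-appropriate content instance everywhere.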

The developed data reduction techniques can be used as the basis for developing data reduc-

tion mechanisms for various content adaptation processes. This includes enabling data reduction in

product models while making them suitable, for instance, for exploring various aspects of the de-

sign of the product such as appearance, color, look and feel, surface form, and basic size in various

heterogeneous terminal devices. Tailored product data or virtual product models can then be gener-

ated and shared across heterogeneous terminal devices and used, for instance, in design evaluation,

e.g., in the identification of where further development or improvement of an in-process product

is necessary, in assessing the ergonomic factors, in assembly verification, in design concepts se-

lection, in market research, in executive reviews and approvals, in final design reviews, and in the

evaluation of how the intended product will be used in practice. Each of the above-mentioned tasks

has unique needs which should be met. For instance, dimensional information or the sense of the

basic size of a product is vital in an ergonomics review, while surface information, such as color, look,

or form, is central to reviewing the appearance of products. Furthermore, some tasks such as

final design review would probably require full-featured 3-D virtual models. All these unique task-

related requirements must be considered when generating appropriate product models for various

heterogeneous terminal devices.
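One way to encode these task-specific needs is a simple task-to-attributes mapping, as sketched below. The task names and attribute sets are invented for illustration and are not taken from the framework.

```python
# Hypothetical mapping from design task to the model attributes that must
# survive simplification, mirroring the task-specific needs discussed above.
TASK_REQUIREMENTS = {
    "ergonomics_review":   {"dimensions"},
    "appearance_review":   {"color", "surface_texture", "form"},
    "final_design_review": {"dimensions", "color", "surface_texture",
                            "form", "features"},
}

def attributes_to_keep(tasks):
    """Union of the attributes required by all tasks the team performs;
    everything else is a candidate for reduction."""
    needed = set()
    for task in tasks:
        needed |= TASK_REQUIREMENTS[task]
    return needed
```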


FIGURE 1.12: WikID structure showing how contents are organized within the semantic reposi-

tory web tool.

Industrial Design Engineering WikID

Designers and engineers in many engineering fields, including in the industrial design engineering

field, typically use a wide range of background knowledge when designing and making key deci-

sions in the design interval, see, e.g., [3]. Just like any other professionals, designers and engineers

cannot master or be fully knowledgeable about everything in the related fields. Often the common

shortcut solution is to rely on rules of thumb. The goal of this part of the research was to provide

reliable knowledge sources for the designers and engineers to rely on when designing, and which

can be updated continuously. To meet this goal, a web-based design tool called WikID (i.e., a

portmanteau of wiki and industrial design) has been developed.

WikID is essentially an open content repository and an online semantic web-based design enabler

created to facilitate collaborative authoring, sharing, storage, and finding of relevant information

online [66]. One of the primary requirements was the freedom of the authors (i.e., the designers and

engineers) to input any information they consider useful and relevant. The principal challenge when

developing this tool was therefore to reconcile this freedom of individual authors to input content

with the need to ensure that the content is relevant to the industrial design engineering community at large.

Due to the need for the users of this web-based tool to freely author and edit articles, it was neces-

sary to come up with some guidelines and criteria to help the authors to systematically evaluate and

decide which information is relevant to the industrial design engineering community. To achieve

this, an investigation into the best way to determine the “design relevance” of an article was con-

ducted. This included field interviews with design experts. As a result of this investigation, some

article-writing guidelines were formulated to simplify the writing and editing process on the web-

tool and to support the authors in making decisions concerning the “design relevance” of articles.

Specifically, the scope of the industrial design engineering field in terms of product domains, appli-

cable subdisciplines, and design aspects was defined. The results of this investigation were used

as the basis for creating the semantic industrial design engineering WikID. Overall, in the WikID

environment, the design relevance is not a property of information, but is rather regarded as the

condition that must be met. Therefore, specific article-writing guidelines were formulated for each

category of articles. The results have been implemented in WikID as forms. The categories have

also been incorporated into the implemented “forms” of the web-tool. Figure 1.12 shows

the contents and the structure of WikID. More details on the implementation of this design support

web-tool are available in [67].
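The category-specific forms and the relevance condition could, for instance, be checked as sketched below. The categories and required fields here are invented for illustration and do not reflect the actual WikID forms.

```python
# Illustrative category-specific article forms; the actual WikID forms
# and their required fields differ.
ARTICLE_FORMS = {
    "material": {"name", "properties", "typical_applications"},
    "method":   {"name", "purpose", "procedure", "design_aspect"},
}

def meets_relevance_condition(category, fields):
    """Design relevance treated as a condition to be met: the article
    must supply every field its category's form demands."""
    return ARTICLE_FORMS[category] <= set(fields)
```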

Representation, Prototyping, and Pre-implementation Testing of New Design Solution Ideas

Early On

Prototyping and testing in the early stages of the product development process prior to the imple-

mentation of the actual prototype or product is one of the difficult challenges that product developers

often face [47]. One of the works we engaged in was developing contextualized techniques for pro-

totyping and pre-implementation testing of design support tools [48]. The initial work addressed

the observation that the prevailing software process models and testing techniques cannot precisely

satisfy the needs in the early stages of the processes of development of software tools used in engi-

neering design, variously known as CAD, CAD/CAM, or CAE systems. Therefore, a novel quality

assurance strategy called abstract prototyping was created, and a suite of software tools developed

to support the developers in managing and performing abstract prototyping activities. Abstract pro-

totyping is in a sense a pre-implementation prototyping and review procedure, dedicated to the early

stages of the processes of development of design support systems. It consists of some elements of

well-established methodologies such as spiral software development, participatory design, heuristic

evaluation, extreme programming, and joint application development. It divides the activities in

the early stages of CAD software tools development process into four phases, namely: theories,

methods, algorithms, and pilot implementation development. In each phase, a version of the im-

plementation of a tool is created in a given context, prototyped, and reviewed or tested. The idea

is to help ensure that the right foundational theories, methods, and algorithms are deployed, and also

to systematically guide the participation of various stakeholders in the processes of development

of design support tools, see, e.g., [48]. The abstract prototyping methodology is driven by the two

schemes depicted in Figure 1.13, which guide the process of transforming requirements into the

expected implementation of design support software in a given context.

Abstract prototyping can be incorporated in quality assurance strategies in industrial organiza-

tions that develop design support software tools. Software development methodologies presently

applied in many companies typically guide the developers to first perform needs analysis, and then

to specify requirements, design algorithms, and user interfaces, and to write and test code. The

design phase is typically not tightly specified, which leaves it open to the interpretation of the in-

dividual developers or organizations. In applying the proposed abstract prototyping methodology,

the emphasis should essentially be on enhancing quality of the in-process deliverables in the early

development stages of a tool or functionality. It was confirmed through a real-world application

case study that the proposed methodology guides the developers to systematically conceptualize and

formalize concepts, to consider requirements, and to review or test the in-process deliverables against re-

quirements multiple times (prior to the implementation of the actual tool), namely, at the theories level

of abstraction, methods level, algorithms level, and at the pilot prototypes level of abstraction. Fol-

lowing a formal approach in the design interval helped to avoid the potential risks of neglecting the

requirements for the deliverables in these abstraction levels. In short, abstract prototyping allows

reviews or tests to be conducted formally early on in these abstraction levels rather than waiting

until after implementation (i.e., after coding or after producing a tangible prototype or product).

Overall, the proposed method provides a structured way of testing the appropriateness of theories

and engineering principles, methods, or algorithms based on formal requirements.
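The four abstraction levels with their reviews can be pictured as a gated pipeline. The sketch below is our simplification, assuming a boolean review outcome per phase; the real reviews involve multiple stakeholders and formal requirements.

```python
# The four abstraction levels of abstract prototyping, modeled as a
# gated pipeline (our simplification; review outcomes are booleans).
PHASES = ["theories", "methods", "algorithms", "pilot_implementation"]

def run_abstract_prototyping(reviews):
    """Advance to the next level only when the current deliverable
    passes its review; otherwise stop and report where rework is due."""
    passed = []
    for phase in PHASES:
        if not reviews[phase]():
            return passed, phase   # rework needed at this level
        passed.append(phase)
    return passed, None
```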


FIGURE 1.13: A scheme for abstract prototyping of design support tool theories, methods, algo-

rithms, and pilot implementation levels of abstraction [48].

The abstract prototyping concept has further been extended to address the challenges faced in

representing and prototyping engineering products and their associated use processes in the early

stages of their development processes. One of the main limitations of the existing techniques is that

they are primarily used in prototyping of engineering products in the embodiment and detail design

phases of the product development process, without taking into consideration the use processes

associated with products. The challenge was how to prototype both the product and the processes

related to the operation of the products, including the interactivity of the product developer or the

user with the product (i.e., the thoughts and the manipulative actions of human beings who interact

with the product). The abstract prototyping concept has therefore been extended to address the

above-described challenges [27], [40]. The extended abstract prototyping concept is designed to

help designers and engineers to conceptualize and communicate ideas about products together with

all accompanying use processes. Figure 1.14 shows the information structure and flows in the

framework of the extended abstract prototyping methodology. Three major attributes that determine

the manifestation of abstract prototypes are: (a) artifact-service combination, i.e., description of the

product itself and how it could be used, (b) human actors involved in the prototyping process, and

(c) the environment in which the product is expected to be used or to provide service.
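The three attributes could be captured in a simple data structure, as sketched below; the field and method names are our assumptions for illustration, not the information structure of [40].

```python
from dataclasses import dataclass

# Illustrative encoding of the three attributes that determine the
# manifestation of an abstract prototype (names are assumptions).
@dataclass
class AbstractPrototype:
    artifact_service: str   # (a) the product and how it could be used
    actors: list            # (b) human actors involved in the use process
    environment: str        # (c) where the product provides its service

    def storyboard_title(self):
        """Compose a one-line summary, e.g., a storyboard header."""
        actors = ", ".join(self.actors)
        return f"{self.artifact_service} used by {actors} in {self.environment}"
```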

The form of an abstract prototype also depends on the demands and perspectives of the stakeholders

(i.e., the users of the abstract prototype), and on the media that would be used. There are many different


FIGURE 1.14: An extended abstract prototyping scheme for engineering consumer products

showing the information structure and flows. It is designed to facilitate presentation of product

concepts and use scenarios [40].

tools and multimedia presentation technologies that can be used to create abstract prototypes. It

should be noted, however, that not every medium is capable of representing and communicating the

product concepts effectively or in the same way. Decisions on how to represent information or

on the multimedia technology or combination of technologies to use depend on the conditions on

the ground (e.g., type of product and the target users of the abstract prototype) and are typically left to the

prototype developer. Basic tools such as paper, markers, and scanners can be used to sketch or scan

objects to create images. CAD systems also could be used to model 3-D virtual objects that could

later on be augmented with real-world scenes. Furthermore, low-cost tools and technologies such

as screen-capturing applications (e.g., CamStudio) and Webcams could be used to record footage,

which could then be edited and combined by using editing applications such as Moviemaker and

Adobe to create abstract prototypes. Furthermore, applications such as BuildAR could be used to

create 3-D prototypes (i.e., 3-D augmented reality scenes that link video content to the real-world

scenes). Figure 1.15 shows an example of an abstract prototype of a product and its use process.

Several application case studies have been conducted to investigate the applicability of this ex-

tended concept. It has been shown that it enables the ideation and representation of product or

system concepts as real-life processes, and that it can be an effective and useful enabler for com-

municating ideas [25], [40]. The major benefit of the extended abstract prototyping approach over

competing approaches such as VR solutions is that it provides a low-cost yet very effective

solution for producing prototypes that take into consideration the use contexts and scenarios of

the product in the very early design stages. The developed early-stage prototyping method allows

designers and engineers to effectively express their impressions about the products or their compo-

nents early on, and to check and determine in advance if the design concepts meet functional and

other quality requirements.


FIGURE 1.15: An example of an abstract prototype of a product and its use process: A screen

snapshot taken from an animation of the storyboard snapshots used to demonstrate a concept of

an electric mixer and how the imagined end-product would be used in practice (see also the corre-

sponding animated footage).

1.6 Ongoing Research

Our most recent research efforts have been directed toward exploration and development of prin-

ciples for designing complex products and systems characterized by qualities and abilities such as

smartness, self-adaptation, and the ability to operate in a distributed manner. The preliminary investigations have

focused specifically on the exploration of complex systems and products with a view to gathering

knowledge, e.g., on how they behave and interact, and based on this knowledge, to create suitable

frameworks and enablers for developing complex products and systems. The significance of this

part of the research is that, nowadays, products and service systems increasingly consist

of both physical (or mechanical) manifestations and cyber (i.e., computational mechanisms with

software, networking, and electronic elements) manifestations. The ongoing works involve inves-

tigation of the appropriateness of the existing principles and of the possibility of applying these

principles to real-world processes of development of complex systems, as well as developing new

principles from scratch. The idea is: (a) to explore, profile, and characterize novel principles;

(b) to experiment with combinations of principles, obtain know-how, and subsequently create

suitable principles; and (c) to create enablers for developing complex products and systems (such

as cyber-physical systems or products). One of the ongoing research works has been directed to

developing design principles for mass customization (MC) of cyber-physical consumer durables

(CPCDs) [52]. Specifically, this work aims to: (a) discover the affordances of the existing MC

approaches, (b) identify the principles that could be transferred, extended, or adapted for use in the

context of CPCDs, and (c) propose relevant and effective novel principles and approaches for MC

of CPCDs.

Some of the initial research efforts have also been directed at studying and analyzing the trends of

ubiquitous technology advances and attempting to form a vision about possible manifestation of fu-

ture ubiquitous design support environments [26]. The focus is on three interrelated principal issues:


FIGURE 1.16: Applications and impacts of the developed concepts, methods, and tools.

(a) identifying possible application areas and scenarios for future design support environments, (b)

investigating and integrating multiple technologies into an ad hoc interconnected heterogeneous in-

frastructure, with a view to increasing efficiency and improving performances, and (c) exploring the

possibility of utilizing new ubiquitous computing technologies in supporting product innovation,

including how efficiently they could be used and how the designers and engineers could benefit

from their affordances. Novel functionalities of the ubiquitous design support environment

have been proposed in [26].

1.7 Summary and Conclusions

The design support enablers used in performing product development activities must have the desir-

able functional features and capabilities. Any mismatches can have immense adverse consequences

on the performance and productivity of designers and engineers, as well as on the quality of products.

There is therefore the need to continuously develop new and more effective design support enablers

to match the pace of rapidly changing needs on the ground. Our research activities over the past five

years have contributed to the efforts to develop new design support enablers and have led to the real-

ization of several novel modeling, prototyping, and interaction solutions (i.e., in the form of frame-

works, reference schemes, and novel computational methods) to enable designers and engineers to

accomplish various design tasks.

The principal activities and contributions of the works presented in this chapter can be summa-

rized as follows. We have: (a) analyzed the available 3-D visualization and interaction techniques

with a view to identifying the capabilities and limitations of these techniques in enabling designers

and engineers to visualize product models and data interactively; and, based on the findings, we

developed an architecture of an interactive 3-D visualization environment and built a pilot proto-

type; (b) evaluated the developed interactive 3-D visualization environment with a view to pinning

down the actual desirable features and to discover the problems that designers and engineers might

encounter in using an interactive 3-D visualization environment in practice; and (c) we attempted

to address some of the identified problems, developed several theoretical and methodological solu-


tions, and tested the validity and applicability of the proposed solutions.

The effectiveness, applicability, and validity of the proposed solutions, as well as their potential

impacts have been explored through various application case studies. Two broad areas of application

of the developed solutions can be identified, namely industrial and educational applications (Figure

1.16). These solutions can be used as the basis for developing guidelines and advanced enablers

(i.e., methods and tools) that can potentially be used to support various design tasks. For instance,

the proposed spatial product visualization framework can be used as the basis for developing a

virtual environment for visualization or prototyping of product concepts in virtual workspaces. The

main anticipated industrial application area is engineering design, where there is often the real need

to provide enhanced and advanced solutions to address the continuously changing needs on the ground.

The developed theoretical and methodological solutions could be used as the basis for developing

design support enablers that can be applied, e.g., (a) to represent design concepts (i.e., particularly in

product modeling, modeling of users' actions, and modeling of use environments), (b) in prototyping

(i.e., building and testing experimental prototypes) in the early phases of the product design process,

and (c) in interactive visualization of product data. The activities that can be supported include

conceptualization (e.g., studying feasibility of product concepts, development and evaluation of

design concepts, and generating alternative architectures in the early stages of the design process),

product modeling (e.g., feature-based modeling—by using tools realized based on the developed

free-form feature recognition and free-form feature manipulation techniques), modeling of users'

actions, design concept modeling and representation, and modeling of use environments. Some

materials and knowledge generated from the research presented in this chapter have already been

incorporated into our industrial design engineering education curriculum.

It is important to note, however, that the pilot implementations presented in this chapter were

only intended for proof of concept investigations, demonstrations, and evaluations—to uncover the

difficulties that target users might face in using the proposed solutions. Further extensive research is

needed to come up with fully developed and proven solutions. Testing of the actual implementations

in real industrial settings is another subject of future research.

References

[1] M. M. Andreasen and L. Hein. Integrated Product Development. IFS Publications Ltd.

/Springer-Verlag, Bedford, UK, 1987.

[2] I. G. Angus and H. A. Sowizral. “Embedding the 2-D Interaction Metaphor in a Real 3-D

Virtual Environment”. Proceedings of the Society of Photographic Instrumentation Engineers

(SPIE) conference, 2409:282–293, 1995.

[3] P. Ashton. “Transferring and Transforming Design Knowledge”. Proceedings of the Expe-

riential Knowledge Conference, 29 June 2007, the University of Hertfordshire, UK, 1:1–9,

2007.

[4] R. Balakrishnan, G. W. Fitzmaurice, and G. Kurtenbach. “User Interfaces for Volumetric

Displays”. Computer, 34:37–45, 2001.

[5] M. Bergh, E. Koller-Meier, F. Bosche, and L. Gool. “Haarlet-based Hand Gesture Recogni-

tion for 3-D Interaction”. Proceedings of the Workshop on Applications of Computer Vision

(WACV), Snowbird, UT, 7-8 December, 2009, 1:1–8, 2009.

[6] O. Bimber. “Combining Holograms with Interactive Computer Graphics”. Computer, 37:85–

91, 2004.


[7] M. Bloomenthal, R. Zeleznik, R. Fish, L. Holden, A. Forsberg, R. Riesenfeld, M. Cutts,

S. Drake, H. Fuchs, and E. Cohen. “Sketch-N-Make Automated Machining of CAD

Sketches”. Proceedings of ASME International Design Engineering Technical Conference

Atlanta, GA, September 13-16, Paper No. DETC98/CIE-5708, 1998.

[8] W. Buxton and G. W. Fitzmaurice. “HMDs Caves and Chameleon - A Human Centric Anal-

ysis of Interaction in Virtual Space”. Computer Graphics, 32:64–68, 1998.

[9] J. Carmigniani, B. Furht, M. Anisetti, P. Ceravolo, E. Damiani, and M. Ivkovic. “Aug-

mented Reality Technologies, Systems, and Applications”. Multimedia Tools and Applica-

tions, 51:341–377, 2011.

[10] C. P. Chu, T. H. Dani, and R. Gadh. “Evaluation of Virtual Reality Interface for Product Shape

Designs”. IIE Transactions, 30:629–643, 1998.

[11] C. Cruz-Neira, D. Sandin, T. DeFanti, R. Kenyon, and J. Hart. “The CAVE - Audio Visual

Experience Automatic Virtual Environment”. Communications of the ACM, 35:64–72, 1992.

[12] T. H. Dani and R. Gadh. “A Framework for Designing Component Shapes in a Virtual Re-

ality Environment”. Proceedings of 1997 ASME Design Engineering Technical Conferences,

September 14-17, 1997, Sacramento, CA, Paper No. DETC97/DFM-4372, 1997.

[13] R. P. Darken and R. Durost. “Mixed-Dimension Interaction in Virtual Environments”. Pro-

ceedings of 2005 ACM Symposium on Virtual Reality Software & Technology (VRST05),

November 7-9, 2005, Monterey, CA, 1:38–45, 2005.

[14] L. Eggli, B. D. Bruderlin, and G. Elber. “Sketching as a Solid Modelling Tool”. Proceedings

of the Third Symposium on Solid Modelling and Applications, May 17-19, 1995, Salt Lake

City, UT, 1:313–321, 1995.

[15] C. Esposito. “User Interface Issues for Virtual Reality Systems”. Proceedings of CHI’96

Conference on Human Factors in Computing Systems, April 13-18, 1996, Vancouver, Canada,

1:340–341, 1996.

[16] D. Fiorella, A. Sanna, and F. Lamberti. “Multi-touch User Interface Evaluation for 3-D Object

Manipulation on Mobile Devices”. Journal of Multi-Modal User Interfaces, 4:3–10, 2010.

[17] J. D. Foley, V. L. Wallace, and P. Chan. “The Human Factors of Computer Graphics Interac-

tion Techniques”. IEEE Computer Graphics and Applications, 4:13–48, 1984.

[18] J. E. Freeman and R. S. Gold. “Method and Apparatus for Displaying Volumetric 3-D Im-

ages”. Patent Treaty Application, 1999.

[19] T. Grossman and R. Balakrishnan. “Pointing at Trivariate Targets in 3-D Environments”.

Proceedings of the International Conference for Human-Computer Interaction (CHI2004),

April 24-29, 2004, Vienna, Austria, 6:447–454, 2004.

[20] T. Grossman, D. Wigdor, and R. Balakrishnan. “Multi-Finger Gestural Interaction with 3-

D Volumetric Displays”. Proceedings of 2004 ACM Symposium on User Interface Software

Technology, 24-27 October 2004, Santa Fe, NM, 1:61–70, 2004.

[21] P. Hariharan. Optical Holography - Principles, Techniques, and Applications. Cambridge

University Press, Cambridge UK, 1984.

[22] C. M. Hoffmann and J. E. Hopcroft. “Simulation of Physical Systems from Geometric Mod-

els”. Computer Science Technical Reports, Paper 552, 1:1–22, 1986.

[23] I. Horvath. “Shifting Paradigms of Computer Aided Design”. Delft University Press, Delft,

1998.


[24] I. Horvath. “Differences between Research in Design Context and Design Inclusive Re-

search”. Design Research, 7:61–83, 2008.

[25] I. Horvath and E. du Bois. “Using Modular Abstract Prototypes as Evolving Research Means

in Design Inclusive Research”. Proceedings of ASME 2012 International Design and En-

gineering Technical Conferences & Computers and Information in Engineering Conference

(IDETC/CIE 2012), August 12-15, 2012, Chicago, IL, Paper No. DETC2012-70050, 1:475–

486, 2012.

[26] I. Horvath, Z. Rusak, E. Z. Opiyo, and A. Kooijman. “Towards Ubiquitous Design Support”.

Proceedings of ASME 2009 International Design and Engineering Technical Conferences

& Computers and Information in Engineering Conference (IDETC/CIE 2009), August 30 -

September 2, 2009, San Diego, CA, DETC2009-87573, 1:1629–1638, 2009.

[27] I. Horvath, Z. Rusak, W. F. van der Vegte, A. Kooijman, E. Z. Opiyo, and D. P. Peck. “An In-

formation Technological Specification of Abstract Prototyping for Artifact and Service Com-

binations”. Proceedings of ASME 2011 International Design and Engineering Technical Con-

ferences & Computers and Information in Engineering Conference (IDETC/CIE 2011), Au-

gust 28-31, 2011, Washington, DC, Paper No. DETC2011-47079, 1:209–223, 2011.

[28] H. Ishii and B. Ullmer. “Tangible Bits: Towards Seamless Interfaces between People, Bits and

Atom”. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems,

March 22-27, 1997, Atlanta, GA, 1997.

[29] G. Kima, S. Han, H. Yang, and C. Cho. “Body-based Interfaces”. Applied Ergonomics,

35:263–274, 2004.

[30] D. W. F. Krevelen and R. Poelman. “A Survey of Augmented Reality Technologies, Applica-

tions and Limitations”. The International Journal of Virtual Reality, 9:1–20, 2010.

[31] G. P. Kurtenbach, G. W. Fitzmaurice, and R. Balakrishnan. “Three Dimensional Volumetric

Display Input and Output Configurations”. United States Patent-2004 6-753-847, 2004.

[32] T. R. Langerak. “Free-form Feature Recognition and Manipulation to Support Shape De-

sign”. Delft University Press, Delft, the Netherlands, 2008.

[33] K. Langhans, C. Guill, E. Rieper, K. Oltmann, and D. Bahr. “Solid Felix: A Static Volume

3D-Laser Display”. IS&T Reporter - The Window on Imaging, 18:1–8, 2003.

[34] G. A. Lee., G. J. Kim, and C. Park. “Modeling Virtual Object Behavior within Virtual En-

vironment”. Proceedings of the ACM symposium on Virtual reality software and technology-

VRST ’02, November 11-13, 2002, Hong Kong, China, 2002.

[35] M. Lucente. “Computational Holographic Bandwidth Compression”. IBM Systems Journal,

35:349–365, 1996.

[36] W. Malyj, R. E. Smith, and J. M. Horowitz. “Impact of Advances in Microprocessor Archi-

tecture and System Design”. Computer Programs in Biomedicine, 18:149–161, 1984.

[37] J.-C. Martin, C. dAlessandro, C. Jacquemin, B. Katz, and A. Max. “3-D Audiovisual Render-

ing and Real-Time Interactive Control of Expressivity in a Talking Head”. Intelligent Virtual

Agents (IVA) Lecture Notes in Computer Science, 4722:29–26, 2007.

[38] E. Z. Opiyo. “Developing Interfaces for Interactive Product Visualization in Truly 3-D Virtual

Workspaces”. Proceedings of ASME 2011 International Design and Engineering Technical

Conferences & Computers and Information in Engineering Conference (IDETC/CIE 2011),

August 28-31, 2011, Washington, DC, Paper No. DETC2011- 47088, 1:1429–1438, 2011.

[39] E. Z. Opiyo. “Assessing Multimedia Content Adaptation Techniques with a View to Using Heterogeneous Handheld Devices in Performing Product Development Tasks”. Proceedings of ASME 2012 International Mechanical Engineering Congress and Exposition, November 9-15, 2012, Houston, TX, Paper No. IMECE2012-85471, 1:275–288, 2012.

[40] E. Z. Opiyo. “Supporting the Ideation Process and Representation of the Design of a Product as Part of a Real Life Use Process”. Proceedings of ASME 2012 International Mechanical Engineering Congress and Exposition, November 9-15, 2012, Houston, TX, Paper No. IMECE2012-85938, 1, 2012.

[41] E. Z. Opiyo. “Visual Content Adaptation in Context to Facilitate Collaboration in Engineering Design”. CoDesign: International Journal of CoCreation in Design and the Arts, 9:190–205, 2013.

[42] E. Z. Opiyo and I. Horvath. “Exploring the Viability of Holographic Displays for Product Visualization”. Journal of Design Research, 8:169–188, 2010.

[43] E. Z. Opiyo and I. Horvath. “Interface Modes for Interactive Visualization of Airborne Product Virtual Models”. Proceedings of ASME 2010 World Conference on Innovative Virtual Reality, May 12-14, 2010, Ames, IA, Paper No. WINVR2010-3710, 1:241–250, 2010.

[44] E. Z. Opiyo and I. Horvath. “Towards an Interactive Spatial Product Visualization: A Comparative Analysis of Prevailing 3-D Visualization Paradigms”. International Journal of Product Development, 11:4–24, 2010.

[45] E. Z. Opiyo and I. Horvath. “Heterogeneous Remote Visualization Framework for Ubiquitous Product Development Activities”. International Journal of Virtual Reality, 10:57–68, 2011.

[46] E. Z. Opiyo, I. Horvath, and Z. Rusak. “Strategies for Model Simplification and Data Reduction in Holographic Virtual Prototyping and Product Visualization through Application Dependent Model Pre-processing”. Proceedings of ASME 2009 International Design and Engineering Technical Conferences & Computers and Information in Engineering Conference (IDETC/CIE 2009), August 30 - September 2, 2009, San Diego, CA, Paper No. DETC2009-86126, 1:1483–1494, 2009.

[47] E. Z. Opiyo, I. Horvath, and J. S. M. Vergeest. “Quality Assurance of Design Support Software: Review of the State of the Art”. Computers in Industry, 49:195–215, 2003.

[48] E. Z. Opiyo, I. Horvath, and J. S. M. Vergeest. “Extending the Scope of Quality Assurance of CAD Systems: Putting Underlying Engineering Principles, Theories, and Methods on the Spotlight”. Journal of Computing and Information Science in Engineering, 9:1–7, 2009.

[49] F. S. Osorio, S. R. Musse, R. Vieira, M. R. Heinen, and D. C. Paiva. “Increasing Reality in Virtual Reality Applications through Physical and Behavioural Simulation”. Tutorial Book of Virtual Concept, 1:1–45, 2006.

[50] G. Pahl and W. Beitz. Engineering Design: A Systematic Approach. Springer-Verlag, Berlin, 1993.

[51] C. Peng. “In-situ 3-D Concept Design with a Virtual City”. Design Studies, 27:439–455, 2006.

[52] S. Pourtalebi, I. Horvath, and E. Z. Opiyo. “Multi-Aspect Study of Mass Customization in the Context of Cyber-Physical Consumer Durables”. Proceedings of ASME 2013 International Design and Engineering Technical Conferences & Computers and Information in Engineering Conference (IDETC/CIE 2013), August 4-7, 2013, Portland, OR, Paper No. DETC2013-12311, 1:V004T05A006, 2013.

[53] D. Prabu. “News Concreteness and Visual-Verbal Association: Do News Pictures Narrow the Recall Gap between Concrete and Abstract News?”. Human Communication Research, 25:180–201, 1998.

[54] S. Pugh. “Concept Selection - A Method that Works”. Proceedings of the International Conference on Engineering Design (ICED), March 9-13, 1981, Rome, Italy, 49:497–506, 1981.

[55] R. Raskar and K. L. Low. “Interacting with Spatially Augmented Reality”. Proceedings of AFRIGRAPH '01, the 1st International Conference on Computer Graphics, Virtual Reality and Visualisation, November 5-7, 2001, Camps Bay, Cape Town, South Africa, 1, 2001.

[56] N. F. M. Roozenburg and J. Eekels. Product Design: Fundamentals and Methods. John Wiley & Sons, Chichester, 1995.

[57] Z. Rusak, C. Antonya, I. Horvath, and D. Talaba. “Comparing Kinematic and Dynamic Hand Models for Interactive Grasping Simulation”. Proceedings of ASME 2009 International Design and Engineering Technical Conferences & Computers and Information in Engineering Conference (IDETC/CIE 2009), August 30 - September 2, 2009, San Diego, CA, Paper No. DETC2009-86520, 1:1527–1535, 2009.

[58] D. J. Sandin, T. Margolis, J. Ge, J. Girado, T. Peterka, and T. A. DeFanti. “The Varrier™ Autostereoscopic Virtual Reality Display”. ACM Transactions on Graphics (TOG), 24:894–903, 2005.

[59] C. Slinger, C. Cameron, and M. Stanley. “Computer-Generated Holography as a Generic Display Technology”. Computer, 38:46–53, 2005.

[60] Y. Song, J. S. M. Vergeest, I. Horvath, T. Wiegers, and A. Kooijman. “The Framework of a Feature-based Free-form Shape Indexing System”. Proceedings of the Tools and Methods of Competitive Engineering (TMCE) Symposium, April 12-16, 2010, Ancona, Italy, 1:291–302, 2010.

[61] Y. Song, J. S. M. Vergeest, and T. Wiegers. “Identifying Feature Handles of Free-form Shapes”. Proceedings of ASME 2008 International Design and Engineering Technical Conferences & Computers and Information in Engineering Conference (IDETC/CIE 2008), August 3-6, 2008, New York City, NY, Paper No. DETC2008-49438, 1:627–635, 2008.

[62] W. F. van der Vegte. Testing Virtual Use with Scenarios. VSSD, Delft, the Netherlands, 2009.

[63] W. F. van der Vegte and I. Horvath. “Theoretical Underpinning and Prototype Implementation of Scenario Bundle-based Logical Control for Simulation of Human-artifact Interaction”. Computer-Aided Design, 44:791–809, 2012.

[64] E. Varga, I. Horvath, Z. Rusak, and J. Verlinden. “On the Framework of Information Processing in a Hand Motion-based Shape Conceptualization System”. Proceedings of ASME 2005 International Design and Engineering Technical Conferences & Computers and Information in Engineering Conference (IDETC/CIE 2005), September 24-28, 2005, Long Beach, CA, Paper No. DETC2005-84929, 1:1121–1130, 2005.

[65] J. Verlinden, E. Doubrovski, and I. Horvath. “Assessing the Industrial Impact of Interactive Augmented Prototyping on Several Abstraction Levels”. Proceedings of the Tools and Methods of Competitive Engineering (TMCE) Symposium, May 7-11, 2012, Karlsruhe, Germany, 2:1205–1214, 2012.

[66] R. W. Vroom and A. Olieman. “Design Relevance in an Industrial Design Engineering Wiki”. Proceedings of the Tools and Methods of Competitive Engineering (TMCE) Symposium, April 12-16, 2010, Ancona, Italy, 2:1069–1083, 2010.

[67] R. W. Vroom and A. Olieman. “Sharing Relevant Knowledge within Product Development”. International Journal of Product Development, 12:34–52, 2011.

[68] T. Wiegers and J. S. M. Vergeest. “Interaction for Shape Design - Terms Used and their Effectiveness”. Proceedings of the Tools and Methods of Competitive Engineering (TMCE) Symposium, May 7-11, 2012, Karlsruhe, Germany, 1:138–152, 2012.

[69] R. C. Zeleznik, K. P. Herndon, and J. F. Hughes. “SKETCH: An Interface for Sketching 3-D Scenes”. Proceedings of the Computer Graphics Conference (SIGGRAPH 96), August 4-9, 1996, New Orleans, LA, 1:163–170, 1996.