
Int J Adv Manuf Technol (2012) 59:1245–1259
DOI 10.1007/s00170-011-3575-0

ORIGINAL ARTICLE

Tangible user interface of digital products in multi-displays
Jae Yeol Lee & Min Seok Kim & Jae Sung Kim &
Sang Min Lee

Received: 21 February 2011 / Accepted: 7 August 2011 / Published online: 8 September 2011
© Springer-Verlag London Limited 2011

Abstract Early attempts at supporting interaction with digital products for design review were based on CAD and virtual reality (VR) systems. However, it is not easy to build a virtual environment of fine quality or to achieve tangible and natural interactions with VR-based systems, which are expensive and too inflexible to be adapted to typical offices or collaboration rooms. We present a new method for supporting tangible interactions with digital products in immersive and non-immersive multi-display environments with inexpensive and convenient optical tracking. The provided environment is more intuitive and natural, helping participants review digital products through functional behavior modeling and evaluation. Although vision-based image processing has been widely used for interaction tracking, it cannot be used effectively under low illumination, and most collaborations and meetings take place in such conditions. To overcome this problem, the proposed approach utilizes the Wiimote™ for infrared (IR)-based optical tracking and for capturing users' interactions and intents. Thus, users can easily manipulate and evaluate digital products with inexpensive tools called IR tangibles in more natural and user-friendly environments such as large displays, tabletops, and situational displays in typical offices and workspaces. Furthermore, a multi-view manager is suggested to effectively support multiple views of digital products among participants by providing public and private views. We show the effectiveness and usefulness of the proposed approach by demonstrating several implementation results and through a user study.

J. Y. Lee (*) : M. S. Kim
Chonnam National University,
300 Yongbong-dong, Buk-gu,
Gwangju 500-757, South Korea
e-mail:

J. S. Kim : S. M. Lee
KISTI,
Daejeon, South Korea
Keywords Tangible user interface · Human–computer interaction · Multi-display · Wiimote · IR-based optical tracking

1 Introduction
Design review of digital products is required to test their functionalities and characteristics, which can result in higher stability, better maintainability, and fewer potential errors in the products before production. The shortening of
development cycles demands the use of an intelligent
interface for testing human–computer interactions of digital

products and an efficient method for evaluating their
functional behaviors [1]. Many attempts at supporting
digital product design and evaluation were based on
traditional visual environments such as virtual reality (VR)
and cave automatic virtual environment (CAVE). However,
these environments are very expensive and not flexible.
Meanwhile, as displays increase in size and resolution
while decreasing in price, various types of inexpensive
multi-displays will be soon available such as situated
displays or tabletops providing high-resolution visual
output. These large and high-resolution displays will
provide the possibility of working up close with detailed
information in typical office or workplace environments
[2]. For example, when people work collaboratively with a



digital product, its related information is often placed on a
wall or tabletop, where it is easy to view, annotate, and
organize. The information can be rearranged, annotated,
and refined in order to evaluate the functional behavior of
the digital product to solve a problem. For this reason,
multi-displays have been considered for use in collaboration, robotics, engineering, and realistic visualization
and interaction [3–7]. Furthermore, the availability of smart devices and their interactions has increased dramatically over the last decade, which provides new possibilities for interaction techniques such as multi-touch and sensor-based
interactions [8, 9].

Usually, a single-user design task in a multi-display
environment mainly requires the visualization capabilities
of a large display and demands long hours. Similarly, in a
collaborative discussion where users gather around a large
conference room table, various digital contents frequently
need to be displayed on a large screen for others to see.
However, it is not sufficient to simply move existing
graphical user interfaces onto multi-displays. For example,
large displays afford different types of interactions than
workstations or desktops for several key reasons. The large
visual display can be used to work with large quantities of simultaneously visible material. Moreover, interaction happens directly on the screen with a pen-like device or by touch rather than with a keyboard and an indirect pointing device. Also, people often work together at a wall, interweaving social and computer interactions. However, direct manipulation through pointing and clicking is still considered the dominant interaction paradigm in conventional user interfaces [10, 11].
Most of the proposals in previous research works require expensive displays and considerable space [12]. In addition, they cannot be effectively applied to other types of display to interact with digital products in a typical office or workspace. Furthermore, in order to support user-oriented and tangible interactions, vision-based tracking has been widely used. However, vision tracking cannot be used effectively under low illumination conditions, and most collaborations and meetings take place in such conditions. Another related problem with most vision algorithms is the difficulty of segmenting objects under varying lighting conditions and shadows, which also requires a large amount of processing time [13].
This paper presents a new method for supporting

tangible interactions with digital products in immersive and non-immersive multi-display environments with inexpensive and convenient optical tracking. It is adaptable to a large set of multi-displays, such as a projected display, a tabletop, and a situated remote display, to perform multi-touch interactions with digital products. In addition, the proposed approach lets participants review and evaluate the functional behavior of digital products efficiently with infrared (IR) tangibles. To overcome the generic problem of the vision-based image processing approach, the proposed approach utilizes the Wiimote [14] for optical tracking and for capturing users' various interactions and intentions effectively. This approach can support the tangible user interface of digital products in immersive and non-immersive environments where users can easily manipulate and evaluate digital products, providing more effective and user-friendly circumstances. Moreover, the proposed approach can be easily set up in various environments, such as large displays, tabletops, and situational displays in typical offices and workspaces. A multi-view manager is suggested to effectively support multiple views of digital products among participants, providing public and private views of the shared digital product. We show the effectiveness and usefulness of the proposed approach by demonstrating several implementation results and through a user study. Section 2 presents previous work. Section 3 explains tangible user interactions in multi-displays with IR tangibles and IR tracking. Section 4 proposes how to effectively support interactions with digital products in multi-displays for design review. Section 5 presents implementation results. Finally, Section 6 concludes with some remarks.


2 Previous work
Early attempts at supporting interactions with digital products for design review were based on computer-aided design (CAD) and VR systems. Powerful and expensive tools, including stereoscopic display systems, head-mounted displays, data gloves, and haptic devices, have been utilized and combined to construct virtual prototyping systems that provide realistic display of digital products and offer various interaction and evaluation methods [1, 15].
Since it is not easy to build a virtual environment of fine
quality and to acquire tangible and natural interaction with
VR-based systems, many alternative solutions have been
proposed.
Another type of VR known as augmented reality (AR) is
considered to be an excellent user interface. Interacting in
AR environments can provide convincing feedback to the
user by giving the impression of natural interaction since
virtual scenes are superimposed on physical models in a
realistic appearance. Thus, AR is considered to complement
VR by providing an intuitive interface to a 3D information
space embedded within physical reality. Lee et al. [16]
proposed how to provide car maintenance services using
AR in ubiquitous and mobile environments. Christian et al. [17] suggested virtual and mixed reality interfaces for e-learning which can be applied to aircraft maintenance.



Regenbrecht et al. [18] proposed a collaborative augmented
reality system that featured face-to-face communication,
collaborative viewing and manipulation of 3D models, and

seamless access to 3D desktop applications within the
shared 3D space. However, AR depends on marker tracking, so vision-based image processing is used intensively. This is a severe drawback for supporting realistic visualization on a large display or tabletop display, since the image processing deteriorates as the resolution
increases. In addition, it is very difficult to interact directly
with digital objects in AR environments since it is not
convenient and natural to interact with them through
marker-based paddles which are widely used in AR
applications.
Meanwhile, to support effective and natural interactions
with digital objects in various displays, vision-based image
processing techniques have been used [4, 5, 7]. The
approach in [7] tracks a laser pointer and uses it as an
input device which facilitates interactions from a distance.
While the laser pointer provides a very intuitive way to
randomly access any portion of the wall-sized display, the
natural shaking of the human hand makes it difficult to use
for precise target acquisition tasks, particularly for smaller
targets. The VisionWand [19] uses simple computer vision
algorithms to track the colored tips of a simple plastic wand
to interact with large wall displays both close up and from a
distance. A variety of postures and gestures are recognized
in order to perform an array of interactions. A number of
other systems use vision to track bare, unmarked hands
using one or more cameras, with simple hand gestures for
arms-reach interactions [20, 21]. Dynamo was proposed and implemented as a communal multi-user interactive surface [21]. The surface supported the cooperative sharing and exchange of a wide range of media that can be brought to the surface by users outside of their familiar organizational settings.
Recently, mobile devices have been considered as complementary tools for interacting with virtual objects in ubiquitous environments. Much work has attempted to bridge the gap between personal devices and multi-displays. Many studies have tried to augment mobile devices of limited capabilities with enhanced sensing or communication capabilities, such as remote controllers [8, 9]. However, this approach cannot effectively provide visual information to multiple users. To overcome this limitation and integrate the interaction between smartphones and multi-displays, visually controlled interaction has been considered. Little research has dealt with how to effectively support visually controlled views for individual and cooperative interactions in collaborative design review in multi-display and smartphone environments.
Although various ways have been proposed to support
the visualization and evaluation of digital products, more


research is still needed in the following aspects. The
interaction should be more intuitive and natural to help
participants in the digital product design to make a product
of interest more complete and malfunction free before
production. The environment should be available at low
cost without strong restriction of its accessibility and should
be adaptable to various environments and displays. Moreover,
for effective evaluation of the digital product, we need to
define its functional behavior through forms, functions, and
interactions. In this paper, we address these aspects by
proposing a natural interaction approach in multi-displays
using convenient tangible interfaces. Note that the proposed approach is easily adaptable to various environments at low cost and with much convenience.

3 Proposed approach
This section explains how to effectively support tangible interactions with digital products in multi-displays using low-cost IR tracking, which provides much convenience and effectiveness for the design review of digital products. It first gives an overview of the proposed system and then explains the tangible interfaces for directly interacting with digital products.
3.1 System overview
The proposed approach consists of four layers: (1) tangible
interface layer, (2) resource layer, (3) collaboration and
evaluation layer, and (4) visualization layer as shown in
Fig. 1. The tangible interface layer supports tracking of IR
tangibles and interpreting user intent through analyzing IR
tangible inputs. The result is used for natural and direct
interactions with digital products in multi-displays such as large displays, tabletops, and remote displays. For natural user interactions, IR tangibles are devised for direct multi-touch interfaces. To transform the user space to the visualization space, the perspective transform is calculated. The collaboration and evaluation layer manages multiple views among participants and generates graphic scenes adaptable to participants' devices and contexts. In particular, it provides private and public views of the shared digital product among participants. Each view is systematically generated by the multi-view manager, which controls all the views of participants and generates adaptive views considering the display context and the user's preferences. To support
the design review of digital products, the finite-state
machine (FSM)-based functional model is linked to the
actions that occur during the interaction with digital
products [1, 22]. According to the user's actions and functional evaluation, the adaptive rendering of the digital product is executed and the generated scene is sent to the reviewer. In addition, according to the actions related to the FSM, the system loads and renders corresponding virtual objects or guides users to manipulate them on multi-displays. All the necessary digital models and functional models are stored in the resource layer. Thus, participants can use multi-displays for collaborative and private interactions for the design review and discussion.

Fig. 1 System overview

Fig. 2 Overall process for tangible interactions with digital products in multi-displays

Fig. 3 Wiimote and infrared camera: a Wiimote, b IR camera module in Wiimote
The overall process involves three stages: (1) digital product design, (2) tangible interaction in multi-displays, and (3) collaboration and evaluation. In the digital product design stage, the product designer creates a product model using a commercial CAD system and considers the design specifications and customers' requirements that correspond to the overall functional behavior and interface of the product. However, this consideration is limited and subjective, and therefore cannot guarantee the complete functional evaluation of the digital product.

When the product design is completed, it is used to construct the virtual product model in the interaction stage. The tangible digital model consists of geometry, assembly relations, and other attributes for visualization and interaction. In addition, its functional model and related multimedia contents are generated. Eventually, the multimedia contents will be overlaid onto the virtual model in multi-display environments. The FSM-based functional model is linked to the actions that occur during the interaction with the product model. Each action can be linked to multimedia content visualization and interaction such as menu manipulation, movie and music playing, and color change.
Then, participants can evaluate and simulate the functional behavior of the digital product by tangible interaction in multi-displays at the collaboration and evaluation stage. To support a new interface that can directly manipulate virtual objects in a multi-touch manner, IR tangibles are provided for cost-effective and convenient interactions. Two Wiimotes are used to track IR tangibles quickly and robustly under low illumination conditions. Moreover, multiple views are generated, since participants have different views as well as a common view of the digital product. To evaluate the functional behavior of the product, an FSM is embedded into the tangible interaction and visualization. The concrete execution according to each action or activity is conducted during the functional simulation. Finally, participants from different areas share their ideas and collaborate to find design problems and revise the overall shape and its functional behavior according to the result of the design review.
Figure 2 shows a tangible interaction with a digital mobile phone and its functional evaluation in multi-displays based on the above overall process. Firstly, the designer designs a digital product of a new mobile phone. Then, its functional behavior is modeled with an FSM, and communications between the digital model and the FSM are made by IR tangible-based interaction. Each interaction plays a role in linking an action in the FSM to the actual action in the digital model. Finally, its virtual model is displayed in multi-displays as a private view or common view, and thus the user can easily evaluate its functional behavior as well as the appearance of the prototype.

Fig. 4 IR tangible interfaces: a tangible wand, b tangible cube, c tangible ring

Fig. 5 Multi-displays and tangible interactions: a large display: direct control, b tabletop, c large display: remote control

During
the evaluation, corresponding virtual objects and multimedia contents are overlaid on the digital phone model to help
the evaluation. When the collaboration space is synchronized, participants perform collaboration and evaluate the functional properties using tangible interfaces. Normally, the space shares the visualization model, the functional model, and the related multimedia contents. Thus, the proposed tangible interface and visualization give participants a more touchable and tangible experience compared to existing virtual model visualization and simulation [15, 23, 24].
3.2 IR tangibles and multi-displays
The Wiimote, shown in Fig. 3, plays the main role in the efficient and robust tracking of IR tangibles in multi-display environments. It integrates a built-in infrared camera with on-chip processing and accelerometers, and supports Bluetooth communication. This characteristic makes it possible
to communicate with external hardware that supports
Bluetooth, while several open-source libraries are available
to capture and process the derived information. In particular, the proposed approach is very flexible since it is easily
adaptable to various displays such as large displays,
tabletops, desktops, and situated displays. Moreover, it is
robust in a low illumination condition where most of the
collaborations and discussions occur, since it utilizes
infrared optical tracking (rather than vision-based tracking), which has the advantage of an enhanced sense of presence and increased interaction among participants. Furthermore, the environment setup is quite simple, as it is sufficient to mount two Wiimotes on a portable support in front of multi-displays in a typical office or workplace. The information derived from the Wiimote camera is used to track an IR tangible and to generate graphics corresponding to the movements of the user. These data are successively shared across multi-display environments.

Fig. 6 Interfaces for remote control: a reflective tape, b LED array
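As a concrete illustration, the following minimal sketch reads the IR blob data from a Wiimote over Bluetooth using the open-source cwiid library on Linux. The paper only says that "several open-source libraries are available", so this library choice and the function name are assumptions, not the authors' implementation:

```python
import cwiid

# Connect over Bluetooth (press buttons 1+2 on the Wiimote to make it discoverable).
wiimote = cwiid.Wiimote()
wiimote.rpt_mode = cwiid.RPT_IR  # ask the Wiimote to report IR blob positions

def read_ir_points(wm):
    """Return (x, y) pixel coordinates of the IR blobs currently visible.

    The Wiimote camera reports up to four blobs on a 1,024 x 768 grid;
    slots with no visible blob are None."""
    return [src['pos'] for src in wm.state['ir_src'] if src is not None]

while True:
    points = read_ir_points(wiimote)
    if points:
        print(points)  # e.g. [(512, 384)] -> fed into the perspective transform below
```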
An IR tangible can be easily created from IR light-emitting diodes (LEDs), or alternatively by shining IR light generated by an LED array onto a reflective marker attached to the participant's hand. Figure 4 shows tangible interfaces made with IR LEDs, which can be used effectively depending on the type of display and application. The 3D coordinates of the IR tangible are calculated by a stereo vision technique. Using a real-time optical tracking algorithm that simultaneously tracks multiple IR tangibles, we can explore techniques that allow direct manipulation on multi-displays using multi-touch gestures.
By pointing a Wiimote at a projection screen or large
display, we can create different types of interactive multi-displays, as shown in Fig. 5. Since the Wiimote can track up
to four points, up to four pens can be used. In particular,
using reflective tape and the LED array shown in Fig. 6a and
b, we can control and interact with digital products remotely.
This allows us to interact with various applications simply by
waving one's hands in the air, similar to the interaction shown in Fig. 5c. Figure 5 demonstrates a variety of interaction techniques that exploit the affordability of the proposed approach, resulting in effective multi-displays such as large displays, tabletops, and remote displays. There are also circumstances where users cannot easily approach the display and can interact only from a distance. Our work also investigates techniques for pointing and clicking from a distance using the proposed approach, as shown in Fig. 5c. This eliminates issues related to acquiring a physical input device and transitions very fluidly to up-close touch-screen interaction.

To support interactions with digital products through IR tangible interfaces, we need to convert the coordinates of the IR tangibles into the coordinates in the computer that actually manipulates objects. For example, the coordinate (x, y) from the IR LED should be mapped to the coordinate (X, Y) in the virtual world of digital products by the perspective transform, as shown in Fig. 7. In other words, we need to find a transform that maps one arbitrary 2D quadrilateral into another [25].

Fig. 7 Perspective transform

A property of the perspective transform is its ability to map straight lines to straight lines. Thus, given the coordinates of the four corners of the first quadrilateral and the coordinates of the four corners of the second quadrilateral, the task is to compute the perspective transform that maps a point in the first quadrilateral onto the appropriate position in the second quadrilateral. Let us assume that the perspective transform is written as X = Hx, where x is the vector of multi-display coordinates and X is the vector of the virtual world of digital products. We can write this form in more detail as:

$$\begin{bmatrix} XW \\ YW \\ W \end{bmatrix} = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad \text{where } W = gx + hy + 1$$

We can rewrite the above equations as follows:

$$X = \frac{ax + by + c}{gx + hy + 1}, \qquad Y = \frac{dx + ey + f}{gx + hy + 1}$$

$$X = ax + by + c - gxX - hyX, \qquad Y = dx + ey + f - gxY - hyY$$

Since H contains eight unknown variables, we need four points that are already known. We need a calibration step that obtains from the user, before interaction, four points mapping (x, y) into (X, Y) so that we can find all the unknown variables in H as follows:
$$\begin{bmatrix} X_1 \\ Y_1 \\ X_2 \\ Y_2 \\ X_3 \\ Y_3 \\ X_4 \\ Y_4 \end{bmatrix} = \begin{bmatrix} x_1 & y_1 & 1 & 0 & 0 & 0 & -x_1 X_1 & -y_1 X_1 \\ 0 & 0 & 0 & x_1 & y_1 & 1 & -x_1 Y_1 & -y_1 Y_1 \\ x_2 & y_2 & 1 & 0 & 0 & 0 & -x_2 X_2 & -y_2 X_2 \\ 0 & 0 & 0 & x_2 & y_2 & 1 & -x_2 Y_2 & -y_2 Y_2 \\ x_3 & y_3 & 1 & 0 & 0 & 0 & -x_3 X_3 & -y_3 X_3 \\ 0 & 0 & 0 & x_3 & y_3 & 1 & -x_3 Y_3 & -y_3 Y_3 \\ x_4 & y_4 & 1 & 0 & 0 & 0 & -x_4 X_4 & -y_4 X_4 \\ 0 & 0 & 0 & x_4 & y_4 & 1 & -x_4 Y_4 & -y_4 Y_4 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \\ d \\ e \\ f \\ g \\ h \end{bmatrix}$$
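For illustration, this calibration step can be implemented directly. The following is a hypothetical sketch in Python/NumPy (the paper cites Matlab [25]; the function and variable names here are our own) that solves the 8×8 system for H from four point correspondences and then maps new points:

```python
import numpy as np

def calibrate_homography(display_pts, world_pts):
    """Solve for H = [[a,b,c],[d,e,f],[g,h,1]] from four (x,y) -> (X,Y) pairs."""
    A, rhs = [], []
    for (x, y), (X, Y) in zip(display_pts, world_pts):
        A.append([x, y, 1, 0, 0, 0, -x * X, -y * X]); rhs.append(X)
        A.append([0, 0, 0, x, y, 1, -x * Y, -y * Y]); rhs.append(Y)
    a, b, c, d, e, f, g, h = np.linalg.solve(np.array(A, float), np.array(rhs, float))
    return np.array([[a, b, c], [d, e, f], [g, h, 1.0]])

def map_point(H, x, y):
    """Apply the perspective transform: (XW, YW, W) = H (x, y, 1)."""
    XW, YW, W = H @ np.array([x, y, 1.0])
    return XW / W, YW / W
```

The four correspondences are collected once before interaction, for example by asking the user to touch four known calibration targets; every tracked IR coordinate is then passed through map_point.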

3.3 User interfaces with IR tangibles

The user manipulates multiple IR tangibles to select, move, rotate, and scale multimedia and 3D digital products. The system traces the location of the IR tangibles while they are being moved, as shown in Fig. 8.

Fig. 8 Tangible user interfaces using IR tangibles: a select, b move, c rotate, d scale

- Select: when the IR tangible turns on from the off status and its location is close to the display, the user is considered to be selecting an object.
- Rotate: when one of the two IR tangibles rotates around the other, the selected object is rotated.
- Scale: when the distance between the two IR tangibles grows, the selected object is zoomed out; when it becomes smaller, the object is zoomed in.
- Translate: when the IR tangible that selected an object moves, the selected object moves along with it.
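A minimal sketch of this gesture mapping, assuming two IR points are tracked per frame (the function name and return convention are ours, not the paper's):

```python
import math

def two_point_gesture(p_prev, q_prev, p, q):
    """Derive rotate/scale parameters from the motion of two IR tangibles."""
    dist = lambda a, b: math.hypot(b[0] - a[0], b[1] - a[1])
    angle = lambda a, b: math.atan2(b[1] - a[1], b[0] - a[0])
    scale = dist(p, q) / dist(p_prev, q_prev)       # ratio of tangible separation
    rotation = angle(p, q) - angle(p_prev, q_prev)  # one tangible circling the other
    return rotation, scale  # scale > 1: tangibles moved apart; < 1: moved closer
```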

To support the design review and functional evaluation of a digital product, visualization information as well as its functional model and related multimedia contents are generated and visualized. The multimedia contents are overlaid onto the digital product on multi-displays. To evaluate the functional behavior of the product, an FSM is embedded into the tangible interaction and visualization. The concrete execution according to each action or activity is conducted during the functional simulation. Finally, participants from different areas share their ideas and collaborate to find design problems and revise the overall shape and its functional behavior according to the result of the design review. As shown in Fig. 9, interactions include playing multimedia and changing attributes of a digital product, such as its color or texture. Based on the above manipulation operators, the user can perform design review tasks effectively in immersive and non-immersive multi-display environments.

Fig. 9 Tangible user interface of digital products: a rotating, b changing attributes of digital product, c playing multi-media on the digital product


4 Collaborative design review of digital products
in multi-displays
This section explains how the proposed approach can be
utilized for the effective design review of digital products
among participants in multi-displays. The user can manipulate multiple IR tangibles in front of multi-displays for
interacting with digital products. The interaction is analyzed
and fed into a design view and visualization of digital
products in a large display, tabletop, and remote display.
Fig. 10 State transition chart: a concept of state transition, b state transition of a mobile device

4.1 Functional evaluation
During the design review, modeling and simulation of
digital products are essential to test their functionalities and
characteristics, which can result in higher stability, better
maintainability, and less potential errors of the products
before manufacturing [23, 24]. To effectively support the
design review and collaboration in co-location, VR and AR
have been widely used. However, there is no cost-effective
way to support a tangible user interface of digital products in
multi-displays because most of the previous research work
requires expensive VR systems and inflexible visualization
environments [1]. For this reason, we adopt FSM to simulate
the functional behavior of a digital product [22].
Every digital product has part components that are involved in the interaction between the user and the product. These include switches, buttons, sliders, indicators, displays, timers, and speakers. They are called objects, and they make up the basic building blocks of the functional simulation. Every object has a pre-defined set of properties and functions that describe everything it can do in a real-time situation. The overall behavior of a digital product can be broken down into separate units of behavior, which are called states. The state of the product can be changed to another state as shown in Fig. 10a [15]. This is called state transition. Every state transition is triggered by one or more
events associated with it. Some tasks called actions can be
performed before transition to a new state. In order to
define the actual behavior of the product, all the tasks
performed in each state are specified. These tasks are called
activities. They only occur when their state becomes active.
Actions and activities are constructed using the objects'
properties and functions. Each action or activity consists of
a set of statements. Each statement can be the assignment of

Fig. 12 Interaction process for the design review in multi-displays

some value to a variable, the calling of a function of an object,
or a composite statement with a conditional statement. The
functional behavior model for tangible interactions is used to

generate a state transition chart, which represents all the states
and the possible state transitions between them. Figure 10b
shows a state-transition chart for a mobile phone [1].
When the user creates an input event using IR tangibles
in multi-displays, the proposed approach checks whether or
not the event is related to the functional behavior of the



product. If so, the FSM module refers to the functional
behavior model of the product and determines if the event
triggers state transition. If the state transition is confirmed,
the FSM module quits the activities of the current state,
changes the state to a new one, and starts the activities of
the new state. Otherwise, it keeps conducting the activities
of the current state. These actions and activities include
tasks such as changing the position and orientation of the
components of the digital product, embedding multimedia into the digital product, and playing the multimedia. The execution of the actions and activities yields state-specific visual and auditory data.

Fig. 13 Tangible and direct interactions of digital products in projected display
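As a sketch of how such an FSM can be linked to interaction events, consider the following minimal statechart for a digital product. The states, events, and actions are hypothetical placeholders, not the paper's actual model:

```python
class ProductFSM:
    """Minimal FSM: (state, event) -> (next state, action run on entry)."""

    def __init__(self):
        self.state = 'off'
        self.transitions = {
            ('off', 'power_button'):     ('idle',    lambda: print('render home screen')),
            ('idle', 'menu_button'):     ('menu',    lambda: print('overlay menu contents')),
            ('menu', 'play_music'):      ('playing', lambda: print('embed and play media')),
            ('playing', 'power_button'): ('off',     lambda: print('stop media, dim display')),
        }

    def handle(self, event):
        key = (self.state, event)
        if key in self.transitions:        # the event triggers a state transition
            self.state, action = self.transitions[key]
            action()                       # run the new state's entry action
        # otherwise keep conducting the activities of the current state

fsm = ProductFSM()
fsm.handle('power_button')  # e.g. an IR tangible 'select' on the power button -> idle
```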
4.2 Multi-view management
The collaborative or public interaction is considered synchronized interaction, whereas the individual or private interaction is considered asynchronous interaction. The collaborative view is generated to synchronize all the views of multiple users. On the other hand, the individual view is generated to provide a specific view to a specific user who wants to perform a private action on the shared space. The concept of both interactions is very useful for individual and multi-user interactions.
Our approach allows different displays to be involved in the collaborative design review and interaction. To support the multi-visualization interface, the system internally manages all the views of participants and generates individual views, each of which corresponds to the private view of a user, as well as the public view, which can be shared among them, as
shown in Fig. 11. For collaborative design review and sharing, the system renders each private view based on the scene graph of 3D objects and multimedia data. When an individual interaction occurs, the system re-renders the specific view of the scene and transmits it to the corresponding user interacting in the private space. On the other hand, when a cooperative interaction occurs in the public space, the scene is sent to all users to keep them synchronized in the public view.

Fig. 11 Multi-view manager
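The view-dispatch logic just described might look like the following sketch. The class and method names are ours, and dictionaries stand in for the scene graphs the actual system renders:

```python
class MultiViewManager:
    """Sketch: one synchronized public view plus per-user private views."""

    def __init__(self, scene):
        self.public_scene = scene     # shared scene description
        self.private_scenes = {}      # user_id -> private copy of the scene

    def interact(self, user_id, edit, private=False):
        """Apply an edit; return (user, scene) pairs to re-render and send."""
        if private:  # asynchronous, individual interaction
            scene = self.private_scenes.setdefault(user_id, dict(self.public_scene))
            edit(scene)
            return [(user_id, scene)]                       # send to this user only
        edit(self.public_scene)  # synchronized, cooperative interaction
        users = set(self.private_scenes) | {user_id}
        return [(uid, self.public_scene) for uid in users]  # broadcast to stay in sync
```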
4.3 Tangible interaction process
Through the combination of the output capabilities of multi-displays, participants can share digital products and perform the design review. In particular, this approach has the potential to realize new interaction techniques between multi-displays. Figure 12 shows the overall process of tangible and natural interactions in multi-displays.

Fig. 12 Interaction process for the design review in multi-displays

According to the generated events, the proposed approach analyzes
them and evaluates the functional behavior of the corresponding digital product. Note that the interactive behavior of the digital product is defined through the three aspects of form, function, and interaction to effectively evaluate its functional behavior. In particular, a remote display plays two roles: remote controller and augmented visualizer. As a remote controller, the display provides a set of icons and menus, each of which generates an event for interacting with a digital product in a shared multi-display. On the other hand, as an augmented visualizer, the remote display provides the same view as that in the shared multi-display such that the user can directly manipulate the digital model. The multi-view manager generates an adaptive view regarding the capability
of the display. Whenever the user touches and performs an action, an event is sent to the multi-view manager, which analyzes the event and evaluates it with respect to the functional behavior of the model. Finally, the adaptive view is created and sent to the user. In particular, the multi-view manager maintains all the different views among participants to provide private and public views.

Fig. 14 Tangible and direct interactions of digital products in tabletop (LG LCD TV)

Fig. 15 Tangible and remote interactions in large displays

5 System implementation and evaluation

This section explains how the proposed approach can support the tangible user interface of digital products in immersive and non-immersive multi-displays with cost-effective, robust, and efficient optical tracking. To illustrate the benefits of the proposed approach, we present several implementation results applied to multi-displays. These multi-displays can be easily set up in typical office environments with a projector, TV, and Wiimote. Furthermore, we present a qualitative usability study which confirms the effectiveness and convenience of the proposed approach.
5.1 System implementation
We will show several case studies to demonstrate the
visualization and review of digital products using simple
but robust IR tangible interactions in multi-displays. In this

research, OpenSceneGraph [26] is used to support the
realistic rendering of digital products. Figure 13 shows the
tangible user interface of digital products in a projected
large display. Two Wiimotes are mounted at a fixed location, while a user moves a set of IR tangibles in front of a display. The IR tangible is captured by both Wiimote cameras, and this information is transmitted to the virtual world of digital products, which runs immersive and non-immersive applications on various multi-displays. Figure 13 shows how to interact with the digital product of a smartphone. The user can change its color or run various multimedia through the tangible user interface. Figure 14 shows a similar environment, but it runs on a tabletop display. In this case, the Wiimote is mounted on the ceiling of the office or workspace.
Figure 15 shows a remote interaction with digital products. Using an LED array and some reflective tape, the user can track objects such as fingers in 2D space. This makes it possible to interact with digital products by waving one's hands in the air. Figure 16 demonstrates another tangible user interaction with the digital products of a car and another smartphone. As the figure shows, it is possible to use multiple IR tangibles to interact with the digital model. The user can rotate the digital model or multimedia data with one or two IR tangibles, as shown in Fig. 16a. Similarly, the user can zoom in
or out with two IR tangibles as shown in Fig. 16b. These

interactions show the ease-of-use, tangibility, and cost effectiveness of the proposed approach.

Fig. 16 Tangible user interface: a rotating the digital product of the car and multi-media with one or two IR tangibles, b scaling the size of a car and multi-media with two IR tangibles
Figure 17 shows how participants can collaborate with
each other in different multi-displays. The multi-view
manager plays the main role in managing all the public
and private views of the participants and generating
adaptive scenes considering the user's device context. According to the type of interaction, the result is automatically updated on the participants' displays.

Fig. 17 Collaborative design review and view management
One of the main reasons to use the Wiimote for interactions is that each Wiimote contains a 1,024×768 infrared camera with built-in hardware blob tracking of up to four points at 100 Hz, so the tracking is very fast and effective compared with vision-based image processing and significantly outperforms any webcam
available today. The 2D coordinates of the IR tangible as seen by each Wiimote camera are mapped by the perspective transform. However, the transformed coordinate does not carry depth information; it is simply a 2D transformed coordinate rather than a 3D coordinate. For this reason, we need to find the depth value, and two Wiimotes are therefore used. In this paper, a standard stereo-vision technique is applied to find the depth of the IR tangible in 3D, as shown in Fig. 18, where the Z-axis is the direction towards which the cameras are pointing, T is the distance between the cameras, f is the focal length, and x_l and x_r are the x coordinates of the IR source on the left and right view planes, respectively [27]:
$$\frac{T - (x_l - x_r)}{Z - f} = \frac{T}{Z} \;\Rightarrow\; Z = \frac{fT}{x_l - x_r}$$
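In code, this depth recovery amounts to one line per tracked point. A sketch, with parameter names assumed (f and T must be in consistent units after calibration, and the cameras are assumed rectified and parallel):

```python
def depth(x_left, x_right, f, T):
    """Z = f*T / (x_l - x_r) for rectified, parallel stereo cameras."""
    disparity = x_left - x_right
    if disparity <= 0:
        return float('inf')  # point effectively at infinity, or a matching error
    return f * T / disparity
```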

To evaluate the accuracy of the depth value obtained with two Wiimotes, we measured real data and calculated the depth values. As shown in Fig. 19, we measured values at 1-m (1,000-mm) and 2-m distances. The mean values are 1,001.547 and 2,007.15 mm, and the standard deviations are 12.12 and 14.76 mm. We found that the accuracy makes it possible to utilize the Wiimote for tangible interactions. As a further test, we need to find better configurations that minimize errors by changing the distance between the two Wiimotes. Furthermore, it is necessary to calibrate the Wiimotes [25, 27].

5.2 User study
We performed a qualitative user study of the proposed
approach. The 12 participants were given a short introduction and performed several tasks in multi-displays. After
following the introduction and performing the tasks shown
in Figs. 13, 14, 15, and 16, a questionnaire was given that
included questions concerning ease-of-use, tangibility, and
usability as shown in Table 1. All responses were scored on
a 5-point scale (ranging from “5 strongly agree” to “1
strongly disagree”) with some comments.
The data collected were analyzed with the Statistical Package for the Social Sciences (SPSS™). Firstly, we utilized Cronbach's alpha, which is a coefficient of reliability. It is commonly used as a measure of the internal consistency or reliability of the statements in a questionnaire [28]. It is generally agreed that a questionnaire is internally reliable if Cronbach's alpha α>0.7. The calculated Cronbach's alpha was α=0.841, which implies that the suggested statements in the questionnaire are consistent. In addition, the mean, standard deviation, and significance were collected from the responses to analyze the usability of each statement (Fig. 20). A t test was applied to analyze the participants'

Fig. 18 Stereo vision for finding the depth value of P (T is the baseline between the camera centers O_l and O_r; D = x_l − x_r is the disparity)


Fig. 19 Evaluation of the accuracy of the depth value with two Wiimotes (histograms of measured depth: at 1 m, mean 1,001.547 mm, SD 12.12 mm; at 2 m, mean 2,007.15 mm, SD 14.76 mm)

responses. A significance level of p<0.05 was adopted for the study, and the analysis showed that all the statements satisfy this significance level.
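For reference, Cronbach's alpha can be computed directly from the raw questionnaire matrix as follows. This is a sketch of the standard formula; the paper used SPSS rather than this code:

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: participants x statements matrix of 5-point ratings."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of statements (here, 8)
    item_vars = scores.var(axis=0, ddof=1)       # variance of each statement
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)
```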
Regarding the user study, we found that most of the participants were satisfied with the tangible user interface of the proposed approach in a typical office environment, since they could easily interact with 3D virtual objects and multimedia with multi-touch on a wall, tabletop, and even desktop in an immersive and intuitive way. In particular, they do not need an expensive VR room or tabletop, since two Wiimotes set up in front of them are sufficient. They also noted the ease of the selecting, scaling, translating, and zooming operations. However, they had difficulty in selecting menus when the display is large and direct manipulation is needed. Note that this type of problem can be solved when remote interaction is allowed.
Through the system implementation and user study, we expect that the proposed approach presents a new form of tangible user interface that can be effectively used in design review and collaboration, even by casual users. The proposed approach extends tangible interfaces to enable interaction with different combinations of multi-displays with an easy setup of cost-effective and flexible Wiimote-based optical tracking.

Table 1 Questionnaire statements

S1: It is easy to apply tangible user interactions in multi-displays.
S2: It is easy to select menu items.
S3: It is easy to perform multi-touch tasks.
S4: It is easy to translate digital products in multi-displays.
S5: It is easy to rotate digital products in multi-displays.
S6: It is easy to scale digital products in multi-displays.
S7: It is easy to share multi-media data among other participants for collaboration.
S8: It is easy to understand the intent of other participants.

Fig. 20 Analysis of each statement for usability

6 Conclusion

We presented a new method for supporting tangible interactions in immersive and non-immersive environments that involves inexpensive optical tracking. The provided environment is more intuitive and natural, helping participants review digital products through functional behavior modeling and evaluation. Although vision-based tracking is widely used for detecting motions and interactions, the most significant problem for vision algorithms is darkness, so darkening a conference room is not an option. However, most of the collaborations and meetings
occur in rooms that have low illumination conditions, and vision-based tracking is therefore not adequate for various multi-display interactions. Another problem is that most multi-displays are expensive and inflexible, and they further require special devices. For these reasons, they are not suitable for typical office or workspace environments, where participants need to easily take part in sharing ideas, reviewing digital products, and collaborating with each other. To overcome these problems, the proposed approach utilizes the Wiimote for optical tracking and for capturing users' various interactions and intents. In addition, it can be set up without difficulty in environments such as large displays, tabletops, and situational displays, depending on circumstances. Furthermore, the multi-view manager is suggested to effectively support multi-views
of digital products among participants by providing public and private views. Through the system implementation and user study, we expect that the proposed approach presents a new form of tangible user interface that can be effectively used in design review and collaboration to share, manipulate, and visualize 3D and multimedia data, even by casual users.
Note that the proposed approach addresses the following aspects by providing a natural interaction approach in multi-displays using convenient tangible interfaces. The interaction should be more intuitive and natural to help participants in the digital product design. The environment should be available at low cost, without strong restrictions on its accessibility, and should be adaptable to various environments and displays. Moreover, for effective evaluation of the digital product, it is necessary to define its functional behavior through forms, functions, and interactions. Some future research work still needs to be considered. We need to devise a sophisticated way to support distributed collaboration and design evaluation, and we are further investigating how to support these services using smartphones.
Acknowledgments This research was supported by Basic Science
Research Program through the National Research Foundation of
Korea (NRF) funded by the Ministry of Education, Science and
Technology (2009–0069050). This research was also supported by
Platform Technology of Design Supporting for c-MES (10033162) by
the Ministry of Knowledge Economy.

References
1. Lee JY, Rhee GW, Park H (2009) AR/RP-based tangible
interactions for collaborative design evaluation of digital products.
Int J Adv Manuf Tech 45:649–665
2. Buxton W, Fitzmaurice G, Balakrishnan R, Kurtenbach G (2000)
Large displays in automotive design. IEEE Comput Graph Appl
20:68–75


1259
3. Robertson G, Czerwinski M, Baudisch P, Meyers B, Robbins D,
Smith G, Tan D (2005) The large-display user experience. IEEE
Comput Graph Appl 25:44–51
4. Ni T, Schmidt GS, Staadt OG, Livingston MA, Ball R, May R
(2006) A survey of large high-resolution display technologies,
techniques, and applications. Proc IEEE Conf Virtual Reality:
223–236
5. Bowman DA, Kruijff E, LaViola JJ, Poupyrev I (2004) 3D user
interface: theory and practice. Addison-Wesley Professional
6. Kato J, Sakamoto D, Inami M, Igarashi T (2009) Multi-touch
interface for controlling multiple mobile robots. Proc CHI
2009:3443–3448
7. Davis J, Chen X (2002) Lumipoint: multi-user laser-based
interaction on large tiled displays. Displays 23:205–211
8. Hardy R, Rukzio E (2008) Touch & interact: touch-based
interaction of mobile phones with displays. Proc Mobile HCI:
245–254
9. Pears N, Jackson DG (2009) Smart phone interaction with
registered displays. IEEE Pervasive Computing 8:14–21
10. Vogel D, Balakrishnan R (2005) Distant freehand pointing and
clicking on very large, high resolution displays. Proc UIST
2005:33–42
11. Malik S, Ranjan A, Balakrishnan R (2005) Interacting with large
displays from a distance with vision-tracked multi-finger gestural
input. Proc UIST 2005:43–52
12. Murgia A, Wolff R, Sharkey PM, Clark B (2008) Low-cost optical tracking for immersive collaboration in the CAVE using the Wii remote. Proc ICDVRAT 2008:103–109
13. Chow Y-W (2008) The Wii remote as an input device for 3D interaction in immersive head-mounted display virtual reality. Proc IADIS Int Conference Gaming 2008:85–92
14. Wiimote project
15. Park H, Moon H-C, Lee JY (2009) Tangible augmented prototyping of digital handheld products. Comput Ind 60:1873–1882
16. Lee JY, Rhee GW (2008) Context aware adaptable ubiquitous car
services using augmented reality. Int J Adv Manuf Technol 37:431–442
17. Christian J, Krieger H, Holzinger A, Behringer R (2007) Virtual
and mixed reality interfaces for e-learning: examples of applications
in light aircraft maintenance. Lect Notes Comput Sci 4556:520–529
18. Regenbrecht HT, Wagner MT, Baratoff G (2002) MagicMeeting: a
collaborative tangible augmented reality system. Virt Reality
6:151–166
19. Cao X, Balakrishnan R (2003) VisionWand: interaction techniques for large displays using a passive wand tracked in 3D. Proc
UIST 2003:173–182
20. Davidson DL, Han JY (2006) Synthesis and control on large scale
multi-touch sensing displays. Proc. of the 2006 International Conf.
on New Interfaces for Musical Expression: 216–219
21. Izadi S, Brignull H, Rodden T, Rogers Y, Underwood M (2003)
Dynamo: a public interactive surface supporting the cooperative
sharing and exchange of media. Proc UIST 2003:159–168
22. Harel D (1987) Statecharts: a visual formalism for complex
systems. Sci Comput Program 8:231–274
23. Antonya C, Talana D (2007) Design evaluation and modification
of mechanical systems in virtual environment. Virt Reality
11:275–285
24. Regenbrecht H, Haller M, Hauber J, Billinghurst M (2006)
Carpeno: interfacing remote collaborative virtual environments
with table-top interaction. Virt Reality 10:95–107
25. Matlab
26. OpenSceneGraph
27. Bradski G, Kaehler A (2008) Learning OpenCV: computer vision with the OpenCV library. O'Reilly

28. Cronbach's alpha

