
Comput. & Graphics, Vol. 22, No. 2–3, pp. 281–300, 1998
© 1998 Elsevier Science Ltd. All rights reserved
Printed in Great Britain
0097-8493/98 $19.00 + 0.00
PII: S0097-8493(98)00038-7

Technical Section

DESIGN AND SIMULATION OF INTERACTIVE 3D
COMPUTER GAMES
KAMEN KANEV¹† and TOMOYUKI SUGIYAMA²
¹Visual Science Laboratory, Inc., Ochanomizu-Kyoun Building, 2-2 Kanda-Surugadai, Chiyoda-ku, Tokyo, 101 Japan
²Digital Hollywood Corp., Ochanomizu-Kyoun Building, 2-2 Kanda-Surugadai, Chiyoda-ku, Tokyo, 101 Japan
† Corresponding author.

Abstract—Design and development of attractive and competitive computer games is no longer a one-man task, but a complex multistage process with many participants. Discovering new game ideas and their further development, game world and character design and modeling, game evaluation and testing—all these are conducted by specialists teamed to work together. In this paper we discuss tools and facilities supporting the collaborative game design and development process through rapid prototyping and simulation of 3D game worlds, characters, behaviors and other game functionality. Single player and multi-player games are addressed in the context of different hardware platforms and software approaches. We report our experience in building a Game Design and Simulation testbed environment (GDS) and its usage in location-based entertainment projects. Work on GDS has been carried out in the scope of the VirtuaFly project and during the development of the physical motion based commercial game VirtuaFly2. © 1998 Elsevier Science Ltd. All rights reserved.
Key words: computer game simulations, virtual reality (VR) games, 3D shared game worlds, networked virtual environments (VE), distributed interactive simulation (DIS).


1. INTRODUCTION

Dedicated game hardware, being crafted for playing games and not for software development, offers very little to facilitate game prototyping, presentation, redesign, testing and gathering of experimental data. On the other hand, computer game developers have always been striving to push the available game hardware to the limits of its sustainable performance. Specialized game development toolkits and dedicated software and hardware environments have been utilized for achieving this goal. Most such facilities are platform dependent and, while very useful at the game implementation stage, usually offer little help at the game design and prototyping stages. In contrast, higher level tools which are well suited for game design and prototyping tend to be more platform independent, but with limited real-time performance.
In this work we discuss a game design and simulation testbed environment (GDS) for supporting
3D game design, prototyping and evaluation. It
could be used for game prototyping and evaluation
of design ideas for a wide range of computer
games, including single player, multiplayer and networked games. We are aiming to facilitate game design and evaluation not only for dedicated
platforms such as game consoles, but also for personal computers and other general purpose computer systems. To ensure adequate game simulation
and real-time performance over a wide range of platforms, we need scalable software which would allow us to bring in as much computing and visualization power as needed. Design and simulation of games with low computing and/or graphics demands should be possible on affordable, low-range general purpose computer systems. For more demanding game simulations, more powerful computer systems would be needed, such as those with multiple CPUs and graphics engines. The game simulation software should be capable of making efficient use of both the limited resources of low-end computer systems and the full computing and graphics power of high-end multiprocessor systems.
Another important and highly desirable feature is efficient handling and simulation of multiplayer networked games. To some extent, multiplayer networked games could be simulated on a single, sufficiently powerful general purpose computer graphics system. Such an approach, though, has many limitations and could hardly compete with the networked distributed simulations widely available nowadays. Therefore, distributed networking capabilities should be incorporated in the game design and simulation testbed environment. This should make it possible to distribute the simulation computation and visualization tasks over multiple networked computers whenever desirable.


2. GAME DESIGN AND SIMULATION STAGES

The game design and simulation environment is
intended to be used throughout the entire game
development process: general game design, game
world and character modeling, game functionality
implementation, game evaluation and gathering experimental information, and ®nal game implementation. Specialists with di€erent pro®les are
involved in each of the above game development

stages and GDS should provide appropriate, distinct services to all of them.
2.1. General game design
When new games are conceived, designers need to evaluate their ideas. Problems of novelty, originality, public acceptance, feasibility, implementability, performance, etc. require careful consideration. The creative process could greatly benefit if game design ideas are shared and widely discussed. Unfortunately, new game concepts are very difficult to communicate. Writing, talking, using drawings and even animation help but do not completely overcome the communication gaps. The most that we could hope to convey by such traditional means would be a bleak impression of the newly conceived game. Moreover, it is quite impossible to experience the excitement that the game would bring without adequate game simulation facilities. Game designers would like to be able to see and feel their ideas working at a very early stage, before the actual game implementation has even begun. At the game design stage, the way the game feels is much more important than the specific details in the underlying graphics or game character behaviors. The latter two could be simulated in a quite general way while still conveying the genuine feeling of the game.
2.2. Game world and character modeling
In contrast to game designers, graphics designers are much more concerned with the models of the game world and the game characters than with the way the game feels when played. Nowadays game world models are quite large, and adequate facilities for model partitioning and concurrent modeling and design are essential. Often, many graphics designers contribute to the same game, designing different parts of the game model, sometimes taking over and continuing each other's work. When graphics designers are working on partitions of a given game world, they would like to be able to see how their work would integrate with that of their colleagues. This means that the game world should be properly structured for easy integration and interchange of partitions. Such a structuring would also facilitate reusability of models and partitions. In fact, the structuring of the game world is more a game design decision than a graphics design one. Therefore, an appropriate game world structuring scheme should be adopted at the game design stage and then refined during the graphics design stage. This will enable graphics designers to plug in and see refined partitions in the context of the general game world testbed model whenever desired.
2.3. Game functionality implementation
Another important computer game component is the story and all functionality associated with it. This includes game characters and their behaviors, game rules and objectives, etc. There are generic types of functionality, such as Newtonian objects, point awarding facilities, character controls, etc., which could be used in many different game simulations. Other, specific functionality might be simulated through some of the available generic types, while more peculiar functionality might need dedicated implementation. We would like to keep the character behaviors separate from their graphics representations whenever possible. That would give us more freedom to manipulate character appearances and their behaviors independently, and eventually to build up new characters on the fly. Game developers could implement specific game functionality in the context of the generic game design model, which could be upgraded with refined game partitions and character models as they become available. Game functionality and game character behaviors could be expressed in terms of actions, simple responses to stimuli and more complex behaviors. While dealing with such functionality, general facilities such as multichannel record and play, interpolation and extrapolation, etc. should also be provided.
2.4. Game evaluation and gathering experimental information
Adequate game simulation is important for gathering experimental information and successful
evaluation of game ideas. The previous stages as
discussed would help set up an appropriate environment for this simulation stage. We are dealing here
with an environment approximating and simulating
the real game appearance, performance etc. The
main objective is to let third parties, including potential customers, experience the new game ideas in
conditions close to a real game play, so that we
could gather extensive feedback information.
During the simulation, facilities to simulate different conditions (by changing parts of the game world, replacing game characters and modifying their behaviors, revoking and introducing new game rules, etc.) will be necessary, along with extensive logging and analysis options.
2.5. Platform specific game implementation
Some features of the target game world, characters and functionality have to be implemented in
the process of building the game evaluation model.




We would like to secure a high level of re-use of these simulation components in the later platform specific game implementation.
First comes the re-use of the world model and character geometry data. To provide efficient use of the resources of the target game hardware, geometry data would need to be converted to appropriate platform related formats. Stand-alone tools should be used for such conversions.
Second comes the re-use of character behaviors and game functionality. Most of the simulated behaviors and other functionality could be implemented as scripts, with the most complex ones eventually directly coded. Since scripts are generally platform independent, they might be interpreted on the target game platform too. Directly coded functionality would certainly need some platform specific rewriting and adjustments.
In any case, while direct re-use of code might be
limited, modeling and behavior data should be
freely accessible and reusable.



3. SIMULATION COMPUTER PLATFORM AND ITS
IMPLICATIONS

We would like to achieve real-time performance of our interactive 3D game simulations. It should be comparable to what we would get from the real game, say, running on a dedicated game console with optimized software and a well-tuned geometry database. Yet we would like to postpone developing target platform specific software and model tuning until after the game simulation and evaluation is complete. To achieve this we would most probably need to bring in more computing and visualization power than that of the target implementation platform. Recent models of the Sony PlayStation, Sega and Nintendo64 are delivering a level of performance at which no PC-based real-time simulation seems to be feasible. Therefore, considering these game consoles as potential targets, we elected to run our simulations on sufficiently powerful workstations with adequate graphics capabilities.
While high grade workstations with advanced graphics capabilities are nowadays available from many different vendors, we have chosen the Silicon Graphics, Inc. line of products, mainly for reasons of previous in-house experience. The base of SGI machines currently installed at our sites is quite extensive and immediately available. There are also several classrooms equipped with networked SGI workstations that could be used for multiplayer game simulations and evaluations. Apart from this, the SGI line of products offers a range of specific features that are highly desirable for our GDS.
SGI offers low range systems like the Indy and O2, going through mid-range ones like the Indigo Impact and Octane, and extending to the high range Onyx2 and Origin product line. Recent models are based on Unified Memory Architecture (UMA) and Scalable Shared-memory Multi-Processing (S2MP) architecture, thus overcoming performance bottlenecks and opening new dimensions for scalability. With the combined strength of the Cray and Silicon Graphics technologies, the new line of products demonstrates unsurpassed performance. SGI systems are executable compatible, thus ensuring easy software migration. The SGI IRIX operating system comes with many additional features as compared to other UNIX distributions. Networked SGI workstations support multicast as a standard feature and provide a very good environment for distributed VR applications. Convenient graphically-oriented tools and APIs are available, including IRIS Performer, which is a vehicle to extract maximum performance from the SGI graphics hardware at all levels.
We are also considering bringing systems from other vendors within the scope of our simulations. Different approaches for platform independent networked simulation and visualization are addressed later in the text.

4. VIRTUAL ENVIRONMENTS AND GAME SIMULATIONS

The notion of a virtual environment (VE) is often used to denote the specific software architecture and the underlying data models used in virtual reality applications [1, 2]. In computer game simulations, VE refers to the game world and character models, and the game simulation software architecture. Networked game simulations incorporate additional communication model components of the VE.
A presentation and discussion of different VE models and their components follow. In this discussion we will pay special attention to the communication components, since they often play a crucial role in shaping the entire simulation environment.
4.1. VE data models
Appropriate structuring of the game VE has
always played an important role in the game design

and development process. One classic way of imposing a structure over a particular game is to divide it
into stages. This provides a means to organize the
game world, game characters, their behaviors and
other functionality in separate groups associated
with each stage and to treat them more or less independently. Unfortunately, if no spatial relations
exist between the game stages, the feeling of game
continuity is easily lost in the game stage transitions.
Game stages have been successfully used to
enhance the game performance on low-grade computer platforms which are not able to process large
graphical databases in real-time. To achieve this,
data is loaded and interpreted on a stage-by-stage
basis, while text, music and simple animation are



being presented during the stage changes to hide
the loading and initialization delays.
For more powerful game hardware, the general tendency seems to be for a rather complex graphical environment to be selected and then loaded before the game begins. Then, even if game stages are present, the game would still be played in that preloaded, and thus predetermined, environment. Nevertheless, depending on the player's actions, some games may load different graphical models during the play and switch between, say, an exterior/interior world, an underwater world, etc. In most cases that could be done in the background while the game play still continues. Spatial relationships also play a more important role on advanced game platforms as compared to low-grade ones.

Another level of complexity arises in networked multiplayer games, where players share a simulated virtual game world by using many networked computers. Most game implementations assume that each player has a copy of the complete game database on his own computer. In the course of the game, state changes of different entities are communicated between the players' computers in order to keep the game VE synchronized.
As envisaged, our game simulations might be performed either on a single computer system or on several networked computer systems. We would like to adopt a VE structuring scheme which would be equally applicable in both cases. In a broader context, different structuring approaches and data models pertinent to VE have been explored recently. The most prevalent models could be classified as replicated homogeneous, shared centralized, and shared distributed peer-to-peer or client–server [2].
The large scale, mainly military related simulations have been adhering to the replicated homogeneous model [2, 3]. Providing a local copy of a large homogeneous virtual world database to all the participants before the simulation starts saves a lot of network traffic later, since during the simulation only changes in object states would need to be communicated [1]. Nevertheless, as the complexity and size of virtual worlds grow, it becomes next to impossible to maintain local copies at every participating host. Attempts to decrease the traffic through grouping of entities and mobile agents have been reported in [4–6]. Further complications arise with the increase of discrepancies between the local world representations over the simulation time, incurred by loss of messages. This happens because replicated homogeneous models are usually based on best-effort, non-reliable message delivery protocols. While ensuring better scalability in comparison to reliable network protocols [7–9], there is a clear tradeoff with regard to replicated database synchronization.
Shared centralized models rely on a specialized server computer which is solely responsible for maintaining the entire world state and communicating it to all of the participants as needed. The model is simple and easy to implement and maintain, but has some important limitations. First, it does not scale well, since all the traffic goes through a single server node [1, 10]. A second problem is the additional delay that rerouting through a server would incur as compared to peer-to-peer multicast and broadcast [3].
Despite their limitations, shared centralized models have been widely used in the gaming community. Apart from the many MUDs, MOOs, enhanced chatrooms, etc., many highly specialized game servers are currently in operation. The shared centralized model evolves to new realms as more computing and graphics power becomes widely accessible with the recent Pentium-based PC models. For example, Ultima Online, although making use of an underlying centralized data model and a specialized game server, adopted distribution of the initial generic world and character game database through a retail channel on a CD-ROM.
Recently, more and more attention has been paid to distributed models and new mixed approaches. The central problem pertinent to the distributed models is ensuring database consistency and synchronization. Attempts have been made to address the problem by using reliable message delivery protocols [11]. Unfortunately, maintaining reliability and consistency incurred significant communication costs, and the DIVE system [11] could support only a very limited number of simultaneous users. The Virtual Society project [9, 10, 12] adopts some ideas from DIVE and attacks the scaling problems by reducing the level of data sharing, so that consistency and synchronization protocols would not need to work across the entire system. In the VS project, the notion of an aura [13] is used, which represents a dynamic portion of the virtual space, say a region of interest. Objects can register their auras with an aura manager in order to be notified when other objects enter their region of interest. The aura manager tracks database partitions, controls spatial interactions and maintains different levels of consistency. Its functions are based on the caching of static and some dynamic data, combined with non-reliable, locally ordered and globally ordered reliable message delivery mechanisms. A further drop in network bandwidth is achieved through generalized dead-reckoning techniques, called movement behavior [9].
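As a rough illustration of the dead-reckoning idea behind such techniques (not the movement behavior algorithm of [9] itself), the following C++ sketch shows the common scheme: every host extrapolates remote entities from their last reported state, and the owning host sends a new update only when its true state drifts too far from that prediction. All names and the error threshold are illustrative.

```cpp
#include <cmath>

// Last reported state of an entity, as carried in a state-update message.
struct EntityState {
    double pos[3];
    double vel[3];
    double timestamp;   // simulation time of the report
};

// First-order dead reckoning: every host extrapolates a remote entity from
// its last reported position and velocity.
void extrapolate(const EntityState& s, double now, double out[3]) {
    double dt = now - s.timestamp;
    for (int i = 0; i < 3; ++i)
        out[i] = s.pos[i] + s.vel[i] * dt;
}

// On the owning host: a new update is sent only when the true position drifts
// from what the other hosts are predicting by more than a chosen threshold,
// which is what saves network bandwidth.
bool needsUpdate(const EntityState& lastSent, const double truePos[3],
                 double now, double threshold) {
    double predicted[3];
    extrapolate(lastSent, now, predicted);
    double d2 = 0.0;
    for (int i = 0; i < 3; ++i) {
        double d = truePos[i] - predicted[i];
        d2 += d * d;
    }
    return std::sqrt(d2) > threshold;
}
```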
At the Swiss Federal Institute of Technology and the University of Geneva, VLNET [14, 15] has been developed. The VR world is distributed over several VLNET servers which maintain a constant link between themselves. Each server acts as a shared centralized server for all the clients directly connected to it. But clients are permitted to migrate from server to server, which means that users can freely move through the entire VR world. A specialized motor function is suggested which, when activated, would carry out the transition from the current server to the new one. A similar idea for transition between different worlds is exploited in MASSIVE (Model Architecture and System for Spatial Interaction in Virtual Environments) [16]. MASSIVE also uses auras, much like the Virtual Society project [9, 10, 12].
At the Stanford Distributed Systems Group, the PARADISE project is under development. Within its scope, work is done on reliable logging and a multicast channel directory service [8], advanced entity aggregation dead reckoning [5, 17] and object-oriented RPC [18]. Another large scale project, GreenSpace [19], is engaged in developing a new global communications and information environment for the 21st century. The prototype GSnet supports networked communications and a shared database among distributed applications. The GreenSpace world consists of internal and external parts, the latter managed by external video, audio and reliable multicast protocols. The internal parts are managed by a special application called "Mr.N" which is responsible for the networked database synchronization. The GreenSpace world database is based on groups which are represented as collections of chunks.
All the approaches that we have discussed so far establish some database model which is subsequently used for producing views into the virtual world. An approach which first assumes a view and then generates a model only sufficient for that view is described in [20, 21]. This approach uses Entity State Estimator and Network Link processes and potentially reduces the network load while providing different levels of resolution.
4.2. VE software architecture
Bringing more computing power into the simulation by assigning part of the computations to some other hosts on the network is obviously less costly than upgrading to a single, more powerful computer. To potentially facilitate such load distribution, we design our GDS as a set of separate, concurrently executable tasks. The intertask communications could go through network channels, thus allowing the tasks to be spread over different workstations connected to a high-speed LAN.
Another level of complexity lies in using a heterogeneous, multiplatform computing environment. A widely exploited approach enabling multiplatform hosts to participate in the same simulation is to run dedicated, platform dependent simulation software on each of them. For example, in DIS based simulations [1], each host runs its own variant of the simulation software but with common algorithms. Other promising approaches use platform independent scripting languages such as Telescript, Tcl, Java, JavaScript, etc. Mobile Agents and Smart Networks have been suggested as a method to enhance DIS simulators [4]. The VR-protocol [22] from MAK Technologies provides for a platform independent program execution environment and dynamic linking.
A new generation of software technology is emerging with the High Level Architecture (HLA). There is a hope that HLA will help products from different vendors evolve as fully interoperable over the network. In Gustavson [23], Microsoft's multiplayer gaming solution for Windows 95/NT, DirectPlay, is evaluated in the context of HLA. Similarly, features of HLA that support the VR-Protocol, as well as complementary capabilities that the VR-Protocol could provide to HLA, are discussed in Taylor [22].
5. THE GDS AND ITS VE MODEL

5.1. The GDS data model
The target environment for GDS is a high speed LAN, where we could expect predictable network performance. This allows us to focus on game simulation problems, rather than to deal with the problems of reliable data distribution and synchronization over large WANs. We assume that all the initial geometry data representing the game world and the game characters resides somewhere on the LAN. It can be provided in a number of files on different network nodes which are accessed by the simulation tasks whenever necessary. Standard facilities like NFS, HTTP, etc. could be used to ensure such access over the LAN.
An internal representation of the game world and the game characters is built by each simulation task from the geometric data available on the LAN. Such an internal representation is later used for visualization of the game world with respect to the different aspects of the current simulation. Many simulation tasks might run in parallel, say, each servicing different participants in the game or providing different views into the game world, etc. Obviously, the internal representations of the game world for all these tasks need not be the same. In fact, as in Michael and Brock [20, 21], if a view into the VR world is assumed first, then we only need to build an internal world representation satisfactory for that particular view. In GDS we adopt a dynamic internal game world data model which supports similar functionality.
The support of multiple internal game world representations is also in line with the potential difference in the computing and graphics power of the workstations hosting each simulation. This way, we could support internal world models at different resolutions, adjustable to the performance level of the hosting workstation. Resolution here refers to the levels of detail or granularity at which the internal representation should be maintained. Resolution parameters could control the way the internal model is built, either when geometry data is brought in or when it is reassembled later.

Fig. 1. Internal organization of the game client application.
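As an illustration of how such resolution parameters might work (the actual GDS data structures are not given in the paper), the following C++ sketch selects, for one part of the world, the most detailed of several alternative geometry files that the hosting workstation can afford; all type and field names are ours.

```cpp
#include <string>
#include <vector>

// Hypothetical per-task build parameters: the resolution cap reflects the
// computing and graphics power of the hosting workstation, and a geometry
// source is simply a path or URL reachable over the LAN (NFS, HTTP, ...).
struct GeometrySource {
    std::string location;   // e.g. "/net/models/city_hi.iv"
    int resolution;         // level of detail/granularity this file provides
};

struct BuildParams {
    int maxResolution;      // highest resolution this host should attempt
};

// Pick, for one part of the world, the most detailed file the host can afford.
const GeometrySource* pickSource(const std::vector<GeometrySource>& candidates,
                                 const BuildParams& params) {
    const GeometrySource* best = 0;
    for (size_t i = 0; i < candidates.size(); ++i) {
        if (candidates[i].resolution > params.maxResolution) continue;
        if (!best || candidates[i].resolution > best->resolution)
            best = &candidates[i];
    }
    return best;  // 0 if nothing fits; the caller could fall back to the lowest
}
```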
5.2. The GDS software components
As mentioned before, GDS consists of a number
of tasks executed in parallel and communicating
through the network. While all of the tasks could
be executed on a single, powerful enough computer
system, it would be more practical to distribute
them over several workstations. The types of tasks
currently included in the GDS are game clients,

game servers, game interfaces, control interfaces and
sound servers. Each such task is a separate application which is replicated and executed on all or
some of the networked hosts participating in the
game simulation.
The game client application (Fig. 1) is responsible
for maintaining an internal game world model and
visualizing it with respect to specified views. Both
independent and player-related views are supported.
The client application also maintains local and
remote entities.
The game server application (Fig. 2) is responsible for tracking the player-related views and guiding the clients to modify their internal game worlds appropriately. It also functions as a multichannel recorder/player which can record or play back prerecorded sequences on demand.
The game user interface application (Fig. 3) is responsible for collecting and processing the players' motion data, which is then put on the network. Game clients and game servers use the players' motion data provided by this interface task. Alternative input streams from joysticks and game console controllers are also supported.
The sound server application (Fig. 4) plays prerecorded sound files on demand. Its functionality is described in more detail in Section 6.3.
The simulation control interface is used for controlling the entire simulation. It is implemented as a menu script with several underlying executables and other scripts.
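The paper does not specify the wire format used between these tasks; as a hedged sketch only, the following C++ fragment shows one way such concurrently executing tasks could exchange small typed messages over an IP multicast group on the LAN. The message types, the struct layout and the send helper are illustrative (a real implementation would also deal with byte order and packing).

```cpp
#include <arpa/inet.h>
#include <cstring>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Hypothetical message types exchanged between the GDS tasks.
enum MsgType { MSG_PLAYER_MOTION = 1, MSG_VIEW_UPDATE, MSG_SECTION_LIST,
               MSG_SOUND_COMMAND, MSG_CONTROL };

struct GdsMessage {
    int   type;         // one of MsgType
    int   sender;       // identifier of the originating task
    float payload[16];  // motion data, view parameters, etc.
};

// Send one message to every task subscribed to a multicast group on the LAN.
int sendToGroup(const GdsMessage& msg, const char* group, int port) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return -1;
    sockaddr_in addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, group, &addr.sin_addr);
    int rc = sendto(sock, &msg, sizeof(msg), 0,
                    reinterpret_cast<const sockaddr*>(&addr), sizeof(addr));
    close(sock);
    return rc;
}
```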

6. PILOT IMPLEMENTATION OF THE GDS


6.1. Game world prototyping
6.1.1. Components. At the general game design
stage, we are interested in an approximation of the
game world as envisaged by its creator. This should
be done at the lowest acceptable resolution so that
time and effort would be saved. As our vision of



Fig. 2. Internal organization of the game server applications.

the game evolves in the course of the game development and simulation, we will gradually move to higher resolution models. We would like to convey the right feeling of space and distance through the simplest possible geometry with appropriate texture. For example, at the lowest level of resolution the game world could be represented as a texture mapped extruded shape with the view point placed close to its center line. Appropriate scaling and texturing could make it look like either a narrow tunnel or a wide open space. For example, in Fig. 10, both the far fog and the sky are represented by textures mapped on the surrounding extruded shape. Acceptable appearance could be maintained as long as the view point is sufficiently far from the texture mapped walls and remains relatively static. The difference between entirely texture mapped and sculptured walls can be seen by comparing the images in Fig. 8 and Fig. 9. The simplicity of the tunnel model in Fig. 8 becomes more apparent when seen in stereo or from a viewpoint moving in the vicinity of the walls. Better appearance is achieved by introducing models at higher levels of resolution. One possible way is by re-texturing low-resolution models and adding more geometry. In terms of the simple textured extruded shape, this means that some of the objects initially painted on the walls would be established as true geometric entities inside it. The resulting higher resolution models approximate the envisaged game world more closely. There are standard ways of dealing with resolution adjustments, for example by using LOD [24]. In our approach, we chose to handle this in a different way in order to support the dynamic assembly of internal world models, rather than just different views into a preloaded database. In the final game we may seek true realism, and thus we may need sophisticated geometry and LOD. For the simulation itself though, less should be sufficient, since we only need to build a convincing impression of the simulated game.
The dynamic world models are based on atomic
objects organized in sections (Fig. 5). All simple
artefacts which expose no internal structure related
to the game simulation should be considered atomic
objects. Sections are structural objects which may

Fig. 3. Game user interface for the players.




Fig. 4. Internal organization of the sound server application.

have geometry and other attributes and can be used
as containers for any number of atomic objects or
sections. For example, rooms in a building could be
represented as separate sections while furniture in
the rooms could be considered as atomic objects.
Similarly, simple game worlds could be constructed
from sections representing rooms and connecting
corridors.
By structuring the game world into sections we effectively introduce levels of hierarchy which can be kept separate from the actual geometry. Then, a preliminary culling could be done to identify the sections relevant to a particular view and maintain a database associated with it. It is important to point out that the structuring into sections does not have to be spatially related. Of course, in most practical cases spatial organization of sections may be a good choice. But many games nowadays, although being played in a 3D world, could be represented by one or two dimensional sequences of sections. We discuss our experiments with such game worlds later in this paper. Nevertheless, the world descriptions that we use are more general and allow us to associate lists of sections with n-dimensional coordinate values. This way, in the one-dimensional case, one coordinate is used for positioning in the game world while others could be treated as describing section properties, etc. Sections may be adjacent, so that players could physically move between them. But sections may also overlap or contain each other, for example when representing different levels of resolution.
The internal game world representation is built
by selecting appropriate elements from the world
description. This process is controlled by a metric
in n-dimensional space and may be considered as a



Fig. 5. Game world descriptions, views and underlying
data structures.

combination of priority ordering and culling which
takes place in the gameserver task. This gameserver
(Fig. 2) tracks a view, refers to the game world
description and produces a list of sections (Fig. 5).
Then, an internal game world representation is built
on the basis of this view by the client application
(Fig. 1). Many views can be supported simultaneously.
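A minimal C++ sketch of this view-driven selection is given below; it assumes a world description that attaches lists of section names to n-dimensional coordinates and uses a simple Euclidean metric with a radius of interest, which is only one possible choice. The names are illustrative rather than taken from the GDS code.

```cpp
#include <algorithm>
#include <cmath>
#include <string>
#include <utility>
#include <vector>

// One entry of a hypothetical world description: a list of section names
// attached to an n-dimensional coordinate value.
struct SectionEntry {
    std::vector<double> coord;            // n-dimensional position
    std::vector<std::string> sections;    // section/geometry references
};

// A view reduced to its own n-dimensional coordinate and a radius of interest.
struct View {
    std::vector<double> coord;
    double radius;
};

static double metric(const std::vector<double>& a, const std::vector<double>& b) {
    double d2 = 0.0;
    for (size_t i = 0; i < a.size() && i < b.size(); ++i)
        d2 += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(d2);
}

// Combination of culling and priority ordering: keep the entries within the
// radius of interest and return their sections, closest entries first.
std::vector<std::string> selectSections(const std::vector<SectionEntry>& world,
                                        const View& view) {
    std::vector<std::pair<double, const SectionEntry*> > hits;
    for (size_t i = 0; i < world.size(); ++i) {
        double d = metric(world[i].coord, view.coord);
        if (d <= view.radius) hits.push_back(std::make_pair(d, &world[i]));
    }
    std::sort(hits.begin(), hits.end(),
              [](const std::pair<double, const SectionEntry*>& a,
                 const std::pair<double, const SectionEntry*>& b) {
                  return a.first < b.first;
              });
    std::vector<std::string> result;
    for (size_t i = 0; i < hits.size(); ++i)
        result.insert(result.end(), hits[i].second->sections.begin(),
                      hits[i].second->sections.end());
    return result;
}
```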
In our model, the global game world description is kept separate from the actual geometry. Thus the game world could be reshaped by changing the world description files, which have to be kept synchronized at all the simulation hosts. This also makes rapid prototyping and reuse of model data easier, as different world descriptions could refer to the same geometry entities.
6.1.2. Importing. Game worlds are usually constructed using geometric modelers. This is an interactive process in which designers create and modify complex geometric shapes. Unfortunately, the resulting geometry can hardly be modified and restructured outside of the modeling software used to create it. Alternatively, a procedural approach could be used for creating geometry with minimal human assistance. The problem is that arbitrarily shaped objects are difficult to parametrize using such a procedural approach.
In our approach to the construction of game worlds we bring together the interactive and procedural ways of construction through interactive design of components and procedural assembly of these components into sections. In order to facilitate approximate representation of game worlds, we are investigating intermediate levels of complexity. It appears to be advantageous to provide procedural ways of generating such intermediate complexity geometry and still use advanced modelers when needed. Procedural modeling could quickly provide a crude substitute for the simulated game world which could be used as an initial testbed. Then, more refined geometry prepared by human designers is gradually integrated. Another reason for adopting such an approach is the fact that surprisingly few of the current VR modelers offer automatic LOD generation [24]. And when it is done, general polygon reduction algorithms are usually applied off-line. This means that simpler models are generated on the basis of reducing the geometrical complexity of detailed models. In contrast, we incrementally build more and more detailed world models at run time.

To achieve this, we need full access to the underlying data structures. Since we use SGI Performer for our simulations, we need to access its internal geometry data representations. As in most modern visualization systems, the graphical data in Performer is organized as a tree structure containing graphics state information and geometry information. Specialized node types for grouping and geometry, for transformations, level of detail, animation and morphing, etc. are supported.
The task of importing VR world data into Performer consists of building an appropriate tree structure from what is provided in a given graphics file. This data conversion process is called importing and the software responsible for it is called an importer. Currently, Performer comes with more than 30 standard importers, most of them provided by geometric modeler suppliers. The difficulty which arises is in the combined use of such models and modelers. Although the importers effectively convert different graphics files into a common internal Performer data structure, they do not provide means to integrate and blend such structures. One solution is to do the integration by writing specific application code and including it into the target Performer application. This is obviously not suitable for our simulations, as we would like to minimize the need for customizing the application code. Therefore, we decided to develop some standard integration functions and make them available to all Performer applications through some standard mechanism. We opted to implement these functions as shared objects (DSOs) so that they could be accessed by the applications only when needed at run time, with no overhead if not accessed. Basic or standard integration functions are difficult to identify and in fact would vary depending on the application. That is why we decided to define our basic functions not on the basis of the application needs, but on the basis of the underlying Performer data structures. As all the graphics data is finally converted into a tree built from Performer nodes, we developed a generic way of building such trees on a node-by-node basis. Building such internal structures in fact becomes part of the support for our dynamic internal data model. Yet the data importing is not part of the game simulation application, because it is provided as shared data objects and thus could be dynamically changed during the simulation. Different DSO objects could build internal game world representations at different resolutions while still using the same data files.
The basic importers that we have developed correspond roughly to the types of nodes supported by SGI Performer and exhibit common general functionality. First, when a file name with model data is supplied, it is analyzed and, depending on its extension, the appropriate DSO is loaded and initialized by the operating system. Then, this DSO is executed with the content of the file being supplied as input data. The new importers parse this input for valid tokens and interpret them, while treating everything else as comments or file references in a uniform way. This is a recursive parsing which produces data structures of arbitrary complexity and depth while keeping the parsing and interpretation of individual files quite simple. The basic importers provide for the building of internal game world representations, possibly at different resolutions, from predesigned objects. But to generate actual geometry, other types of nodes are needed. In Performer, this is done by geode nodes, and they have to be used when geometry data files are loaded. We would also like to have specialized geometry nodes to represent different types of geometry.
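The extension-driven loading of importer DSOs could, for instance, be organized around the standard dlopen/dlsym mechanism; the sketch below is a hypothetical illustration (the DSO naming scheme, the exported symbol name and the GameNode type are ours, not those of the actual implementation).

```cpp
#include <dlfcn.h>
#include <cstdio>
#include <string>

// Stand-in for the internal (Performer) node type built by the importers.
struct GameNode;

// Hypothetical entry point that every importer DSO is assumed to export.
typedef GameNode* (*ImportFunc)(const char* filename, GameNode* parent);

// Choose an importer DSO from the file extension and run it; e.g. the file
// "3_1_0.tunnel" would be handed to "importer_tunnel.so".
GameNode* importFile(const std::string& filename, GameNode* parent) {
    std::string::size_type dot = filename.rfind('.');
    if (dot == std::string::npos) return 0;
    std::string dso = "importer_" + filename.substr(dot + 1) + ".so";

    void* handle = dlopen(dso.c_str(), RTLD_NOW);
    if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return 0; }

    ImportFunc importer = (ImportFunc)dlsym(handle, "import_file");
    if (!importer) { dlclose(handle); return 0; }

    // The importer parses the file recursively; file references found inside
    // can come back through importFile() with a different extension and DSO.
    return importer(filename.c_str(), parent);
}
```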
We consider conformance with the de facto and emerging standards as one of the priorities of our application. The VRML97 proposal for an ISO standard deserves a special mention in this context. VRML 1.0 emerged from the SGI OpenInventor ASCII format. In VRML 2.0, or VRML97, important new features have been included in order to better support dynamic geometry and interactions. With MovingWorlds this is carried further toward multi-user networked environments. The nodes that we have discussed so far could be directly handled in a VRML 2.0 compliant application. We also experimented with building structures similar to those supported by the VRML 2.0 extrusion node, which has no analog in Performer.
For example, in our implementation, filenames such as 3_1_0.tunnel are handled by a dedicated DSO importer and tunnel-like extruded geometry is generated as a result. The ability to supply numerical parameters directly in the file name is introduced as a convenience tool for easy generation of regularly shaped tunnels for test purposes. A true file would have to be created and its content provided only if parameters other than those in the file name are required. Similarly to the VRML 2.0 extrusion node, the tunnel can be defined by the following parameters:

• a 2D crossSection piecewise linear curve, described as a series of connected vertices;
• a 3D spine piecewise linear curve, also a series of connected vertices;
• a list of 2D scale parameters;
• a list of 3D orientation parameters.
This definition, however, is rather restrictive. In particular, while the intermediate cross-section orientations and scales could be controlled along the spine, it is not possible to adjust the shape in other ways. To produce smooth-looking geometry, additional parameters and more sophisticated calculations than those described in the VRML 2.0 specification are necessary. We support a number of such additional global and per-vertex parameters; for example, the integer values of the shape per-vertex parameter control the wall normals at a given vertex in the u and v directions. Material handling and texture mapping is also enhanced. In addition to the automatic generation of texture coordinates as for the VRML 2.0 extrusions, other texture mappings can be directly specified. That makes it easier to produce appearances as in Figs 8, 14 and 15, etc., where the walls, the floor and the ceiling are mapped with different textures.
We also provide a set of fitting algorithms as alternatives to the direct supply of cross-section orientations and scales. With no specific algorithm, however, the selected extrusions are generated much in the same way as prescribed in the VRML 2.0 specification. Other VRML 2.0 compliant node types are also under implementation. In particular, more information about the sound node type is given in Section 6.3.
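To make the extrusion-style generation more concrete, the following C++ sketch sweeps a 2D cross-section along a 3D spine with per-vertex scaling, roughly in the spirit of the VRML 2.0 extrusion node. It deliberately omits the additional per-vertex normal control, fitting algorithms and texture parameters described above, and it assumes the spine is never parallel to the fixed up vector used for the local frames.

```cpp
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };
struct Vec3 { double x, y, z; };

static Vec3 normalize(Vec3 v) {
    double l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    Vec3 r = { v.x / l, v.y / l, v.z / l };
    return r;
}
static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

// Sweep a 2D cross-section along a 3D spine: at each spine vertex a local
// frame is built from the spine direction and a fixed world-up vector, and
// the scaled cross-section is placed in that frame. Consecutive rings of
// vertices would then be stitched into quads to form the tunnel walls.
std::vector<Vec3> sweepTunnel(const std::vector<Vec2>& crossSection,
                              const std::vector<Vec3>& spine,
                              const std::vector<Vec2>& scale) {
    std::vector<Vec3> rings;
    for (size_t i = 0; i < spine.size(); ++i) {
        size_t nxt = (i + 1 < spine.size()) ? i + 1 : i;
        size_t prv = (i > 0) ? i - 1 : i;
        Vec3 dir = { spine[nxt].x - spine[prv].x,
                     spine[nxt].y - spine[prv].y,
                     spine[nxt].z - spine[prv].z };
        Vec3 tangent = normalize(dir);
        Vec3 up = { 0.0, 1.0, 0.0 };              // assumed never parallel to dir
        Vec3 side = normalize(cross(up, tangent));
        Vec3 lift = cross(tangent, side);
        Vec2 s; s.x = 1.0; s.y = 1.0;
        if (!scale.empty()) s = scale[i % scale.size()];
        for (size_t j = 0; j < crossSection.size(); ++j) {
            double u = crossSection[j].x * s.x;
            double v = crossSection[j].y * s.y;
            Vec3 p = { spine[i].x + side.x*u + lift.x*v,
                       spine[i].y + side.y*u + lift.y*v,
                       spine[i].z + side.z*u + lift.z*v };
            rings.push_back(p);
        }
    }
    return rings;
}
```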
6.2. Active objects
Object behaviors in our implementation are supported by scripts of actions that must be performed over a given time interval. Although we support features functionally similar to the event-processing mechanisms described in the VRML 2.0 specification, full conformance is beyond the scope of our current implementation.
We use the term active object to refer to objects which could be controlled by scripts. The lowest level of control is by direct manipulation of object attributes such as position, orientation, etc. This is done by assigning new values to the appropriate nodes in the object's Performer tree. At the next level, time dependent parameters such as velocities, accelerations, forces, etc. could be specified which determine the object behavior. These parameters depend on the underlying simulation model. Object attribute and simulation parameter changes, when done in a script, are not an instantaneous, but rather a continuous, process. Scripts are executed by the gameserver application (Fig. 2) and can contain branches and repetitions. Scripts could be envisaged as descriptions of high level object behaviors.
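As a hedged sketch of how such continuous, script-driven changes might be stepped by the gameserver each frame (the actual script format is not given in the paper), consider the following C++ fragment; the attribute set, the linear interpolation and all names are illustrative.

```cpp
#include <vector>

// One script action: drive a single object attribute toward a target value
// over a given time interval. Branching and repetition are left out here.
struct ScriptAction {
    enum Attr { POS_X, POS_Y, POS_Z, HEADING } attr;
    double target;    // value to reach
    double duration;  // seconds over which the change is spread
    double elapsed;   // time already spent on this action
};

struct ActiveObject {
    double attrs[4];                  // indexed by ScriptAction::Attr
    std::vector<ScriptAction> script; // actions executed in order
    size_t current;                   // index of the action in progress
};

// Advance the object's script by dt seconds, interpolating the attribute so
// that the change is continuous rather than instantaneous.
void stepScript(ActiveObject& obj, double dt) {
    if (obj.current >= obj.script.size()) return;
    ScriptAction& a = obj.script[obj.current];
    double remaining = a.duration - a.elapsed;
    if (remaining > 0.0) {
        double step = (dt < remaining) ? dt : remaining;
        obj.attrs[a.attr] += (a.target - obj.attrs[a.attr]) * (step / remaining);
    }
    a.elapsed += dt;
    if (a.elapsed >= a.duration) ++obj.current;   // move on to the next action
}
```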



Ecient description of complex object behaviors
might be greatly facilitated if we could identify
some simple actions and determine general patterns
for building more complex behaviors from them.
Basically behaviors of active objects can be divided
into two independent categories, namely initial
behaviors and action behaviors. Initial behavior is an
object behavior which takes place at the section
initialization stage. More precisely, when a section
is added to the current view or, say, is about to be
entered by the player, the initial behavior scripts for
all objects in that section will be executed. The
object action behavior in contrast is triggered on a

proximity basis or by sending messages. When sections are dropped from the current view, we might
need to execute some maintenance scripts, but at
this time we do not treat them as describing object
behaviors.
Some examples of initial object behaviors that we believe to be suitable for general game simulations follow. Object behaviors in the gaming world might be independent or linked to the position of the player. Maybe the simplest case would be a fixed position object that is placed somewhere within the currently activated section. The object would not move, although it may spin, change its shape, color, etc. Depending on the object, the player may have to hit it and get a reward or may have to avoid it, otherwise risking a penalty. A moving object might be initially placed close to the player and then start moving away from him. Such a behavior could prompt the player to chase it in order to get points, etc. (Fig. 14). Alternatively, an object could be initially placed somewhere far from the player and then start approaching (Fig. 13). Depending on the object, this would prompt the player to try to escape a direct hit, to wait for such a hit or to hurry to hit the object as soon as possible. A moving object could also follow a predetermined path which does not fall into the previous two categories. When combined with carefully chosen timing and other parameters, initial object behaviors of the above described types could significantly enhance the gaming experience. For example, active objects may be set up to disappear after a given time interval elapses. This way, delayed player reactions would result in losing chances to collect points, etc. Objects incurring a penalty when being hit may effectively act as obstacles in narrow passages, thus slowing the player down and making him wait until the objects move or disappear (Fig. 15).
Similarly to the initial behaviors, objects could also be assigned action behaviors, which take place when an object is activated. In contrast to the single initial behavior, each object could have more than one action behavior associated with it. Here we will discuss some examples of possible action behaviors. Quite often, small prizes like coins, flowers, etc. are scattered around in the game world and the player is awarded points for collecting them. A typical behavior of such objects when picked up or just approached is to disappear. Such a behavior could be implemented by a regular object action script containing an instant object position change, e.g. a jump to a distant place, unreachable by the player. Alternatively, the new object position could be chosen to be in the vicinity of the player. If it lies on the player's projected movement path, he will have a chance to get more and more points. Depending on the specific positioning algorithm, the object might prompt not only for forward, but also for backward, movements of the player, thus effectively slowing down the pace of the game (Fig. 16). Generic object behaviors could also be based on simple Newtonian object models representing kicking a ball (Fig. 13) or escaping from cannon-fire. Object behaviors could be further diversified by introducing pseudo-random parameters that control the ultimate displacements, directions, velocities, etc. This will ensure that the player never knows what is to happen the next time he faces the object. It is also a game design choice whether objects should interact with the other parts of the model. For example, a ball may or may not be permitted to go through the walls in the model.
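A minimal C++ sketch of such a Newtonian object with pseudo-random parameters is given below; the gravity constant, the kick mapping and the decision not to test collisions against the world are illustrative choices, not the paper's model.

```cpp
#include <cmath>
#include <cstdlib>

// Minimal Newtonian active object of the "kicked ball" kind: constant gravity
// and no collision with the world model (whether the ball may pass through
// walls is a game design choice, as noted above).
struct Ball {
    double pos[3];
    double vel[3];
};

const double GRAVITY = -9.8;   // along the z axis, m/s^2 (illustrative units)

// Kick the ball with a pseudo-random horizontal direction and speed so that
// the player never knows where it will go the next time.
void kick(Ball& b, double minSpeed, double maxSpeed) {
    double r1 = std::rand() / (double)RAND_MAX;
    double r2 = std::rand() / (double)RAND_MAX;
    double speed = minSpeed + (maxSpeed - minSpeed) * r1;
    double angle = 2.0 * 3.14159265358979 * r2;
    b.vel[0] = speed * std::cos(angle);
    b.vel[1] = speed * std::sin(angle);
    b.vel[2] = 0.5 * speed;     // some upward component
}

// One simulation step over dt seconds: integrate gravity and velocity.
void step(Ball& b, double dt) {
    b.vel[2] += GRAVITY * dt;
    for (int i = 0; i < 3; ++i) b.pos[i] += b.vel[i] * dt;
}
```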
The examples above, which describe different initial and action behaviors, are by no means complete. Nevertheless, they represent a minimal set of behaviors that could be used in developing simple test games. All of them are supported by the simple simulation model developed by us, which was used for the experiments and game implementations described in the following sections.
6.3. Sound support
The game world as constructed may consist of parts with different sound environments which are relatively isolated from each other. For example, the sound environment in two rooms with a closed door between them might be quite different. On the other hand, when the door opens, noise from the nearby room should penetrate more easily. And this is just one of the difficulties we have had to deal with in attempting to create a convincing sound environment within a given game world with respect to each player.
Maybe the sound type that is simplest to handle, and which should be considered in this context, is ambient sound. This sound type functions as a background sound which penetrates everywhere independently of the geometry model features. Such a sound has no source position and therefore is not subject to any attenuation. Ambient sound is represented by a sound node with large enough minBack and minFront values in order to guarantee that the entire zone of interest falls into the inner sound ellipsoid as described in the VRML 2.0 specification. More difficult to implement are sounds associated with particular positions in the game world, e.g. those that have a source point. Sound sources



could be fixed, attached to static positions within the game world. They may also be attached to some active, moving objects. For example, flying vehicles like planes, helicopters, etc. may each produce a particular distinct sound which, if properly implemented, could considerably augment the game experience. And obviously, separate sound channels for each player and observer would be needed.
The sound server application (Fig. 4) is designed to run on INDY workstations. Each INDY workstation has four output sound channels which could be used independently or combined into two stereo pairs. The sound server application maps up to 16 software output sound queues onto these 4 hardware channels, with the queues automatically mixed for each channel. This means that a maximum of 16 simultaneously played sounds may exist. The number of actual sound streams (VirtualSoundChannels) is 256, and they are mapped to the 16 queues following the recommendations of the VRML 2.0 specification. The sound queues are supplied with data from sound files and controlled through the network. The sound servers can be configured to accept commands from different multicast groups, for example corresponding to different rooms in the game world. There is a separate group for the ambient sound, also used for priority voice messages. When moving sound sources enter a room, their sound commands have to be sent to the current multicast group and thus go to the appropriate sound server. A moving listener is represented in much the same way. The difference is that the sound server group channel number is modified, which effectively filters out all sound commands but those of the group representing the listener's close environment.
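The following C++ fragment sketches this bookkeeping in a simplified form: commands are filtered by multicast group, and virtual sound channels are folded onto the software queues and hardware channels by a plain modulo rule rather than by the VRML 2.0 based prioritization actually used; all names and structures other than the 256/16/4 figures quoted above are illustrative.

```cpp
// Simplified bookkeeping for the sound server: 256 virtual sound channels are
// folded onto 16 mixed software queues feeding the 4 hardware output channels.
const int NUM_VIRTUAL_CHANNELS = 256;
const int NUM_QUEUES           = 16;
const int NUM_HW_CHANNELS      = 4;

struct SoundCommand {
    int virtualChannel;   // 0..255
    int group;            // multicast group of the room the source is in
    int soundFileId;      // which prerecorded file to play
};

struct SoundServerState {
    int listenerGroup;    // group representing the listener's close environment
    int ambientGroup;     // always accepted; also carries priority voice messages
};

// Accept or drop a command depending on the listener's current room.
bool acceptCommand(const SoundServerState& s, const SoundCommand& c) {
    return c.group == s.listenerGroup || c.group == s.ambientGroup;
}

// Fold a virtual channel onto a software queue and a hardware output channel.
int toQueue(int virtualChannel) { return virtualChannel % NUM_QUEUES; }
int toHardware(int queue)       { return queue % NUM_HW_CHANNELS; }
```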
One diculty on the way to a more rigorous implementation of the guidelines of VRML 2.0 seems
to be volume adjustment dependent on the distances between the sound sources and the listeners.
The sound support hardware currently available on
SGI INDY does not allow independent attenuation
on a per channel basis. Doing this in the software,
on the other hand, requires signi®cant real time
processing. However, our implementation as
depicted in Fig. 4 employs a virtual sound channel
which can be independently attenuated in principle.
6.4. Game interface
During the game simulations and evaluations, we need two functionally distinct user interfaces: one for controlling the entire simulation environment and another for actually playing the game under evaluation. More than one instance of these interfaces might be needed and simultaneously operated.
6.4.1. Overall simulation control. The control of
the simulation environment is a task whose complexity varies depending on the game under simulation. For simple game simulations running on a
few workstations close to each other, most of the

setup and maintenance could be handled directly.

Nevertheless, larger simulations, which involve distant workstations, do require a specialized control interface. Some of the standard functions that should be provided are the starting and initialization of the simulation application tasks on all the participating workstations, the monitoring of their status, the gathering and logging of experimental data, etc. Apart from that, the interface should be able to support game specific details which need special handling. In an attempt to satisfy these requirements, we designed our simulation control interface as a general menu script wrapped around a set of executables and control scripts.
When the simulation control interface is first invoked, a menu of selectable options is presented to the simulation manager. The menu consists of a few generic functions pertinent to the menu interface itself and many other options derived on the basis of the content of a directory given as a parameter. Both executables and control scripts may be present there. Most of the functionality is handled through generic record and play tasks, which are supplied with appropriate parameters and executed concurrently. This way, different action sequences could be simultaneously transmitted over the network by simple selection of menu options. Depending on the simulation environment, the actual network distribution would vary, but it remains hidden from the simulation manager since the executables and the simulation specific parameters are supplied automatically. The simulation setups could be organized in hierarchical structures of subdirectories for easy maintenance. The main simulation script is parametrized in such a way that it could be used for controlling various types of simulations without modifications.
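As an illustration of the directory-driven menu idea (the real interface is a menu script, not C++), a sketch of how the selectable options could be derived from the content of the setup directory might look as follows; the function and parameter names are ours.

```cpp
#include <dirent.h>
#include <string>
#include <vector>

// Derive the selectable menu options from the content of a setup directory:
// every executable or control script found there becomes one menu entry,
// in the spirit of the directory-driven menu described above.
std::vector<std::string> buildMenu(const std::string& setupDir) {
    std::vector<std::string> options;
    DIR* dir = opendir(setupDir.c_str());
    if (!dir) return options;            // empty menu if the directory is missing
    for (dirent* e = readdir(dir); e != 0; e = readdir(dir)) {
        std::string name = e->d_name;
        if (name == "." || name == "..") continue;
        options.push_back(name);         // offered to the simulation manager
    }
    closedir(dir);
    return options;
}
```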
6.4.2. Game interfaces. The player interface to the game used in our experiments is depicted in Fig. 3. Along with common input devices such as mice, joysticks, game console controllers, etc., we also support camera based input. Our belief is that game control by natural human motions, not restricted by wiring or any other physical links, could significantly augment the game experience and satisfaction (Fig. 6). One of the technologies that provide for such capturing of unconstrained human motion is the motion tracking from Motion Analysis Corp. A system based on this technology has been in use for several years at our company. It consists of six cameras, fitted with strobe lights and high speed shutters, controlled by a specialized interface computer. The system captures human motions by tracking reflective markers attached to the performer's body. Nevertheless, given the price and the complexity of its operation, the system could hardly be used as a general computer game interface. Consequently, we decided to use a simplified, single camera motion tracking system for our game simulations. As shown in Fig. 3, the high signal to


Fig. 6. Controlling the flight of the simulated plane by human body motions. The image of the player was mirrored and then pasted over the actual screen shot.

noise ratio is achieved again by using strobe LED lights and a high speed shuttered video camera. Then, inexpensive sticky reflective tape as well as handheld markers can be used by the player. While the markers themselves do not restrict the player's motions in any way, for practical reasons we still have to limit the motions that we process. Essentially, these should be motions which could be distinguished on the basis of a single camera view. Indeed, as shown in Fig. 3, video images are processed by the dedicated motion analysis computer system, responsible for the marker tracking. Then a stream of 2D marker coordinates is transmitted over an RS232 link to an INDY workstation. These coordinates are matched to a physical motion model of the player's body that resides on the INDY as part of a separate user interface task. A motion analysis algorithm is used to determine headings, velocities and accelerations, which are then communicated to the game simulation applications on the network. While there is much room for experiments with different human motion models, our original idea was to control a flying motion in a way much like a bird's. If we imagine a flying human with wings stretched along his arms, then what should be the motions to control the flight? The sample in Fig. 6 shows one possible mapping, in which the plane's inclination and the consequent turn follow those of the player.

In addition to simple flight controls, the mapping also provides for game flow control by menu access and selection functions. Experiments with different numbers of markers and other motion detection algorithms are in progress. Options for direct camera connection and image processing on an SGI workstation are also being investigated.
7. EXPERIMENTS WITH THE GDS

The first experiments with physically based flight simulations at VSL were carried out in the scope of the VirtuaFly project. The resulting VirtuaFly demo application proved the feasibility of our ideas and was successfully shown at several events. Unfortunately, as initially implemented, VirtuaFly required significant computing and graphics power, so a multiple-CPU Onyx machine was needed to run it. In order to turn it into a commercial product, we needed to optimize its performance for more affordable systems such as the Indigo Impact. Improvements to the game interface, the game design and the functionality were planned at the same time.
7.1. VirtuaFly2
The VirtuaFly2 project was launched with the objective of developing a commercial amusement product based on the experience gained from VirtuaFly.



Fig. 7. The opening scene of VirtuaFly2 with the game logo flying over the city.

This was the first project to take advantage of the game design and simulation testbed environment.
VirtuaFly2 is a multiplayer virtual reality game based on simulated physical motions of the player's body. Player movements are detected by the motion tracking interface subsystem and mapped to the player's avatar in the virtual game world. The player can then control the flight of the avatar by natural body movements. VirtuaFly2 is played in front of a large screen, so that the public can observe both the players and their representations in the virtual game world. A stereo image is projected, which enhances the immersion effect. Since on-screen stereo images can only be seen through special glasses, in some cases with particularly large audiences we opted for plain non-stereo projections; even then, the players could wear HUDs and still see the game world in stereo.
VF2 is a game that brings a new and exciting immersive VR experience through a relaxed, casual flight in a world of the future (Fig. 7). It is a truly interactive, real-time game whose full effect can only be felt during play; here we can only provide static images recorded in the course of some real games. The color image pairs in Figs 8-10 are intentionally reduced in size so that they can be viewed in stereo with the naked eye.

Both the stereo and the non-stereo versions have been installed and shown to the public on many occasions throughout Japan, including, but not limited to:
. The Science Museum of Electricity in Nagoya city, in July 1996, with more than 4000 visitors;
. The Softpia Japan Opening Event in Oogaki city, Gifu prefecture, in August 1996, with almost 20,000 visitors;
. The NTT New Life Fair in Nagoya city, in September 1996, with more than 3000 visitors.
VirtuaFly2 was shown in stereo at the Virtual Reality Society of Japan Annual Conference in October 1996 [25], and in non-stereo at NICOGRAPH'96 and MULTIMEDIA'96 (Tokyo, 20-22 November). There was also a stand with VirtuaFly2 at the 78th Annual Convention and Trade Show of the International Association of Amusement Parks and Attractions, IAAPA'96 (New Orleans, 20-23 November). VirtuaFly2 has further been presented in a VisionDome, brought to Japan for the first time in January 1997 by CWC Studios. The VisionDome provides interactive immersive VR without the discomfort of head-mounted displays or special glasses.



Fig. 8. A tunnel exit to the city. The tunnel has flat, texture-mapped walls.

Fig. 9. A tunnel connecting the hangar with the electric room. The walls of the tunnel are non-flat, sculptured surfaces.

Fig. 10. The special goal section for those who like competitions. The far fog and the sky are represented as textures mapped on the surrounding extruded surface.

When played and observed from within the VisionDome, the non-stereo version of VF2 brought an unbelievable feeling of presence and truly 3D immersion, with a stunning effect on most of the visitors.
At most of the events a 250 MHz Maximum Impact with 128 MB RAM was used, which could support up to two players. A sample printout of the screen image for a two-player setup is given in Fig. 11. Alternatively, VirtuaFly2 can run in single-player mode when installed on an O2 machine. More players can enjoy the game together when several workstations are connected to a high-speed LAN. The local representation of two vehicles engaged in a high-speed pursuit, but actually simulated on remote hosts, is shown in Fig. 12.
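For readers unfamiliar with this kind of networked setup, the sketch below outlines one common way remote vehicles can be represented locally: each host periodically broadcasts the state of the vehicles it simulates, and the other hosts extrapolate that state between updates. The message fields, update rate and extrapolation scheme are illustrative assumptions, not the actual VirtuaFly2 or GDS protocol.

```python
# Hypothetical sketch of remote-vehicle replication over a LAN: the simulating
# host sends periodic state updates, and receiving hosts extrapolate position
# from the last known velocity (a simple form of dead reckoning).
import socket
import struct
import time

STATE_FMT = "!I6f"           # vehicle id, position (x, y, z), velocity (vx, vy, vz)
PORT = 5005                  # assumed UDP broadcast port

def send_state(sock, vehicle_id, pos, vel):
    """Broadcast the locally simulated vehicle state."""
    packet = struct.pack(STATE_FMT, vehicle_id, *pos, *vel)
    sock.sendto(packet, ("255.255.255.255", PORT))

def extrapolate(last_state, now):
    """Estimate a remote vehicle's current position between updates."""
    pos, vel, stamp = last_state
    dt = now - stamp
    return tuple(p + v * dt for p, v in zip(pos, vel))

# Sender side (one update; a real loop would run at the simulation rate):
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
send_state(sock, 7, (10.0, 2.0, -35.0), (4.0, 0.0, -1.0))

# Receiver side, drawing the remote vehicle a frame later:
last_state = ((10.0, 2.0, -35.0), (4.0, 0.0, -1.0), time.time())
print(extrapolate(last_state, time.time() + 0.016))
```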

The design and development of VirtuaFly2 was conducted in parallel with that of the game simulation environment itself. In this process, general components from the GDS were customized and later incorporated into the final version of VF2. Conversely, components originally designed for VF2 were optimized, upgraded with more general functions and established as components of the GDS. For example, some core functionality of the game server in the GDS was carried over to the final version of VF2 in the form of a component called the action server. We would like to stress again that the GDS addresses a wide range of game simulations, including single-player and multiplayer games on a LAN, with possible extensions to WANs.




Fig. 11. Views for the two players as they appear on the SGI Maximum Impact screen. The yellow
plane serves as a tour guide for the vehicle following it.

In contrast, VirtuaFly2 is a commercial product whose scope is limited mostly to location-based entertainment (LBE).
7.2. Continuing work
Currently we are conducting experiments with game simulations targeting lower-end platforms. We believe that games simulated on a network of O2 machines could eventually be hosted on Pentium-based PCs and played over the Internet. This expectation is also in line with recent developments bringing OpenGL to PC platforms and coupling it with advanced graphics chipsets.
At this stage, the prototype game worlds we experiment with are generated at run time from a database of shapes, paths and textures, following one-dimensional world descriptions as discussed in Section 6.1.

Fig. 12. A local vehicle (in the center), directly controlled by the player and representations of two
remote vehicles simulated on another machine.



Fig. 13. A section with flat walls that appear convex due to the applied texture patterns. The green object in the center is coming toward the player and will respond to hits in a realistic way.


Fig. 14. An underground channel with two sleeves suitable for navigation training. The red plane is
engaged in a pursuit of the green star.


Fig. 15. A narrow tunnel with sharp edges and an approaching yellow star that will explode if touched.

Fig. 16. A spacious tunnel with a violet star that jumps in the vicinity of approaching vehicles and brings bonus points when hit.



We are interested in enhancing the game impact by appropriately combining particular geometry, textures and object behaviors. A low-resolution game world model is built from sections implemented as extruded, textured objects (Figs 13-16). Automatic doors are implemented as active objects with proximity sensors. All the initial and action behaviors discussed in Section 6.2 are supported.
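As a rough illustration of this kind of run-time world construction, the sketch below expands a hypothetical one-dimensional world description into a list of extruded sections with optional active objects such as doors. The description format, section names and attributes are our own assumptions for illustration, not the actual GDS database layout.

```python
# Hypothetical sketch: expanding a one-dimensional world description into
# game-world sections. The tokens, shapes and attributes below are invented
# for illustration; the real GDS uses its own database of shapes, paths and
# textures (Section 6.1).
SECTION_LIBRARY = {
    "T": {"shape": "tunnel_narrow", "texture": "brick"},
    "H": {"shape": "hangar",        "texture": "metal"},
    "C": {"shape": "city_street",   "texture": "asphalt"},
    "D": {"shape": "tunnel_wide",   "texture": "brick", "door": True},
}

def build_world(description):
    """Turn a 1-D description such as 'T H D C' into a list of section records."""
    world = []
    for index, token in enumerate(description.split()):
        entry = SECTION_LIBRARY[token]
        section = {
            "index": index,
            "shape": entry["shape"],
            "texture": entry["texture"],
            "active_objects": [],
        }
        if entry.get("door"):
            # Automatic doors become active objects with a proximity sensor.
            section["active_objects"].append({"type": "door", "sensor_radius": 15.0})
        world.append(section)
    return world

print(build_world("T H D C"))
```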
Currently, experiments are being carried out with different game strategies, point award and timing systems, and so on. We simulate game challenges through generic actions, as previously discussed. We create incentives for the players to collect coins, spare vehicles, bonus points, etc., while avoiding collisions with dangerous objects as well as with furniture, doors and walls. Color, shape and sound cues are brought in to prompt certain user actions, to facilitate object discrimination and to encourage early recognition. Player attitude toward such cues is investigated, as well as the response to misleading signals and to delayed or distorted cue patterns.
In the course of the experiments, actions and object behaviors can be derived on a pseudo-random basis. This guarantees a unique experience, since exactly the same game is never played twice. For example, simulations of different traffic conditions can be performed on one and the same database, with a set of active objects controlled by a stand-alone stochastic model. Different simulation runs for the same traffic conditions will produce statistically equivalent results, yet still look different to the user. More precisely, while general properties such as the average number of vehicles will not vary much, the actual events, their timing and their order will differ. This means that the player will meet different vehicles at different times and thus has a chance to experience a variety of simulated situations.
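A minimal sketch of what such a stand-alone stochastic model might look like is given below; seeding the generator differently per run keeps the runs statistically equivalent but distinct in their actual event timing. The parameter names and the exponential inter-arrival assumption are ours, not a description of the actual GDS model.

```python
# Hypothetical sketch of a stand-alone stochastic traffic model. Runs with the
# same parameters but different seeds produce statistically equivalent traffic
# (similar average vehicle counts) while the individual events differ in
# timing and order.
import random

def traffic_events(mean_interval, duration, seed):
    """Generate (time, vehicle_type) arrival events over a game session."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while t < duration:
        t += rng.expovariate(1.0 / mean_interval)   # exponential inter-arrival times
        events.append((round(t, 1), rng.choice(["car", "truck", "star"])))
    return events

# Two runs with identical traffic conditions but different seeds: the average
# number of vehicles is similar, but the actual encounters differ.
run_a = traffic_events(mean_interval=5.0, duration=60.0, seed=1)
run_b = traffic_events(mean_interval=5.0, duration=60.0, seed=2)
print(len(run_a), len(run_b))
print(run_a[:3])
print(run_b[:3])
```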

8. CONCLUSIONS

The Game Design and Simulation environment discussed in this paper proved to be a powerful tool that significantly facilitates and enhances the entire game design and development process. Its usage provides for measurable evaluation of game designs and functionality and enforces efficient strategies for building game world and character models. This results in more rigorous planning and implementation with a guaranteed level of performance.


Acknowledgements: We would like to thank Jun Hong for leading the VirtuaFly project, Takayuki Kondo for the creative design, and all other colleagues who supported our research.

REFERENCES

1. Brutzman, D., Macedonia, M. R. and Zyda, M. J., Internetwork Infrastructure Requirements for Virtual Environments. Virtual Reality Modeling Language (VRML) Symposium, San Diego, California, December 13-15, 1995.
2. Macedonia, M. R. and Zyda, M. J., A Taxonomy for Networked Virtual Environments. IEEE Multimedia, 4(1), January-March 1997.
3. Brutzman, D., Graphics Internetworking: Bottlenecks and Breakthroughs. In Digital Illusion: Entertaining the Future With Interactive Technology, ed. Clark Dodsworth, Addison-Wesley, Reading, Massachusetts, 1997.
4. Stone, S. et al., Mobile Agents and Smart Networks for Distributed Simulations. Proc. 14th Distributed Simulations Conference, Orlando, FL, March 11-15, 1996.
5. Singhal, S. K. and Cheriton, D. R., Using Projection Aggregations to Support Scalability in Distributed Simulation. Proc. 16th International Conference on Distributed Computing Systems, IEEE Computer Society Press, Hong Kong, May 1996.
6. Pratt, S. et al., Implementation of the IsGroupOf PDU for Network Bandwidth Reduction. Proc. 15th DIS Workshop, Orlando, FL, September 16-20, 1996.
7. Smith, W. G. and Koifman, A., A Distributed Interactive Simulation Intranet Using RAMP, a Reliable Adaptive Multicast Protocol. Proc. 14th Workshop on Standards for the Interoperability of Distributed Simulations, Orlando, FL, March 1996.
8. Holbrook, H. V., Singhal, S. K. and Cheriton, D. R., Log-based Receiver-Reliable Multicast for Distributed Interactive Simulation. Proc. SIGCOMM '95, ACM Press, Cambridge, MA, August 1995.
9. Lea, R. et al., Issues in the Design of a Scalable Shared Virtual Environment for the Internet. Proc. 30th Hawaii International Conference on System Sciences, January 1997.
10. Lea, R. et al., Technical Issues in the Design of a Scalable Shared Virtual World. Sony Research Forum SRF'95, Tokyo, 1995.
11. Carlsson, C. and Hagsand, O., DIVE: A Multi-User Virtual Reality System. Proc. IEEE Virtual Reality Annual International Symposium, 1993, pp. 394-400.
12. Honda, Y. et al., Virtual Society: Extending the WWW to Support a Multi-User Interactive Shared 3D Environment. Proc. VRML '95, San Diego, USA, December 1995.
13. Benford, S. et al., Managing Mutual Awareness in Collaborative Virtual Environments. Proc. ACM SIGCHI Conference on Virtual Reality (VRST '94), Singapore, August 23-26, 1994, ACM Press.
14. Thalmann, D. et al., Sharing VLNET Worlds on the Web. Proc. Compugraphics '96.
15. Capin, T. K. et al., Virtual Human Representation and Communication in VLNET. IEEE Computer Graphics and Applications, 17(2), March-April 1997.
16. Greenhalgh, C. and Benford, S. D., MASSIVE: A Virtual Reality System for Tele-conferencing. ACM Transactions on Computer-Human Interaction (TOCHI), 2(3), 1995, 239-261.
17. Singhal, S. K. and Cheriton, D. R., Exploiting Position History for Efficient Remote Rendering in Networked Virtual Reality. Presence: Teleoperators and Virtual Environments, 4(2), Spring 1995.
18. Zelesko, M. J. and Cheriton, D. R., Specializing Object-Oriented RPC for Functionality and Performance. Proc. 16th International Conference on Distributed Computing Systems, IEEE Computer Society Press, Hong Kong, May 1996.
19. Mandeville, J. et al., GreenSpace: Creating a Distributed Virtual Environment for Global Applications. Proc. IEEE Networked Reality Workshop, Boston, MA, October 26-28, 1995.
20. Michael, D. and Brock, D. L., A Multiresolution Synthetic Environment Based on Observer Viewpoint. Proc. 15th Workshop on Distributed Interactive Simulation, Orlando, FL, September 1996.
21. Michael, D. and Brock, D. L., A 3D Environment for an Observer Based Multiresolution Architecture. 1997 Spring Simulation Interoperability Workshop, Orlando, FL, March 3-7, 1997.
22. Taylor, D., The VR-Protocol and What it Offers HLA. Orlando, FL, March 3-7, 1997.
23. Gustavson, P., DirectPlay DIS: Another Way to HLA? 1997 Spring Simulation Interoperability Workshop, Orlando, FL, March 3-7, 1997.
24. Reddy, M., A Survey of Level of Detail Support in Current Virtual Reality Solutions. Virtual Reality, 1(2), 1995, Virtual Press.
25. Kanev, K. and Sugiyama, T., Virtua-Fly. Proc. Virtual Reality Society of Japan Annual Conference, Tokyo, Japan, October 1996.