
Licenciatura Thesis

An Investigation Into the Use of

Synthetic Vision
for NPC’s/Agents in

Computer Games
Author
Enrique, Sebastian

Director
Watt, Alan
University of Sheffield
United Kingdom

Co-Director
Mejail, Marta
Universidad de Buenos Aires
Argentina





Departamento de Computación


Facultad de Ciencias Exactas y Naturales
Universidad de Buenos Aires
Argentina
September 2002


Abstract
The role and utility of synthetic vision in computer games is discussed. An
implementation of a synthetic vision module is presented, based on two viewports
rendered in real time, one representing static information and the other dynamic, with
false colouring used for object identification, depth information and movement
representation. The utility of this synthetic vision module is demonstrated by using it
as input to a simple rule-based AI module that controls agent behaviour in a
first-person shooter game.


To my parents,
who sacrificed everything for my education.


An Investigation Into the use of Synthetic Vision for NPC’s/Agents in Computer Games
Enrique, Sebastian – Licenciatura Thesis – DC – FCEyN – UBA - September 2002


Table of Contents

TABLE OF CONTENTS
ACKNOWLEDGMENTS
INTRODUCTION
  GENERAL OVERVIEW
  IN DEPTH PRE-ANALYSIS
PREVIOUS WORK
PROBLEM STATEMENT
SYNTHETIC VISION MODEL
  STATIC VIEWPORT
  DEFINITIONS
  LEVEL GEOMETRY
  DEPTH
  DYNAMIC VIEWPORT
  BUFFERS
  REMARKS
BRAIN MODULE: AI
  AI MODULE
  FPS DEFINITION
  MAIN NPC
  POWER-UPS
  BRONTO BEHAVIOUR
  BEHAVIOUR STATES SOLUTION
  DESTINATION CALCULATION
  BEZIER CURVE GENERATION
  WALK AROUND
  LOOKING FOR A SPECIFIC POWER-UP
  LOOKING FOR ANY POWER-UP
  LOOKING QUICKLY FOR A SPECIFIC POWER-UP
  KNOWN PROBLEMS
  EXTENDED BEHAVIOUR WITH DYNAMIC REACTIONS
  DON’T WORRY
  AVOID
  INTERCEPT
  DEMONSTRATION
  LAST WORD ABOUT DYNAMIC AI
GAME APPLICATIONS ANALYSIS
  ADVENTURES
  FIRST PERSON SHOOTERS
  THIRD PERSON ACTION GAMES
  ROLE PLAY GAMES
  REAL TIME STRATEGY
  FLIGHT SIMULATIONS
  OTHER SCENERIES
CONCLUSIONS
FUTURE WORK
REFERENCES
APPENDIX A – CD CONTENTS
APPENDIX B – IMPLEMENTATION
  FLY3D_ENGINE CLASSES
  FLYBEZIERPATCH CLASS
  FLYBSPOBJECT CLASS
  FLYENGINE CLASS
  FLYFACE CLASS
  LIGHTS CLASSES
  SPRITE_LIGHT CLASS
  SVISION CLASSES
  AI CLASS
  SVOBJECT CLASS
  VISION CLASS
  VIEWPORT CLASSES
  VIEWPORT CLASS
  WALK CLASSES
  CAMERA CLASS
  CAMERA2 CLASS
  OBJECT CLASS
  PERSON CLASS
  POWERUP CLASS
APPENDIX C – SOFTWARE USERS’ GUIDE
  SYSTEM REQUIREMENTS
  INSTALLING DIRECTX
  CONFIGURING FLY3D
  RUNNING FLY3D
  RUNNING SYNTHETIC VISION LEVELS
  MODIFYING SYNTHETIC VISION LEVELS PROPERTIES
APPENDIX D – GLOSSARY
APPENDIX E – LIST OF FIGURES
APPENDIX F – LIST OF TABLES



Acknowledgments
I will be forever grateful to Alan Watt, who agreed to guide me through the whole
project from the very beginning. I will never forget all the help he offered me during
my short trip to Sheffield, despite the hard time he was going through. Special thanks
to his wife, Dionéa, who is an extremely kind person.
From Sheffield as well, I want to thank Manuel Sánchez and James Edge for being very
friendly and easy-going. Special thanks to Steve Maddock for all the help and support
he gave me, and for the “focusing” talk we had.
Very special thanks to Fabio Policarpo, for many reasons: he gave me access to the
early full source code and subsequent versions of the engine; he helped me with every
piece of code when I was stuck or simply did not know what to do; he even gave me a
place in his offices in Niterói. Furthermore, all the people from Paralelo (Gilliard Lopes,
Marcos, etc.) and Fabio’s friends were very kind to me. Passion for games can be
smelled inside Paralelo’s offices.
I shall be forever indebted to Alan Cyment, Germán Batista and Javier Granada for
making the presentation of this thesis in English possible.
Thanks to all the professors from the Computer Science Department who make a
difference, like Gabriel Wainer, who teaches not only computer science but a work
philosophy as well.
I must mention all of my university classmates and friends, with whom I shared many
long, often funny, days and nights of study. Especially Ezequiel Glinsky, an incredible
teammate who was by my side at every academic step.
Thank you, Cecilia, for standing by my side all these years.
I also want to thank Irene Loiseau, who initiated the contact with Alan, and all the
people from the ECI 2001 committee, who accepted my suggestion to invite Alan to
Argentina to give a very nice, interesting, and successful course.
And, finally, very special thanks to Claudio Delrieux, who strengthened my passion for
computer graphics. He was always ready to help me unconditionally before and during
every stage of this thesis.


Introduction
General Overview
Today, 3D computer games usually use artificial intelligence (AI) for the non-player
characters (NPCs), taking information directly from the internal game database. The AI
controls the NPCs’ movements and actions with knowledge of the entire world state,
effectively cheating unless the developer imposes constraints.
Since a large proportion of computer-controlled opponents or allies are human-like, it
seems interesting and logical to give those characters senses. This means that the
characters could have complete or partial sensory systems, such as vision, hearing,
touch, smell, and taste. They could then process the information gathered by those
systems in a brain module, learn about the world, and act according to the character’s
personality, feelings and needs. The character becomes a synthetic character that lives
in a virtual world.
The research field of synthetic characters, or autonomous agents, investigates the use
of senses combined with personality in order to make characters’ behaviour more
realistic, using cognitive memories and rule-based systems to produce agents that
seem to be alive, interacting in their own world and perhaps with some human
interaction.
However, not much effort has been devoted to investigating the use of synthetic
characters in real-time 3D computer games. In this thesis, we propose a vision system
for NPCs, i.e. a synthetic vision, and analyze how useful and feasible its use might
prove in the computer games industry.

We think that the use of synthetic vision together with complex brain modules could
improve gameplay and make for better and more realistic NPCs.
Note that our efforts were focused on investigating the use of synthetic vision, not the
AI that uses it. For that reason, we developed only a simple AI module in a 3D engine
in which we implemented our vision approach.

In Depth Pre-Analysis
We can regard synthetic vision as a process that supplies an autonomous agent with a
2D view of his environment. The term synthetic vision is used because we bypass the
classic computer vision problems. As Thalmann et al. [Thal96] point out, we skip the
problems of distance detection, pattern recognition and noisy images that would
pertain to vision computations for real robots. Instead, computer vision issues are
addressed in the following ways:
1) Depth perception – we can supply pixel depth as part of an autonomous agent’s
synthetic vision. The actual position of objects in the agent’s field of view is then
available by inverting the modelling and projection transforms.
2) Object recognition – we can supply object function or identity as part of the
synthetic vision system.


3) Motion determination – we can code the motion of object pixels into the synthetic
vision viewport.
Thus the agent AI is supplied with a high-level vision system rather than an
unprocessed view of the environment. As an example, instead of just rendering the
agent’s view into a viewport and then having an AI interpret the view, we render
objects in a colour that reflects their function or identity (although there is nothing to
prevent an implementation where the agent AI has to interpret depth – from binocular
vision, say – and also recognize objects). With object identity, depth and velocity
presented, the synthetic vision becomes a plan of the world as seen from the agent’s
viewpoint.
We can also consider synthetic vision in relation to a program that controls an
autonomous agent by accessing the game database and the current state of play.
Often a game database will be tagged with extra pre-calculated information so that an
autonomous agent can give a game player an effective opponent. For instance, areas
of the database may be tagged as good hiding places (they may be shadow areas), or
pre-calculated journey paths from one database node to another may be stored. In
using synthetic vision, we change the way the AI works, from a prepared, programmer-oriented behaviour to the possibility of novel, unpredictable behaviour.
A number of advantages accrue from allowing an autonomous agent to perceive his
environment via a synthetic vision module. First, it may enable an AI architecture for an
autonomous agent that is more ‘realistic’ and easier to build. Here we refer to an ‘on
board’ AI for each autonomous agent. Such an AI can interpret what is seen by the
character, and only what is seen. Isla and Blumberg [Isla02] refer to this as sensory
honesty and point out that it “…forces a separation between the actual state of the
world and the character’s view of the state of the world”. Thus the synthetic vision may
render an object but not what is behind it.
Second, a number of common games operations can be controlled by synthetic vision.
A synthetic vision can be used to implement local navigation tasks such as obstacle
avoidance. Here the agent’s global path through a game level may be controlled by a
high level module (such as A* path planning or game logic). The local navigation task
may be to attempt to follow this path by taking local deviations where appropriate.
Also, synthetic vision can be used to reduce collision checking. In a games engine this
is normally carried out every frame by checking the player's bounding box against the
polygons of the level or any other dynamic object. Clearly if there is free space ahead
you do not need every-frame collision checking.
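As an illustrative sketch of this optimisation (Python rather than the thesis’s Fly3D C++ code; the central window size and the depth threshold are arbitrary assumptions, not values from the thesis):

```python
def free_space_ahead(depth_buffer, width, height, threshold=0.25):
    """Scan a central window of the agent's depth buffer.

    depth_buffer is a row-major list of width*height normalised depths
    in [0, 1]. If every pixel in the window is farther than `threshold`,
    there is free space ahead and per-frame collision checks can be
    skipped; otherwise something is close and checking must continue.
    """
    x0, x1 = width // 4, 3 * width // 4
    y0, y1 = height // 4, 3 * height // 4
    for y in range(y0, y1):
        for x in range(x0, x1):
            if depth_buffer[y * width + x] < threshold:
                return False  # an obstacle is near: keep collision checking
    return True  # nothing close ahead: collision checks can be skipped
```

The window avoids reacting to floor and ceiling pixels at the viewport edges; a real implementation would tune both the window and the threshold to the agent’s speed.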
Third, easy agent-directed control of the synthetic vision module may be possible: for
example, looking around to resolve a query, or following the path of a moving object.
The former is routinely handled as a rendering operation. A synthetic vision can also
function as part of a method for implementing inter-agent behaviour.
Thus, the provision of synthetic vision reduces to a specialized rendering which means
that the same technology developed for fast real-time rendering of complex scenes is
exploited in the synthetic vision module. This means that real-time implementation is
straightforward.
However, despite the ease of producing a synthetic vision, it seems to be only an
occasionally employed model in computer games and virtual reality. Tu and
Terzopoulos [TuTe94] made an early attempt at synthetic vision for artificial fishes.
The emphasis of this work is a physics-based model and reactive behaviour such as
obstacle avoidance, escaping and schooling. Fishes are equipped with a “cyclopean”


vision system with a 300 degree field of view. In their system an object is “seen” if any
part of it enters the view volume and is not fully occluded by another object.
Terzopoulos et al [Terz96] followed this with a vision system that is less synthetic in
that the fishes’ vision system is initially presented with retinal images which are
binocular photorealistic renderings. Computer vision algorithms are then used to
accomplish, for example, predator recognition. This work thus attempts to model, to
some extent, the animal visual processes rather than bypassing these by rendering
semantic information into the viewport.
In contrast, a simpler approach is to use false colouring in the rendering to represent
semantic information. Blumberg [Blum97] uses this approach in a synthetic vision
based on image motion energy that is used for obstacle avoidance and low-level
navigation. A formula derived from image frames is used to steer the agent, which, in
this case, is a virtual dog. Noser et al. [Nose95] use false colouring to represent object
identity and, in addition, introduce a dynamic octree to represent the visual memory of
the agent.
Kuffner and Latombe [Kuff99] also discuss the role of memory in perception-based
navigation, with an agent planning a path based on its learned model of the world.
One of the main aspects that must be addressed for computer games is to make the
synthetic vision fast enough. We achieve this, as discussed above, by making use of
existing real-time rendering speed-ups as used to provide the game player with a view
of the world. We propose that two viewports can be effectively used to provide for two
different kinds of semantic information. Both use false colouring to present a rendered
view of the autonomous agent’s field of view. The first viewport represents static
information and the second viewport represents dynamic information. Together the two
viewports can be used to control agent behaviour. We discuss only the implementation
of simple, memoryless reactive behaviour, although far more complex behaviour is
implementable by exploiting synthetic vision together with memory and learning. Our
synthetic vision module is used to demonstrate low-level navigation, fast object
recognition, fast dynamic object recognition and obstacle avoidance.


Previous Work
Beyond the brief mention of current research in the previous section, it is necessary to
give a short description of each of the relevant works in the field.
We can think of the following two opposing approaches:

• Pure Synthetic Vision, also called Artificial Vision in [Nose95b]; quoting that
reference, it is “a process of recognizing the image of the real environment captured
by a camera (…) an important research topic in robotics and artificial intelligence.”

• No Vision At All, that is, not using any vision system for our characters.

We can imagine a straight line, with Pure Synthetic Vision at one extreme and No
Vision At All at the other end. Each of the approaches reviewed below falls somewhere
in between.
Figure 1. Approaches: a graphical view (a line with Pure Synthetic Vision at one end and No Vision At All at the other).

Bruce Blumberg describes in [Blum97a] a synthetic vision based on motion energy that
he uses for obstacle avoidance and low-level navigation of his autonomous characters.
He renders the scene with false colouring, taking the information from a weighted
formula that combines flow (pixel colours from the last frames) and mass (based on
textures), dividing the image in half and taking differences in order to steer. A detailed
implementation can be found in [Blum97b]. Related research on synthetic characters
can be found in [Blum01].
James Kuffner presents in [Kuff99a] a false-colouring approach that he uses for digital
actor navigation with collision detection, implementing visual memory as well. Details
can be found in [Kuff99b]. For a quick introduction and other related papers, see
[Kuff01].
Hansrudi Noser et al. use in [Nose95a] synthetic vision for digital actor navigation.
Vision is the only connection between the environment and the actors. Obstacle
avoidance, as well as knowledge representation, learning and forgetting, are solved
based on the actors’ vision system. A voxel-based memory is used for all these tasks,
as well as for path searching. Their vision representation makes it difficult to quickly
identify visible objects, which is one of our visual system’s goals; even so, it would be
very interesting to integrate their visual memory proposals with this thesis. The same
ideas are used in [Nose98], plus aural and tactile sensors, to build a simple tennis
game simulation.
Kuffner’s and Noser’s vision works were the most influential ones during the making of
this thesis.
Olivier Renault et al. [Rena90] develop a 30x30-pixel synthetic vision for animation,
using the front buffer for a normal rendering, the back buffer for object identification


and the z-buffer for distances. They propose high-level behaviours for going through a
corridor while avoiding obstacles.
Rabie and Terzopoulos implement in [Rabi01] a stereoscopic vision system for artificial
fish navigation and obstacle avoidance. Similar ideas are used by Tu and Terzopoulos
in [TuTe94] to develop fish behaviours more deeply.
Craig W. Reynolds explains in [Reyn01] a set of behaviours for autonomous agents’
movement, such as ‘pursuit’, ‘evasion’, ‘wall following’ and ‘unaligned collision
avoidance’, among others. This very interesting work could serve as a guide for
developing complex AI modules for computer game NPCs.

Damián Isla discusses in [Isla02] the AI potential of synthetic characters.
John Laird discusses in [Lair00a] human-level AI in computer games, and proposes in
[Lair01] design goals for autonomous synthetic characters. His research is basically
focused on improving artificial intelligence for NPCs in computer games; an example is
the Quakebot [Lair00b] implementation. For more of J. Laird’s research, refer to
[Lair02].
Most other investigations are based on the previously mentioned authors’ ideas.


Problem Statement
We will define our concept of Synthetic Vision as the visual system of a virtual
character who lives in a 3D virtual world. This vision represents what the character
sees of the world, the part of the world sensed by his eyes. Technically speaking, it
takes the form of the scene rendered from his point of view.
However, in order to have a vision system useful for computer games, it is necessary
to find vision representations from which we can take enough information to make
autonomous decisions.
The Pure Synthetic Vision approach is not useful today for computer games, since the
amount of information that can be gathered in real time is very limited: little more
than shape recognition and obstacle avoidance could be achieved. This is a research
field in its own right; any robot vision text shows the problems involved.
No Vision At All is what we do not want.
So, our task is to find a model that falls somewhere in between (Figure 2).

Figure 2. The place our model aims for, on the line between Pure Synthetic Vision and No Vision At All.


Synthetic Vision Model
Our goal was to create a special rendering from the character’s point of view,
producing a screen representation in a viewport from which the AI system could
eventually undertake:

• Obstacle avoidance.
• Low-level navigation.
• Fast object recognition.
• Fast dynamic object detection.

We must understand “fast” from a subjective, results-dependent point of view: “fast
enough” to do an acceptable job in a real-time 3D computer game.
To reach these goals, we propose a synthetic vision approach that uses two viewports:
the first represents static information, the second dynamic information.
We will assume a 24-bit RGB model to represent the colour of each pixel.

Figure 3 shows an example of the normal rendered viewport and the corresponding
desired static viewport.

Figure 3. Static viewport (left) obtained from the normal rendered viewport (right).

Static Viewport
The static synthetic vision is mainly useful for identifying objects, taking the form of a
viewport with false colouring similar to that described in [Kuff99].

Definitions
We will make the following definitions:
Object. An item with 3D shape and mass.
Class of Object. A grouping of objects with the same properties and nature.
In our context, for example, the health power-up located near the fountain on some
imaginary game level is an object, whilst all the health power-ups of the game level
constitute a class of object.


Each class of object has an associated colour id. That is, there exists a function c that
maps classes of objects to colours. This function is injective, since no two different
classes of objects have the same colour.
So, given
CO : the set of classes of objects,
and
C = [0, 1] x [0, 1] x [0, 1],
the set of 3D vectors with real components between zero and one, where each vector
represents a colour in the RGB model (the first component corresponds to the quantity
of red, the second to green, and the third to blue), we define
c : CO → C
the mapping function from classes of objects to colours.
∀ co1, co2 ∈ CO : c(co1) = c(co2) ⇒ co1 = co2
The function is injective. Since CO is a finite set and C is infinite, not all the possible
colours are used, and the function is neither surjective nor bijective.
However, we can create a mapping table in order to know which colour corresponds to
each class of object. See the example in Table 1.
Class of Object     Colour
Health Power-Ups    (1, 1, 0)
Ammo Power-Ups      (0, 1, 1)
Enemies             (1, 0, 1)

Table 1. Classes of Objects and Colour Ids Mapping Table.
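The mapping table can be sketched in code. The following is an illustrative Python sketch (not the thesis implementation; the dictionary keys and the helper `class_from_colour` are invented for illustration), including a check that the mapping is injective:

```python
# Colour-id table: one RGB triple per class of object (Table 1).
COLOUR_ID = {
    "health_powerup": (1.0, 1.0, 0.0),
    "ammo_powerup":   (0.0, 1.0, 1.0),
    "enemy":          (1.0, 0.0, 1.0),
}

# Injectivity: no two classes may share a colour, otherwise the
# static viewport would be ambiguous.
assert len(set(COLOUR_ID.values())) == len(COLOUR_ID)

def class_from_colour(rgb):
    """Invert the mapping: recover the class of object from a pixel
    of the static viewport, or None for an unknown colour."""
    inverse = {colour: cls for cls, colour in COLOUR_ID.items()}
    return inverse.get(tuple(rgb))
```

Because c is injective, the inverse lookup is well defined, which is exactly what lets the AI read object identity straight off the static viewport.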

Level Geometry
Typically, besides objects as defined in the previous section, 3D computer games
distinguish between ‘normal objects’, or items, and level geometry. The definitions
are:
Item. Any object, as defined in the previous section, that is not part of the level
geometry.
Level Geometry. All the polygons that make up the structure of each game level, i.e.
floor, walls, etc.
However, the level geometry so defined may not strictly be an object, because it is
usually made of a shell of polygons that does not necessarily make up a solid object
with mass.


Despite what has just been said, we will divide the level geometry into three classes
that will be part of the set CO of classes of objects:

• Floor: any polygon from the level geometry whose normalised normal has a Z
component greater than or equal to 0.8.
• Ceiling: any polygon from the level geometry whose normalised normal has a Z
component less than or equal to -0.8.
• Wall: every polygon from the level geometry that does not fit either of the two
previous cases.

We assume a coordinate system where Z points up, X right, and Y forward (figure 4).
Figure 4. The coordinate system used: Z up, X right, Y forward.
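The floor/ceiling/wall rule can be sketched as follows (an illustrative Python sketch, not the thesis’s Fly3D C++ implementation; the function name is invented):

```python
def classify_level_polygon(normal):
    """Classify a level-geometry polygon as floor, ceiling or wall
    from the Z component of its normalised normal (Z points up)."""
    nx, ny, nz = normal
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    nz /= length  # normalise so the thresholds are scale-independent
    if nz >= 0.8:
        return "floor"
    if nz <= -0.8:
        return "ceiling"
    return "wall"
```

The 0.8 threshold corresponds to polygons within roughly 37° of horizontal, so gentle ramps still count as floor.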


In Table 2, we give an extended mapping table adding the level geometry classes of
objects.

Class of Object     Colour
Health Power-Ups    (1, 1, 0)
Ammo Power-Ups      (0, 1, 1)
Enemies             (1, 0, 1)
Wall                (0, 0, 1)
Floor               (0, 1, 0)
Ceiling             (1, 0, 0)

Table 2. Extended Classes of Objects and Colour Ids Mapping Table with Level Geometry.

Depth
Identification is not enough: we also need to know something about position. The
variables that every 3D game engine manages, and for which we have a specific use,
are:
Camera Angle. The angle of vision, from the character’s point of view, with which the
scene is rendered.
Near Plane (NP). The near plane of the scene’s view frustum [Watt01].
Far Plane (FP). The far plane of the scene’s view frustum [Watt01].
We can obtain knowledge about position by combining the previous variables with the
depth information of each pixel rendered in the static viewport.
When the static viewport is being rendered, a depth buffer is used to decide whether a
given pixel must be drawn. When the rendering is finished, each pixel of the static
viewport has a corresponding value in the depth buffer.


Let dx,y = DepthBuffer[x, y] be the depth value of the pixel at coordinates (x, y), with
dx,y ∈ [0, 1]
In world coordinates, the perpendicular distance between the character and the object
that the pixel belongs to is defined as:
P = (FP – NP) * dx,y + NP
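The conversion can be sketched directly from the formula (illustrative Python, not the thesis code). One caveat worth hedging: the formula assumes a depth buffer that is linear in eye-space distance; a hardware z-buffer stores a non-linear value that would first need linearising.

```python
def perpendicular_distance(d, near_plane, far_plane):
    """P = (FP - NP) * d + NP, where d is the normalised depth-buffer
    value in [0, 1] for a pixel of the static viewport.

    Assumes d is linear in eye-space distance; raw hardware z-buffer
    values are non-linear and must be linearised before using this."""
    return (far_plane - near_plane) * d + near_plane
```

For example, with NP = 1 and FP = 100, a depth value of 0 maps to the near plane (distance 1) and a depth value of 1 maps to the far plane (distance 100).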

Dynamic Viewport
The dynamic synthetic vision is useful, as its name implies, for representing the
instantaneous movement of the objects seen in the static viewport. Like the static
viewport, it takes the form of a viewport with false colouring but, in place of object
identification, colours represent the velocity vector of each object. As we use the RGB
model, we will define how to obtain each pixel colour component. The dynamic
viewport has the same size as the static one.
R, G, B ∈ [0, 1]
The red, green, and blue components are real numbers between 0 and 1.
And given
Vmax ∈ ℜ+
the maximum velocity allowed in the system, a constant positive real number: if Vx,y
is the velocity vector of the object located at coordinates (x, y) in the static viewport,
then
R(x,y) = min(||Vx,y|| / Vmax, 1)
The red component is the minimum of 1 and the velocity magnitude of the object
located at (x, y) in the static viewport divided by the maximum velocity allowed.
If D is the direction/vision vector of the agent, we can normalise it and the velocity
vector Vx,y:
DN = D / ||D||
Vx,yN = Vx,y / ||Vx,y||
We then obtain
c = Vx,yN · DN = Vx,yN1 DN1 + Vx,yN2 DN2 + Vx,yN3 DN3
c ∈ [-1, 1]


c is the dot product, that is, the cosine of the angle between Vx,yN, the normalised
velocity vector of the object located at (x, y) in the static viewport, and DN, the
normalised direction/vision vector of the non-player character. The angle ranges from
0 to 180°, so c is a real number between -1 and 1.
G(x,y) = c * 0.5 + 0.5
The green component maps the cosine c into the interval [0, 1]; a cosine of zero
produces a green component of 0.5.
s = √(1 – c²)
s ∈ [0, 1]
s is the sine of the angle between Vx,yN and DN, calculated from the cosine of the
same angle; it is a real number between 0 and 1.
B(x,y) = s
The blue component is a direct mapping of the sine s into the interval [0, 1]. A cosine
of zero therefore produces a green component of 0.5 and a blue component of 1.
With these definitions, a fully static object will have the colour (0.0, 0.5, 1.0). Dynamic
objects will have different colours depending on their movement direction and
velocity.
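The whole per-pixel encoding can be collected into one function (an illustrative Python sketch, not the thesis code; treating a zero-velocity object as having cosine 0 is an assumption made here so the result matches the static colour (0.0, 0.5, 1.0) stated above, since the angle is undefined when ||V|| = 0):

```python
import math

def dynamic_pixel_colour(velocity, view_dir, v_max):
    """Encode an object's velocity as the dynamic-viewport colour:
    R = min(||V|| / Vmax, 1); G maps the cosine of the angle between
    V and the agent's view direction D into [0, 1]; B is the sine."""
    vx, vy, vz = velocity
    speed = math.sqrt(vx * vx + vy * vy + vz * vz)
    r = min(speed / v_max, 1.0)
    if speed == 0.0:
        c = 0.0  # assumption: static object treated as cos = 0
    else:
        dx, dy, dz = view_dir
        d_len = math.sqrt(dx * dx + dy * dy + dz * dz)
        c = (vx * dx + vy * dy + vz * dz) / (speed * d_len)
    g = c * 0.5 + 0.5
    b = math.sqrt(max(0.0, 1.0 - c * c))
    return (r, g, b)
```

An object moving straight towards the agent’s view direction at Vmax encodes as (1.0, 1.0, 0.0); a static object encodes as (0.0, 0.5, 1.0).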

Buffers
The three kinds of information defined so far (static, depth, and dynamic) can be kept
in memory buffers for later use. Each element of the static and dynamic buffers
contains three values, one per colour component; each element of the depth buffer
contains a single value. The size of each buffer is fixed, screen height times screen
width, i.e. a two-dimensional matrix. So, given a viewport coordinate (x, y), one can
obtain from the buffers the object id, its depth, and its dynamic information.
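A minimal sketch of such buffers (illustrative Python; the class and method names are invented for illustration and are not taken from the thesis implementation):

```python
class SyntheticVisionBuffers:
    """Three parallel 2D buffers, one element per viewport pixel:
    static colour id, normalised depth, and dynamic (motion) colour."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        # Initialised to: black id (no object), far depth, static motion.
        self.static = [[(0.0, 0.0, 0.0)] * width for _ in range(height)]
        self.depth = [[1.0] * width for _ in range(height)]
        self.dynamic = [[(0.0, 0.5, 1.0)] * width for _ in range(height)]

    def sample(self, x, y):
        """Everything known about viewport pixel (x, y):
        (object-id colour, depth in [0, 1], motion colour)."""
        return self.static[y][x], self.depth[y][x], self.dynamic[y][x]
```

The AI module then needs only this single lookup per pixel of interest, rather than three separate buffer reads.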

Remarks
We say that our vision system is 'perfect' because the character's sight is not affected
by occlusions caused by lighting conditions.
To make the dynamic system more realistic, it is possible to add some noise in order to
obtain less precise data. Why? Because humans, through their eyes, can only make
estimations (sometimes very good ones!); it is improbable that they could determine the
exact velocity of any given moving object.
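One hedged way to realise this remark is to perturb the sensed velocity before handing it to the brain module; the Gaussian noise model, the function name, and the sigma value are all illustrative assumptions, not part of the implementation:

```python
import random

def noisy_velocity(v, sigma=0.1):
    """Return a perturbed copy of a sensed velocity vector, so the NPC
    works with an estimate rather than the exact value."""
    return tuple(c + random.gauss(0.0, sigma) for c in v)

random.seed(42)                          # reproducible example
print(noisy_velocity((1.0, 0.0, 0.0)))   # close to, but not exactly, (1, 0, 0)
```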

Game Applications Analysis - 18




Figure 5. From left to right, static viewport, normal rendered viewport, and dynamic viewport.

The expected dynamic viewport is shown together with the static and normal viewports
in figure 5. See Appendix B for details of our synthetic vision implementation over
Fly3D [Fly01; Watt01; Watt02].


Brain Module: AI
Static and dynamic information about what the character is seeing at a given moment is
useful only if there is a brain that knows how to interpret those features, decide, and
act according to a given behaviour. Synthetic vision is a simple layer that abstracts
the world as seen through the character's eyes and represents it in a useful way. The
layer in charge of taking the information sensed by the synthetic vision module as
input, processing it, and acting on it, is what we call the Brain Module, or the AI
module.

AI Module
To demonstrate a possible usage of the described synthetic vision, an artificial
intelligence module was developed in an effort to give autonomous behaviour to an NPC
within an FPS game.
Due to its simplicity, the developed module could not be used in commercial games
without the addition of new features and behaviours, and refinements of the implemented
movements. It is basically intended to show how our synthetic vision approach could be
used, without resorting to deeply developed AI techniques.


FPS Definition
The main characteristics of FPS games are:

- The game takes place in a series of 3D modeled regions (interiors, exteriors, or a
combination of both environments) called levels.
- The player must fulfill different tasks in each level in order to be able to reach
the following one. That is to say, every level has a general goal, reaching the next
level, which is made up of specific goals such as: get the keys to open and walk
through the door, kill the level's boss, etc. It is also very common to find sub-goals
that are not necessary to achieve the general one, such as "getting (if possible) 100
gold units".
- The player can acquire different weapons during his advance through the game. In
general, he will have up to 10 different kinds of weapons at his disposal, most of them
with finite ammunition.
- The player has an energy or health level; as soon as it reaches 0, he dies.
- The player has to face enemies (NPC's) constantly, who will try to attack and kill
him, according to their characteristics, with weapons or in hand-to-hand combat.
- It is possible that some NPC's collaborate with the player.
- The player is able to gather items, called power-ups, that increase health or weapon
values, or give special powers for a limited time range.




- The player sees the world as if he were seeing it through the eyes of the character
that he is personifying, i.e., a first-person view.

It is recommended to browse the Internet a little to become more familiar with this
kind of game. Wolfenstein 3D [Wolf01] was one of the pioneers, whereas the Unreal
[Unre01] and Quake [Quak01] series are good examples.
Next, the basic characteristics of the FPS to be implemented with our synthetic vision
model will be described.

Main NPC
The FPS is inhabited by the main character to whom we will give an autonomous life.
His name is Bronto. He has two intrinsic properties: Health (H) and Weapon (W).

Although he has a weapon property, in our implementation he cannot shoot.
As an invariant,
H ∈ ℵ0, 0 ≤ H ≤ 100
Bronto’s health ranges between 0 and 100, natural numbers. Initially,
Hini = 100
The first rule that we have is:
H = 0 ⇒ Bronto dies
Something similar happens with the weapon property: it will follow the same invariant:
W ∈ ℵ0, 0 ≤ W ≤ 100
Bronto’s weapon value ranges between 0 and 100, natural numbers. Initially,
Wini = 100
But, in this case, the only meaning of W = 0 is that Bronto has no ammunition.
In addition to the fact that Bronto cannot shoot, he cannot be hit by enemy shots.
Given these two simplifications, we have decided that both health and weapon values
diminish linearly and discretely over time. That is to say, given:
t ∈ ℜ+0, the current time;
tw0 ∈ ℜ+0, the time from which the weapon value starts to diminish;
W0 ∈ ℵ, the weapon value at time tw0 (W0 = Wini initially);
tdw ∈ ℵ, the time interval in seconds between weapon decreases;
Dw ∈ ℵ, the weapon decreasing factor;
we define:
t = tw0 ⇒ W(t) = W0


W takes the value W0 at the initial counting time tw0.

k ∈ ℵ0, tw0 + k tdw ≤ t < tw0 + (k+1) tdw ⇒ W(t) = max(W0 – k Dw, 0)

For times greater than tw0, W decreases linearly in steps, losing Dw every tdw seconds,
until it reaches zero and remains at that value.
It is analogous for health. Given:
th0 ∈ ℜ+0, the time from which the health value starts to diminish;
H0 ∈ ℵ, the health value at time th0 (H0 = Hini initially);
tdh ∈ ℵ, the time interval in seconds between health decreases;
Dh ∈ ℵ, the health decreasing factor;
we define:
t = th0 ⇒ H(t) = H0
k ∈ ℵ0, th0 + k tdh ≤ t < th0 + (k+1) tdh ⇒ H(t) = max(H0 – k Dh, 0)
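These decay rules can be sketched in a few lines. The helper name and signature are illustrative; the numbers reproduce the example of figure 6 (W0 = 100, tdw = 4 s, Dw = 4; H0 = 100, tdh = 5 s, Dh = 3):

```python
def decayed(value0, t0, td, d, t):
    """Closed form of the decay rule: the property loses d units every
    td seconds after the reference time t0, and is clamped at zero."""
    k = int((t - t0) // td)          # completed td-second intervals
    return max(value0 - k * d, 0)

# Weapon: W0 = 100, tdw = 4, Dw = 4 -> reaches 0 at t = 100 s.
print(decayed(100, 0, 4, 4, 99))     # 4
print(decayed(100, 0, 4, 4, 100))    # 0
# Health: H0 = 100, tdh = 5, Dh = 3 -> reaches 0 at t = 170 s, Bronto dies.
print(decayed(100, 0, 5, 3, 169))    # 1
print(decayed(100, 0, 5, 3, 170))    # 0
```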
[Figure 6 chart: Health (H) and Ammo (W), values 0–100, plotted against Time (sec)
from 0 to 170.]
Figure 6. An example of the weapon and energy diminishing system: the game starts, then both properties decrease gradually.
The weapon value reaches 0 at 100 seconds, whereas health does so at 170 seconds; at that moment Bronto dies.

Taking as an example:
t0 = 0, start of game.
H0 = Hini = 100.
tdh = 5 seconds.
Dh = 3.

W0 = Wini = 100.
tdw = 4 seconds.
Dw = 4.


The situation described in figure 6 is produced with these values: Bronto's weapon
value diminishes gradually until it reaches 0 at 100 seconds, whereas the health value
reaches 0 at 170 seconds; at that instant Bronto dies.

Power-Ups
Initial health and weapon values, as well as the diminishing system, have already been
defined, and it has been specified that Bronto dies when his health reaches zero. We
now have to define some way of increasing Bronto's properties: power-ups.
Our FPS will have only two power-ups: Health (Energy) and Weapon (Ammunition or Ammo).
When Bronto passes over one of them, an event is triggered that increases the
corresponding property by a fixed amount:
Hafter = Hbefore + Ah
Wafter = Wbefore + Aw
where:
Ah ∈ ℵ is the amount added to the NPC's health property when he takes a health
power-up;
Aw ∈ ℵ is the amount added to the NPC's weapon property when he takes a weapon
power-up.
Besides increasing the corresponding property value, taking a power-up also resets tw0
or th0 (depending on the power-up taken) to the present time, and sets W0 = Wafter or
H0 = Hafter accordingly. This means that taking a power-up restarts the cycle by which
Bronto's property value is decreased.
Let’s take as an example the same one that was given during the previous section,
where:
t0 = 0, start of game.
H0 = Hini = 100.
tdh = 5 seconds.
Dh = 3.
W0 = Wini = 100.
tdw = 4 seconds.
Dw = 4.
Let’s add now:
Aw = 10.
Ah = 5.
And suppose that Bronto obtains:






- A weapon power-up at 37 seconds; its ammunition value increases from 64 to 74,
establishing a time tw0 of 37, from which the weapon value must be discounted every
tdw seconds.
- Another weapon power-up, this time at 43 seconds; the value increases from 70 to 80,
and tw0 is established again, now at 43.
- A health power-up at 156 seconds; the energy value increases from 7 to 12, and th0
is re-established to 156.
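Under the closed-form decay of the previous section (the helper and variable names are illustrative), these three power-up events reproduce the numbers quoted for figure 7:

```python
def decayed(value0, t0, td, d, t):
    # Same decay helper as before: lose d units every td seconds after t0.
    k = int((t - t0) // td)
    return max(value0 - k * d, 0)

# Weapon: W0 = 100, tdw = 4, Dw = 4, Aw = 10.
w = decayed(100, 0, 4, 4, 37)          # 64 just before the first power-up
w, tw0 = w + 10, 37                    # ammo power-up at 37 s: 64 -> 74
w = decayed(w, tw0, 4, 4, 43)          # 70 just before the second power-up
w, tw0 = w + 10, 43                    # ammo power-up at 43 s: 70 -> 80
print(decayed(w, tw0, 4, 4, 123))      # 0: the weapon runs out at 123 s

# Health: H0 = 100, tdh = 5, Dh = 3, Ah = 5.
h = decayed(100, 0, 5, 3, 156)         # 7 just before the health power-up
h, th0 = h + 5, 156                    # health power-up at 156 s: 7 -> 12
print(decayed(h, th0, 5, 3, 176))      # 0: Bronto dies at 176 s
```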
[Figure 7 chart: Health (H) and Ammo (W), values 0–100, plotted against time from 0 to
170 seconds.]

Figure 7. The same example as figure 6, but this time Bronto takes power-ups: two ammo power-ups at 37 and 43 seconds,
and one energy power-up at 156 seconds. The weapon value now reaches 0 at 123 seconds, whereas energy does so at 176
seconds, when Bronto dies.

Bronto's weapon value will first reach zero at 123 seconds, and he will die at 176
seconds, when his energy reaches zero. The situation is graphically represented in
figure 7.
The whole process is described in figure 8 by means of a state diagram. When the
game starts (Start Game event) initial values are established. When the game is
running (Game Running state) several events can arise: those that decrease Bronto’s
weapon and energy values, caused when the system reaches a given time, and those
that increase Bronto’s weapon and energy values, caused when a power-up is taken.
The event produced when Bronto reaches a zero value of energy makes the transition
to Bronto’s death.


[Figure 8 state diagram, transcribed:

- EVENT Start Game: SET tw0 = 0, th0 = 0, H0 = Hini, W0 = Wini; enter the GAME RUNNING
state.
- EVENT t = tw0 + k tdw, k ∈ ℵ: SET W = W – Dw.
- EVENT t = th0 + k tdh, k ∈ ℵ: SET H = H – Dh.
- EVENT Ammo Power-Up Taken: SET tw0 = t, W = W + Aw.
- EVENT Health Power-Up Taken: SET th0 = t, H = H + Ah.
- EVENT H = 0: transition from GAME RUNNING to BRONTO DIES.]
Figure 8. Events that affect and are affected by Bronto’s health and weapon values during the game.
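The event handling of figure 8 can be sketched as a small state machine; the class and method names are illustrative (the real module lives inside Fly3D), and clamping the power-up increments at 100 is an assumption taken from the 0 ≤ H, W ≤ 100 invariant:

```python
class Bronto:
    def __init__(self, h_ini=100, w_ini=100):   # EVENT: Start Game
        self.h, self.w = h_ini, w_ini
        self.alive = True                        # GAME RUNNING state

    def weapon_tick(self, dw=4):                 # EVENT: t = tw0 + k*tdw
        self.w = max(self.w - dw, 0)

    def health_tick(self, dh=3):                 # EVENT: t = th0 + k*tdh
        self.h = max(self.h - dh, 0)
        if self.h == 0:                          # EVENT: H = 0
            self.alive = False                   # BRONTO DIES

    def ammo_powerup(self, aw=10):               # EVENT: Ammo Power-Up Taken
        self.w = min(self.w + aw, 100)           # full model also resets tw0

    def health_powerup(self, ah=5):              # EVENT: Health Power-Up Taken
        self.h = min(self.h + ah, 100)           # full model also resets th0
```

Driving the two tick methods from timers at tdw- and tdh-second intervals reproduces the decay behaviour of the previous sections.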

Bronto Behaviour
Now we will describe the behaviour that Bronto assumes during the game, based on the
values of his health and weapon properties.
Being:
Hut ∈ ℵ, health upper threshold.
Hlt ∈ ℵ, health lower threshold.
Wut ∈ ℵ, weapon upper threshold.
Wlt ∈ ℵ, weapon lower threshold.
H and W the health and weapon properties' values, as they were defined in previous
sections.
So that:
0 < Hlt < Hut < 100
and
0 < Wlt < Wut < 100
Bronto will be in any of the following six possible states:
1. Walk Around (WA), when Hut ≤ H and Wut ≤ W. Hut and Wut denote thresholds such
that, when H and W are above them, Bronto has no specific objective and his behaviour
is reduced to walking without a fixed path or course.
