If you want to use a background image, you will need the following methods in
Background:
void setImage(Image2D image)
void setImageMode(int modeX, int modeY)
void setCrop(int cropX, int cropY, int width, int height)
Only RGB and RGBA formats are allowed for a background image, and the format
must match that of the rendering target. In other words, you can use an RGBA format
background image only when rendering into an M3G Image2D that has an alpha
channel; RGB is the only allowed format for MIDP rendering targets, as Canvas and
mutable Image objects never have an alpha channel. This restriction was incorporated to
save software M3G implementations from having to implement dedicated blitting func-
tions for each combination of formats. However, implementations today typically rely on
OpenGL ES texturing for drawing background images, making the restriction completely
unnecessary—if you wish, you can also easily implement your background as a textured
quad, a skybox, or any other suitable geometry, and side-step the whole issue.
The size and position of the background image are controlled with setCrop(int cropX,
int cropY, int width, int height). The point (cropX, cropY) in the image is placed at
the top left corner of the viewport, and the width × height pixel region extending right
and down from that point is scaled to fill the viewport.
The setImageMode function controls whether the background image should be tiled
or not, separately in either direction. Specifying BORDER fills areas outside of the image
with the specified background color, whereas REPEAT tiles the image ad infinitum. The
tiling modes can be different for X and Y directions.
Example: scrolling background
Pulling all this together, let us clear our QVGA screen so that the background image resides
at the top of the screen and is scrolled in the horizontal direction, while the area below the
image is cleared with a light green color and later filled by rendering some 3D content.
The code to do that is shown below, and the end result is shown in Figure 14.3.
// initialization
myBg = new Background();
myBg.setColor(0x00CCFFCC);
myBg.setImage(new Image2D(Image2D.RGB, 256, 128, myBgImage));
myBg.setImageMode(Background.REPEAT, Background.BORDER);
// per frame stuff: scroll the background horizontally.
// the screen is 240 pixels wide
cropX = (cropX+1) % 240;
myBg.setCrop(cropX, 18, 240, 320);
g3d.clear(myBg);
Figure 14.3: An illustration of what our example code for Background does.
It should be noted, however, that since the crop rectangle is specified in integers, it is
not possible to achieve entirely smooth scrolling or zooming at arbitrary speeds: there is
no way to address the image at sub-pixel precision. This is another limitation you can
easily overcome with textured quads, as sketched below.
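To make that concrete, here is one possible sketch, not taken from the original text: the
background becomes a screen-aligned textured quad, and because Texture2D is a
Transformable, its texture coordinates can be translated by an arbitrary floating-point
amount each frame. Names such as bgImage2D (a power-of-two Image2D), scrollU (a float
kept by the application), and g3d are assumptions, as is a parallel-projection camera
covering the unit square.

// one-time setup: a unit quad with texture coordinates
short[] quadPositions = { -1,-1,0,  1,-1,0,  -1,1,0,  1,1,0 };
short[] quadTexCoords = { 0,1,  1,1,  0,0,  1,0 };
VertexArray posArray = new VertexArray(4, 3, 2);
posArray.set(0, 4, quadPositions);
VertexArray texArray = new VertexArray(4, 2, 2);
texArray.set(0, 4, quadTexCoords);
VertexBuffer quad = new VertexBuffer();
quad.setPositions(posArray, 1.0f, null);
quad.setTexCoords(0, texArray, 1.0f, null);
IndexBuffer quadStrip = new TriangleStripArray(0, new int[] { 4 });
Texture2D bgTexture = new Texture2D(bgImage2D);
bgTexture.setWrapping(Texture2D.WRAP_REPEAT, Texture2D.WRAP_CLAMP);
CompositingMode bgCompositing = new CompositingMode();
bgCompositing.setDepthWriteEnable(false);   // let the 3D content draw over the quad
Appearance bgAppearance = new Appearance();
bgAppearance.setCompositingMode(bgCompositing);
bgAppearance.setTexture(0, bgTexture);

// per frame: scroll by any floating-point step and draw the quad first
scrollU += 0.003f;
bgTexture.setTranslation(scrollU, 0.f, 0.f);
g3d.render(quad, quadStrip, bgAppearance, null);   // null transform = identity

The quad is drawn first each frame, with depth writes disabled, so the 3D content
rendered afterward appears on top of it.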
Pitfall: Background images are allowed to have any size, but in practice, M3G imple-
mentations usually render them using OpenGL ES textures. This means that inter-
nally, background images are still subject to the same limitations as images used for
texturing. This may adversely affect the quality or performance of your backgrounds
if you use images that map poorly to the restrictions of the underlying renderer. We
strongly advise you to only use background images that could as well be used as texture
images on your target implementation; in other words, use power-of-two dimensions
and keep the size within the limits allowed for textures. The limit can be queried with
Graphics3D.getProperties.
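For example, the limit could be queried at run time roughly as follows; the variable
names are ours, but the "maxTextureDimension" property key is defined by the M3G
specification:

java.util.Hashtable props = Graphics3D.getProperties();
int maxTextureSize = ((Integer) props.get("maxTextureDimension")).intValue();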
14.4.2 Sprite3D
The name “sprite” originates from Commodore 64-era home computers that had special-
ized circuitry for drawing a movable block of pixels on the screen. That is almost exactly
what Sprite3D does, only with slightly more features and, in all cases we know of, with-
out the specialized circuits. Sprite3D takes an Image2D and draws it on the screen at
the projected position of its 3D location. The image can additionally be scaled with
distance, and different regions of the image can be selected to be displayed.
The Sprite3D class was originally introduced into M3G in order to allow fast 2D
primitives on software engines, and considerable effort was put into specifying how sprites
should function. In retrospect, this was largely a wasted effort, as it soon became evident
that all major implementations would have to be compatible with OpenGL ES, making
proprietary optimizations impractical. Sprite3D also turned out to be a nuisance to
implement using OpenGL ES textures, as its specification is not quite aligned with the
limitations on texture size in OpenGL ES. As a result, sprites remain something of a niche
feature in the API. They are little more than wrappers for textured quads, but they are still
available and do make common 2D effects somewhat easier to implement.
There are two main use cases for Sprite3D: you can use it as an impostor for 3D
objects, or for 2D overlays such as lighting effects or text labels on top of your 3D graph-
ics. For impostors, you can use a static image of a complex object and draw multiple
instances quickly. Alternatively, you can take advantage of the support for using a mutable
Image2D for a sprite, and re-render your 3D object into the impostor when the projec-
tion has changed by a significant amount. A simple lighting effect could be to draw a light
bloom around a light source: use a suitable bloom image, place your sprite at the location
of the light source, and enable additive blending for the sprite.
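As a rough sketch of the bloom effect (not from the original text; myBloomImage and
lightTransform are assumed to exist, the latter placing the sprite at the light source
in world space):

CompositingMode additive = new CompositingMode();
additive.setBlending(CompositingMode.ALPHA_ADD);   // additive blending
additive.setDepthWriteEnable(false);               // the glow should not occlude anything
Appearance bloomAppearance = new Appearance();
bloomAppearance.setCompositingMode(additive);
Sprite3D bloom = new Sprite3D(true, myBloomImage, bloomAppearance);
// draw after the rest of the scene, at the light source position
g3d.render(bloom, lightTransform);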
Performance tip: Do not confuse Sprite3D with point sprites in OpenGL ES.
Because each instance of Sprite3D incorporates its own transformation, it is
too slow for most use cases of point sprites. The scaling computations are also
more complex than for point sprites. Particle systems, for example, are far more
efficiently created in M3G by explicitly constructing quads to represent the particles.
The essential bit here is that you can then draw all the particles at once from a single
VertexBuffer, even though you have to animate them manually.
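A minimal sketch of that approach (all names are ours, and filling in the per-particle
corner positions is left to the application) might look as follows:

int numParticles = 32;
short[] corners = new short[numParticles * 4 * 3];    // 4 corners of 3 coordinates each
// ... write the corner positions of each particle quad into 'corners' here ...
VertexArray positions = new VertexArray(numParticles * 4, 3, 2);
positions.set(0, numParticles * 4, corners);
VertexBuffer particleVertices = new VertexBuffer();
particleVertices.setPositions(positions, 1.0f, null);
int[] stripLengths = new int[numParticles];           // one 4-vertex strip per particle
for (int i = 0; i < numParticles; ++i) stripLengths[i] = 4;
IndexBuffer particleStrips = new TriangleStripArray(0, stripLengths);
Mesh particleMesh = new Mesh(particleVertices, particleStrips, particleAppearance);

Animating the system then amounts to rewriting the position array with VertexArray.set
each frame, while the whole mesh is still drawn in one call.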
Sprite functions
Let us create a scaled sprite for starters:
Sprite3D mySprite = new Sprite3D(true, mySpriteImage, myAppearance);
The first parameter tells whether our sprite is scaled or not—in our example, we specified
true for a scaled sprite. A scaled sprite is drawn like a unit quad filled with the sprite

image, centered about the 3D location of the sprite, and facing the camera. An unscaled
sprite is otherwise similar, but drawn with a 1:1 match between the sprite image and
the screen pixels, regardless of the distance between the camera and the sprite. The depth
of the sprite is, however, equal to the depth of its 3D position for both scaled and
unscaled sprites.
As in the Texture2D class, you have to specify the sprite image in the constructor
but can change it with the setImage function later on if you need to. Unlike texture
images, however, Sprite3D imposes no restrictions on the image dimensions—any
image will do.
Performance tip: As with background images, most implementations draw Sprite3D
using textured quads, and in practice the limits of texture images apply. To
maximize performance and quality, stick to the texture image restrictions with sprites
as well.
You can specify only a subset of the image to be shown with the function setCrop(int
x, int y, int width, int height). The image rectangle of width by height pixels starting
at (x, y), relative to the upper left corner of the image, is used to draw the sprite. Note
that for scaled sprites, this only changes the contents of the projected rectangle, whereas
for unscaled sprites, the on-screen size is changed to match the crop rectangle size. Addi-
tionally, you can mirror the image about either or both of the X and Y axes by specifying
a negative width or height.
One thing you can do with setCrop is animate a sprite. For example, assume that we
have a set of eight animation frames, 32 × 32 pixels each. If we put those into a 256 × 32
Image2D, called myAnimationFrames in this example, we can easily flip between
the frames to animate the sprite:
// Create the sprite
Sprite3D mySprite = new Sprite3D(true, myAnimationFrames, myAppearance);
int frame = 0;
// Animation (per frame)
mySprite.setCrop(frame * 32, 0, 32, 32);
frame = (frame + 1) % 8;
Compositing sprites
Sprite rendering is also controlled by an Appearance object. The only Appearance
attributes that concern sprites are CompositingMode, Fog, and the layer index.
All of them function exactly as with mesh rendering, whereas all other Appearance
components are simply ignored. You must give the Appearance object to the
constructor, too, but unlike the image, you can specify a null Appearance
initially and set it later with the setAppearance function. The one thing you will
often want to include into your sprite appearance is a CompositingMode with
setAlphaThreshold(0.5f). This lets you set the shape of the sprite with the
alpha channel, as pixels below the alpha threshold are discarded when rendering. In
our example above, we should add
myAppearance.setCompositingMode(new CompositingMode());
myAppearance.getCompositingMode().setAlphaThreshold(0.5f);
into the initialization code.
To draw your sprite, call Graphics3D.render(mySprite, myTransform), where
myTransform is the world-space transformation for your sprite. The attached
Appearance object is used for the shading.
Pitfall: Remember that sprites reside at their true depth in the 3D scene. If you want
your sprite as a 2D overlay on top of the 3D graphics, make sure you draw it last with
depth testing disabled.
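For instance (a sketch with assumed names, not from the original text), an overlay
appearance can be set up like this and the sprite rendered after everything else:

CompositingMode overlayMode = new CompositingMode();
overlayMode.setDepthTestEnable(false);    // always pass, regardless of scene depth
overlayMode.setDepthWriteEnable(false);   // and leave the depth buffer untouched
overlayMode.setAlphaThreshold(0.5f);
myOverlayAppearance.setCompositingMode(overlayMode);
// ... render the 3D scene first, then the overlay ...
g3d.render(myOverlaySprite, myOverlayTransform);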
CHAPTER 15
THE M3G SCENE GRAPH
M3G has been designed from the ground up to be a retained-mode scene graph API.
While low-level rendering is all fine and dandy, to really make the most of M3G you will
want to take advantage of the scene graph functionality. In this chapter we will take what
we learned about low-level rendering so far, and see how that fits into the concept of scene
graphs.

While the scene graph is an elementary component of M3G that you really should under-
stand, you have the freedom to use as much or as little of it as you want. Immediate and
retained mode rendering in M3G are not mutually exclusive—on the contrary, M3G has
been intentionally designed to let you mix the two as you like.
For background information on scene graphs, as well as some insight into the actual
design process of the M3G scene graph model, please refer to Chapter 5.
15.1 SCENE GRAPH BASICS: Node, Group, AND World
Scene graphs are built from Node objects. Node is an abstract base class with certain
common functions and properties. Each node has a transformation relative to its parent,
and rendering can be enabled and disabled individually for each node. There is also an
alpha factor that you can use to control the transparency of each node or group of nodes.
The basic Node class is specialized into different scene graph objects, of which Camera,
Light, and Sprite3D are already familiar. In this chapter, we will introduce Group,
Mesh, and World. With these classes you can create a simple scene graph:
Mesh myCarBody, myCarWheel[];
float wheelX[], wheelY[], wheelZ[];
Light mySpotLight;
Background myBackground;
// Some initialization code would go here (omitted for brevity)
World myCarScene = new World();
myCarScene.setBackground(myBackground);
Group myCar = new Group();
myCar.addChild(myCarBody);
for (int i = 0; i < 4; ++i) {
myCar.addChild(myCarWheel[i]);
myCarWheel[i].setTranslation(wheelX[i], wheelY[i], wheelZ[i]);
}
myCarScene.addChild(myCar);

myCar.setScale(0.1f, 0.1f, 0.1f);
myCar.setOrientation(-30.f, 0.f, 1.f, 0.f);
myCarScene.addChild(mySpotLight);
mySpotLight.setTranslation(10.f, 20.f, 30.f);
mySpotLight.setOrientation(40.f, -1.f, 1.f, 0.f);
Camera camera = new Camera();
myCarScene.addChild(camera);
camera.setTranslation(0.f, 3.f, 20.f);
myCarScene.setActiveCamera(camera);
Note how, unlike in immediate mode, the node transformations are directly used in
the scene graph—there is usually no need to use separate Transform objects to move
nodes around.
Groups and inherited properties
One special kind of Node is Group. This lets you group several Node objects together
and treat them as one—for example, it lets you transform or animate the entire group
instead of having to do that for each object separately. In other words, grouping allows par-
titioning the scene into logical entities. An obvious example is creating composite objects,
such as a car with individual wheels; another could be putting all objects of a particular
type into a group so that you can apply some operation to all of them at once.
We already mentioned that node transformations are relative to the parent node. Sim-
ilarly, flags enabling rendering and alpha factor values from groups are cumulatively
inherited by their children: disabling rendering of a group will also disable rendering of its
children, and putting a half-opaque node inside a half-opaque group will result in a one-
quarter-opaque node. Rendering is enabled or disabled via setRenderingEnable,
and alpha factors can be set with setAlphaFactor. The alpha factor is effectively mul-
tiplied into the post-lighting vertex alpha value of Mesh objects, and the per-pixel alpha
value of Sprite3D objects.
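As a small illustration (the names are ours), the cumulative rules mean that a half-opaque
node inside a half-opaque group is drawn at one-quarter opacity, and disabling the group
hides the node as well:

Group fadeGroup = new Group();
fadeGroup.addChild(fadedMesh);
fadeGroup.setAlphaFactor(0.5f);
fadedMesh.setAlphaFactor(0.5f);       // effective alpha factor is 0.5 * 0.5 = 0.25
fadeGroup.setRenderingEnable(false);  // fadedMesh is no longer rendered either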
Pitfall: Alpha factor is a poor fit with per-vertex colors, as it has to be premultiplied into
them in certain cases. In particular, avoid using alpha factors other than 0 and 1 in
combination with per-vertex alpha and texturing. Depending on the implementation,
there may be other performance bottlenecks triggered by alpha factors as well.
We will show some more use cases for groups later, but for now, it suffices to say that to
add a child Node into a Group, you call the function
void addChild(Node child).
The group into which you add child is then said to be the parent of the child. You can
query the parent of any Node with the getParent function. Since M3G does not allow
loops in the scene graph, each node can only have a single parent, and consequently child
must have a null parent when calling addChild. You are still free to reassign nodes
into groups as you please—just call removeChild(Node child) to remove a node from
its current group prior to adding it into a different one.
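For example (a sketch with hypothetical names), reassigning a node could look like this:

Group oldParent = (Group) movingNode.getParent();   // assuming the parent is a Group
if (oldParent != null) {
    oldParent.removeChild(movingNode);
}
newGroup.addChild(movingNode);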
World
There is also a special kind of Group called World. It is special in one particular way:
it cannot be a child of any other Node. World serves as the root of your scene graph,
a container for all other scene graph objects, and it defines Camera and Background
objects used by your entire scene graph. It is possible to use scene graphs without using
World, but World is what ultimately allows you to draw all of it with what is perhaps
the single most powerful command in M3G: render(myWorld). This call clears the
screen with the selected background, sets up the currently active camera and lights, and
renders the entire scene, all in a single operation.
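On a MIDP Canvas, a frame then boils down to something like the following sketch, where
g is the Graphics object passed to the paint method (an assumption; any valid rendering
target works):

Graphics3D g3d = Graphics3D.getInstance();
g3d.bindTarget(g);
try {
    g3d.render(myWorld);   // clear, set up the camera and lights, draw the whole scene
} finally {
    g3d.releaseTarget();
}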
You should now have a fairly good idea about the basics of using scene graphs in M3G. In
the rest of this chapter, we will look at the new classes in detail, as well as introduce some
advanced scene graph concepts.
15.2 Mesh OBJECTS
In immediate mode, we used VertexArray, VertexBuffer, IndexBuffer, and
Appearance to build our polygon meshes. The scene graph equivalent is the Mesh
class. It takes exactly the same data as one would use in immediate mode rendering:
myMesh = new Mesh(myVertices, myTriangles, myAppearance);
The parameters above are, respectively, a VertexBuffer, an IndexBuffer, and an
Appearance. In essence, the Mesh object serves as a container for your immediate
mode render call. You can place it in the scene graph, move, rotate, and animate it,
and have it drawn with the rest of the scene.
You can also group multiple batches of triangles, or submeshes in M3G parlance, into a
single Mesh object:
IndexBuffer[] mySubMeshes;
Appearance[] myAppearances;
// Set up the above arrays here
myMesh2 = new Mesh(myVertices, mySubMeshes, myAppearances);
This lets you create composite objects with patches having different rendering proper-
ties. It also allows multi-pass rendering for simulating more complex material properties,
especially when combined with the layer index mechanism of Appearance. We will
discuss that later in Section 15.4.
Performance tip: The number of submeshes is best kept to a minimum, as rendering
each submesh typically has some fixed amount of overhead. When rendering lots of
triangles in small batches, the individual low-level drawing calls that M3G will have to
perform internally may become the main bottleneck of the whole system.
The submeshes themselves cannot be changed once the mesh is created, but you can
use setAppearance to change the Appearance of each submesh. This lets you
change the material properties on the fly, or to exclude the submesh from rendering alto-
gether by setting its appearance to null. The submeshes can share vertices across the
entire mesh, so you can also, for example, represent different levels of detail with dedi-
cated index buffers and use setAppearance to control which LOD level gets rendered:
// Initialization: create an LOD Mesh.
VertexBuffer vertices;
IndexBuffer highDetail, mediumDetail, lowDetail;
// (initialization of buffers omitted; each index buffer should
// contain successively fewer polygons)
IndexBuffer[] triangleLODs = { highDetail, mediumDetail, lowDetail };
static final float[] maxLODDistance = { 10.f, 20.f, 40.f };

Mesh myLODMesh = new Mesh(vertices, triangleLODs, null);
Appearance myMeshAppearance = new Appearance();
// Rendering time: select the LOD to draw based on some distance
// metric, called "distanceToMesh" here. Note that the mesh will
// not be drawn at all when exceeding the threshold distance for the
// lowest level of detail.
int lod = 0;
for (int i = 0; i < 3; ++i) {
myLODMesh.setAppearance(i, null);
if (distanceToMesh > maxLODDistance[lod]) {
++lod;
}
}
if (lod < 3) {
myLODMesh.setAppearance(lod, myMeshAppearance);
}
Data instancing
All the data you put into a Mesh can be shared by several Mesh instances. While you
cannot directly create several instances of the same Mesh, you can duplicate it as you
please without worrying about excessive memory usage. As an example, assume that you
want to create a new instance of myMesh, but with a different material color. A shallow
copy is made by default, so you can change just the properties you want to:
// duplicate() returns Object3D, so casts are needed; index 0 refers to the
// only submesh of myMesh.
Mesh copyMesh = (Mesh) myMesh.duplicate();
copyMesh.setAppearance(0, (Appearance) copyMesh.getAppearance(0).duplicate());
Material copyMaterial =
    (Material) copyMesh.getAppearance(0).getMaterial().duplicate();
copyMesh.getAppearance(0).setMaterial(copyMaterial);
copyMaterial.setColor(Material.AMBIENT|Material.DIFFUSE,
    0xFF88FF44);

In this example, the Appearance object and its Material are duplicated so that they
can be changed without affecting the original, but any other Appearance components
are shared between the two meshes.
There are also two subclasses of the basic Mesh: MorphingMesh and SkinnedMesh,
used for animating your meshes. We will return to them in Chapter 16.
Performance tip: Make sure that you only include the data you really need in your
Mesh objects. In particular, some content authoring tools may include vertex normals
by default, even if you do not intend to use lighting. This can get expensive when skin-
ning is used, as M3G may end up doing unnecessary work transforming the normal
vectors that never get used. It is possible for the implementation to detect the case and
skip processing the normals when lighting is not enabled, but it is equally likely that the
implementation simply assumes that any data you have supplied is needed. In any case,
