3D Graphics with OpenGL ES and M3G (Part 32)

Performance tip: On hardware-accelerated devices, it is a good idea to clear the color
buffer and depth buffer completely and in one go, even if the whole screen will be
redrawn. Various hardware optimizations can only be enabled when starting from a
clean slate, and some devices can clear the screen by just flipping a bit. Even with
software implementations, clearing everything is typically faster than clearing a slightly
smaller viewport.
Note that the viewport need not lie inside the render target; it can even be larger than
the render target. The view that you render is automatically scaled to fit the viewport.
For example, if you have a viewport of 1024 by 1024 pixels on a QVGA screen, you will
only see about 7% of the rendered image (the nonvisible parts are not really rendered, of
course, so there is no performance penalty); see the code example in Section 13.1.4. The
maximum size allowed for the viewport does not depend on the type of rendering target,
but only on the implementation. All implementations are required to support viewports
up to 256 by 256 pixels, but in practice the upper bound is 1024 by 1024 or higher. The
exact limit can be queried from Graphics3D.getProperties.
Pitfall: Contrary to OpenGL ES, there is no separate function for setting the scissor
rectangle (see Section 3.5). Instead, the scissor rectangle is implicitly defined as the
intersection of the viewport and the Graphics clipping rectangle.
A concept closely related to the viewport is the depth range, set by
setDepthRange(float near, float far), where near and far are in the range [0, 1].
Similar to the viewport, the depth range also defines a mapping from normalized device
coordinates to screen coordinates, only this time the screen coordinates are depth values
that lie in the [0, 1] range. Section 2.6 gives insight on the depth range and how it can be
used to make better use of the depth buffer resolution or to speed up your application.
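As a minimal sketch of the speed-up idea, a common trick is to reserve the front of the
depth range for an object that must never cut into the scene, such as a first-person gun,
and the rest for the world itself; myGun and myGunTransform are assumed names here:

// Draw the scene into the back 90% of the depth range, then the gun
// into the front 10% so it can never intersect the scene geometry.
g3d.setDepthRange(0.1f, 1.0f);
g3d.render(myWorld);
g3d.setDepthRange(0.0f, 0.1f);
g3d.render(myGun, myGunTransform);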
13.1.3 RENDERING
As we move toward more ambitious goals than merely clearing the screen, the next
step is to render some 3D content. For simplicity, let us just assume that we have mag-
ically come into possession of a complete 3D scene that is all set up for rendering.
In case of M3G, this means that we have a World object, which is the root node of
the scene graph and includes by reference all the cameras, lights, and polygon meshes
that we need. To render a full-screen view of the world, all we need to do within the
bindTarget-releaseTarget block is this:
g3d.render(myWorld);
This takes care of clearing the depth buffer and color buffer, setting up the camera and
lights, and finally rendering everything that there is to render. This is called retained-mode
rendering, because all the information necessary for rendering is retained by the World
and its descendants in the scene graph. In the immediate mode, you would first clear
the screen, then set up the camera and lights, and finally draw your meshes one by one
in a loop.
The retained mode and immediate mode are designed so that you can easily mix and
match them in the same application. Although the retained mode has less overhead on
the Java side, and is generally recommended, it may sometimes be more convenient to
handle overlays, particle effects, or the player character, for instance, separately from the
rest of the scene. To ease the transition from retained mode to immediate mode at the end
of the frame, the camera and lights of the World are automatically set up as the current
camera and lights in Graphics3D, overwriting the previous settings.
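As a sketch of such mixing, a frame might render a retained-mode world and then an
immediate-mode particle overlay on top of it; particleVertices, particleIndices,
particleAppearance, and particleTransform are assumed names:

g3d.bindTarget(graphics);
// Retained mode: clears the buffers, draws the whole scene, and makes
// the World's camera and lights current in Graphics3D.
g3d.render(myWorld);
// Immediate mode: the camera set up by the World is still active, so
// extra geometry can be drawn directly on top of the scene.
g3d.render(particleVertices, particleIndices,
           particleAppearance, particleTransform);
g3d.releaseTarget();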
The projection matrix (see Chapter 2) is defined in a Camera object, which in turn is
attached to Graphics3D using setCamera(Camera camera, Transform transform).
The latter parameter specifies the transformation from camera space, also known
as eye space or view space, into world space. The Camera class is described in detail in
Section 14.3.1. For now, it suffices to say that it allows you to set an arbitrary 4 × 4 matrix,
but also provides convenient methods for defining the typical perspective and parallel
projections. The following example defines a perspective projection with a 60-degree
vertical field of view and the same aspect ratio as the Canvas that we are rendering to:
Camera camera = new Camera();
float width = myCanvas.getWidth();
float height = myCanvas.getHeight();

camera.setPerspective(60.0f, width/height, 10.0f, 500.0f);
g3d.setCamera(camera, null);
Note that we call setCamera with the transform parameter set to null. As a general
principle in M3G, a null transformation is treated as identity, which in this case implies
that the camera is sitting at the world-space origin, looking toward the negative Z axis
with Y pointing up.
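If the camera needs to be elsewhere, you can pass in an explicit camera-to-world
transformation instead; a minimal sketch, with the position and tilt chosen arbitrarily:

Transform cameraToWorld = new Transform();
cameraToWorld.postTranslate(0f, 2f, 10f);    // place the camera at (0, 2, 10)
cameraToWorld.postRotate(-10f, 1f, 0f, 0f);  // tilt it 10 degrees downward
g3d.setCamera(camera, cameraToWorld);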
Light sources are set up similarly to the camera, using addLight(Light light, Trans-
form transform). The transform parameter again specifies the transformation from local
coordinates to world space. Lighting is discussed in Section 14.3.2, but for the sake of
illustration, let us set up a single directional white light that shines in the direction at
which our camera is pointing:
Light light = new Light();
g3d.addLight(light, null);
Now that the camera and lights are all set up, we can proceed with rendering. There are
three different render methods in immediate mode, one having a higher level of
abstraction than the other two. The high-level method render(Node node, Transform
transform) draws an individual object or scene graph branch. You can go as far as
rendering an entire World with it, as long as the camera and lights are properly set up in
Graphics3D. For instance, viewing myWorld with the camera that we just placed at
the world space origin is as simple as this:
g3d.render(myWorld, null);
Of course, the typical way of using this method is to draw individual meshes rather
than entire scenes, but that decision is up to you. The low-level render methods, on
the other hand, are restricted to drawing a single triangle mesh. The mesh is defined
by a vertex buffer, an index buffer, and an appearance. As with the camera and lights, a
transformation from model space to world space must be given as the final parameter:
g3d.render(myVertices, myIndices, myAppearance, myTransform);
The other render variant is similar, but takes in an integer scope mask as an additional
parameter. The scope mask is bitwise-ANDed with the corresponding mask of the current
camera, and the mesh is rendered if and only if the result is non-zero. The same applies
to lights. The scope mask is discussed further in Chapter 15, as it is more useful in retained
mode than in immediate mode.
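A hedged sketch of that variant, assuming a mesh tagged as belonging to layer 2
(bitmask 0x2) and a camera that accepts layers 1 and 2:

camera.setScope(0x3);  // the camera sees scope bits 0x1 and 0x2
g3d.setCamera(camera, null);
// This render variant takes the scope mask as its last parameter;
// the mesh is drawn because 0x3 & 0x2 is non-zero.
g3d.render(myVertices, myIndices, myAppearance, myTransform, 0x2);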
13.1.4 STATIC PROPERTIES
We mentioned in Section 12.2 that there is a static getter for retrieving implementation-
specific information, such as whether antialiasing is supported. This special getter is
defined in Graphics3D, and is called getProperties. It returns a
java.util.Hashtable that contains Integer and Boolean values keyed by Strings.
The static properties, along with some helpful notes, are listed in Table 13.1. To illustrate
the use of static properties, let us create a viewport that is as large as the implementation
can support, and use it to zoom in on a high-resolution rendering of myWorld:
Hashtable properties = Graphics3D.getProperties();
maxViewport =
    ((Integer)properties.get("maxViewportDimension")).intValue();

g3d.bindTarget(graphics, true, hints);
int topLeftX = -(maxViewport - graphics.getClipWidth())/2;
int topLeftY = -(maxViewport - graphics.getClipHeight())/2;
g3d.setViewport(topLeftX, topLeftY, maxViewport, maxViewport);
g3d.render(myWorld);
g3d.releaseTarget();
We first query for maxViewportDimension from the Hashtable. The value is
returned as a java.lang.Object, which we need to cast into an Integer and
then convert into a primitive int before we can use it in computations. Later on, in the
paint method, we set the viewport to its maximum size, so that our Canvas lies at
its center. Assuming a QVGA screen and a 1024-pixel-square viewport, we would have
a zoom factor of about 14. The zoomed-in view can be easily panned by adjusting the
top-left X and Y.
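Panning could then be as simple as offsetting the viewport origin before rendering;
panX and panY are hypothetical pixel offsets updated from key input:

g3d.setViewport(topLeftX + panX, topLeftY + panY,
                maxViewport, maxViewport);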
Table 13.1: The system properties contained in the Hashtable returned by
Graphics3D.getProperties. There may be other properties as well, but they are not
standardized.
Key (String) Value Notes
supportAntialiasing Boolean true on some hardware-accelerated devices
supportTrueColor Boolean false on all devices that we know of
supportDithering Boolean false on all devices that we know of
supportMipmapping Boolean false on surprisingly many devices
supportPerspectiveCorrection Boolean true on all devices, but quality varies
supportLocalCameraLighting Boolean false on almost all devices
maxLights Integer ≥ 8 typically 8
maxViewportWidth Integer ≥ 256 typically 256 or 1024; M3G 1.1 only
maxViewportHeight Integer ≥ 256 typically 256 or 1024; M3G 1.1 only
maxViewportDimension Integer ≥ 256 typically 256 or 1024
maxTextureDimension Integer ≥ 256 typically 256 or 1024
maxSpriteCropDimension Integer ≥ 256 typically 256 or 1024
maxTransformsPerVertex Integer ≥ 2 typically 2, 3, or 4
numTextureUnits Integer ≥ 1 typically 2
13.2 Image2D
There are a few cases where M3G deals with 2D image data. Texturing, sprites, and
background images need images as sources, and rendering to any of them is also
supported.
Image2D, as the name suggests, stores a 2D array of image data. It is similar in many
respects to the javax.microedition.lcdui.Image class, but the important dif-
ference is that Image2D objects are fully managed by M3G. This lets M3G implementa-
tions achieve better performance, as there is no need to synchronize with the 2D drawing
functions in MIDP.
Similarly to the MIDP Image, an Image2D object can be either mutable or immutable.
To create an immutable image, you must supply the image data in the constructor:
Image2D(int format, int width, int height, byte[] image)
The format parameter specifies the type of the image data: it can be one of ALPHA,
LUMINANCE, LUMINANCE_ALPHA, RGB, and RGBA. The width and height parameters
determine the size of the image, and the image array contains data for a total of
width × height pixels. The layout of each pixel is determined by format: each image
component takes one byte and the components are interleaved. For example, the data for
a LUMINANCE_ALPHA image would consist of two bytes giving the luminance and
alpha of the first pixel, followed by two bytes giving the luminance and alpha of
the second pixel, and so on. The pixels are ordered top-down and left to right, i.e.,
the first width pixels provide the topmost row of the image starting from the left.
Upon calling the constructor, the data is copied into internal memory allocated by
M3G, allowing you to discard or reuse the source array. Note that while the image is
input upside-down compared to OpenGL ES (Section 9.2.2), the t texture coordinate
is similarly reversed, so the net effect is that you can use the same texture images
and coordinates on both OpenGL ES and M3G.
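For illustration, a minimal sketch that builds a 2 × 2 LUMINANCE_ALPHA checkerboard
with opaque white and fully transparent black texels:

byte[] texels = {
    (byte)0xFF, (byte)0xFF,  0x00, 0x00,  // top row: white/opaque, black/clear
    0x00, 0x00,  (byte)0xFF, (byte)0xFF   // bottom row: black/clear, white/opaque
};
Image2D checker = new Image2D(Image2D.LUMINANCE_ALPHA, 2, 2, texels);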
Unfortunately, there is no support in M3G for packed image formats, such as RGB565.
This is partially because OpenGL ES does not give any guarantees regarding the internal
color depth of a texture image, but also because the image formats were intentionally
kept few and simple. In retrospect, being able to input the image data in a packed format
would have been useful in its own right, regardless of what happens when the image is
sent to OpenGL ES.
As a form of image compression, you can also create a paletted image:
Image2D(int format, int width, int height, byte[] image, byte[] palette)
Here, the only difference is that the image array contains one-byte indices into the palette
array, which stores up to 256 color values. The layout of the color values is again as
indicated by format. There is no guarantee that the implementation will internally maintain
the image in the paletted format, though.
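As a sketch, the following builds a 256-entry grayscale RGB palette, so that each pixel
index doubles as its gray level; the pixel indices are left zeroed for brevity:

byte[] palette = new byte[256 * 3];             // 256 RGB entries, 3 bytes each
for (int i = 0; i < 256; i++) {
    palette[i*3] = palette[i*3 + 1] = palette[i*3 + 2] = (byte)i;
}
byte[] pixels = new byte[64 * 64];              // one palette index per pixel
Image2D paletted = new Image2D(Image2D.RGB, 64, 64, pixels, palette);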
Pitfall: The amount of memory that an Image2D consumes is hard to predict.
Depending on the device, non-palettized RGB and RGBA images may be stored at 16
or 32 bits per pixel, while palettized images are sometimes expanded from 8 bpp to
16 or 32 bpp. Some implementations always generate the mipmap pyramid, consuming
33% extra memory. Some devices need to store two copies of each image: one in
the GL driver, the other on the M3G side. Finally, all or part of this memory may be
allocated from somewhere other than the Java heap. This means that you can run out
of memory even if the Java heap has plenty of free space! You can test whether this is
the problem by trying smaller images. As for remedies, specific texture formats may be
more space-efficient than others, but you should refer to the developer pages of the
device manufacturers for details.
A third constructor lets you copy the data from a MIDP Image:
Image2D(int format, java.lang.Object image)
Note that the destination format is explicitly specified. The source format is either RGB
or RGBA, for mutable and immutable MIDP images, respectively. Upon copying the
data, M3G automatically converts it from the source format into the destination format.
As a general rule, the conversion happens by copying the respective components of the
source image and setting any missing components to 1.0 (or 0xFF for 8-bit colors). A cou-
ple of special cases deserve to be mentioned. When converting an RGB or RGBA source
image into LUMINANCE or LUMINANCE_ALPHA, the luminance channel is obtained
by converting the RGB values into grayscale. A similar conversion is done when convert-
ing an RGB image into ALPHA. This lets you read an alpha mask from a regular PNG
or JPEG image through Image.createImage, or create one with the 2D drawing
functions of MIDP, for example.
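For instance, assuming a grayscale mask shipped in the JAR as /mask.png (a
hypothetical file name), the conversion to an alpha-only image is a one-liner:

Image midpImage = Image.createImage("/mask.png");           // RGB source
Image2D alphaMask = new Image2D(Image2D.ALPHA, midpImage);  // grayscale becomes alpha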
Often the most convenient way to create an Image2D is to load it from a file. You can
do that with the Loader, as discussed in Section 13.5. All implementations are required
to support the M3G and PNG formats, but JPEG is often supported as well.⁴ Loading
an image file yields a new Image2D whose format matches that of the image stored in
the file. JPEG can do both color and grayscale, yielding the internal formats RGB and
LUMINANCE, respectively, but has no concept of transparency or alpha. PNG supports
all of the Image2D formats except for ALPHA. It has a palettized format, too, but
unfortunately the on-device PNG loaders tend to expand such data into raw RGB or RGBA
before it ever reaches the Image2D. M3G files obviously support all the available
formats, including those with a palette.
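A minimal sketch of loading, assuming a texture.png bundled in the JAR; Loader.load
returns the root objects of the file, which for a plain image file is the Image2D itself:

Object3D[] roots = Loader.load("/texture.png");  // throws IOException
Image2D textureImage = (Image2D)roots[0];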
Pitfall: The various forms of transparency supported by PNG are hard to get right.
For example, the M3G loader in some early Nokia models (e.g., the 6630) does not
support any form of PNG transparency, whereas some later models (e.g., the 6680)
support the alpha channel but not color-keying. Possible workarounds include using
Image.createImage or switching from PNG files to M3G files. These issues have
been resolved in M3G 1.1; see Section 12.3.
Finally, you can create a mutable Image2D:
Image2D(int format, int width, int height)
The image is initialized to opaque white by default. It can be subsequently modified by
using set(int x, int y, int width, int height, byte[] pixels). This method copies
a rectangle of width by height pixels into the image, starting at the pixel at (x, y) and
proceeding to the right and down. The origin for the Image2D is in its top left corner.
A mutable Image2D can also be bound to Graphics3D as a rendering target. The
image can still be used like an immutable Image2D. This lets you, for example, render
dynamic reflections or create feedback effects.
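As a sketch of the feedback idea, you might render a reflected view of the world into a
mutable image each frame, and use that image as a texture when drawing the main view;
error handling and camera setup are omitted:

Image2D mirror = new Image2D(Image2D.RGB, 256, 256);  // mutable render target
g3d.bindTarget(mirror);
g3d.render(myWorld);        // draw the reflected view into the image
g3d.releaseTarget();
// 'mirror' can now serve as a texture image, e.g. via new Texture2D(mirror),
// when rendering the main view to the screen.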
⁴ JPEG support is in fact required by the Mobile Service Architecture (MSA) specification, also known as
JSR 248. MSA is an umbrella JSR that aims to unify the Java ME platform.
Table 13.2: The available Image2D formats and their capabilities. Several of these cases
are not dictated by the specification, but reflect the capabilities of most devices.
Mipmapping is entirely optional, and palettized images may be silently expanded into the
corresponding raw formats, typically RGB. Most devices support mipmapping and
palettized images otherwise, but will not generate mipmaps for palettized textures, nor
load a palettized PNG without expanding it. There are some devices that can do better,
though. Finally, note that JPEG does not support the alpha channel, and that PNG does
not support images with only an alpha channel.

Format      LoadM3G  LoadPNG  LoadJPEG  CopyImage  RenderTarget  Background  Mutable  Texture  Mipmap  Sprite
ALPHA          ✓        ✖        ✖         ✓           ✖            ✖          ✓        ✓       ✓       ✓
LUMINANCE      ✓        ✓        ✓         ✓           ✖            ✖          ✓        ✓       ✓       ✓
LUM_ALPHA      ✓        ✓        ✖         ✓           ✖            ✖          ✓        ✓       ✓       ✓
RGB            ✓        ✓        ✓         ✓           ✓            ✓          ✓        ✓       ✓       ✓
RGBA           ✓        ✓        ✖         ✓           ✓            ✓          ✓        ✓       ✓       ✓
Palette        ✓        ✓        ✖         ✖           ✖            ✖          ✖        ✓       ✓       ✓
Performance tip: Beware that updating an Image2D, whether done by rendering or
through the setter, can be a very costly operation. For example, the internal format
and layout of the image may not be the same as in the set method, requiring heavy
conversion and pixel reordering. If your frame rate or memory usage on a particular
device is not what you would expect, try using immutable images only.
While Image2D is a general-purpose class as such, there are various restrictions on what
kind of images can be used for a specific purpose. For example, textures must have power-
of-two dimensions, and render targets can only be in RGB or RGBA formats. Table 13.2
summarizes the capabilities and restrictions of the different Image2D formats.
13.3 MATRICES AND TRANSFORMATIONS
One of the most frequently asked questions about M3G is the difference between
Transform and Transformable. The short answer is that Transform is a sim-
ple container for a 4 × 4 matrix with no inherent meaning, essentially a float array
wrapped into an object, whereas Transformable stores such a matrix in a compo-
nentized, animatable form, and for a particular purpose: constructing the modelview
matrix or the texture matrix. The rest of this section provides the long answer.
13.3.1 Transform
Transform stores an arbitrary 4 × 4 matrix and defines a set of basic utility functions
for operating on such matrices. You can initialize a Transform to identity, copy it in
from another Transform, or copy it from a float[] in row-major order (note that
this is different from OpenGL ES, which uses the unintuitive column-major ordering).
setIdentity resets a Transform back to its default state, facilitating object reuse.
Creating a matrix
To give an example, the following code fragment creates a matrix with a uniform scaling
component [2 2 2] and a translation component [3 4 5]. In other words, a vector
multiplied by this matrix is first scaled by a factor of two, then moved by three units
along the x axis, four units along y, and five units along z:
Transform myTransform = new Transform();
myTransform.set(new float[] { 2f, 0f, 0f, 3f,
0f, 2f, 0f, 4f,
0f, 0f, 2f, 5f,
0f, 0f, 0f, 1f });
Matrix operations
Once you have created a Transform, you can start applying some basic arithmetic
functions to it: you can transpose the matrix (M′ = Mᵀ), invert it (M′ = M⁻¹),
or multiply it with another matrix (M′ = MA). Note that each of these operations
overwrites the pre-existing value of the Transform with the result (M′). The matrix
multiplication functions come in several flavors:
void postMultiply(Transform transform)
void postScale(float sx, float sy, float sz)
void postTranslate(float tx, float ty, float tz)
void postRotate(float angle, float ax, float ay, float az)
void postRotateQuat(float qx, float qy, float qz, float qw)

The post prefix indicates that the matrix is multiplied from the right by the given matrix
(e.g., M′ = MA); pre would mean multiplying from the left (e.g., M′ = AM), but
there are no such methods in Transform. Going through the list of methods above,
the first three probably need no deeper explanation. The rotation method comes in two
varieties: postRotateQuat uses a quaternion to represent the rotation (see Section
2.3.1), whereas postRotate uses the axis-angle format: looking along the positive
rotation axis [ax ay az], the rotation is angle degrees clockwise.
To make things more concrete, let us use postScale and postTranslate to con-
struct the same matrix that we typed in manually in the previous example:
Transform myTransform = new Transform();
myTransform.postTranslate(3f, 4f, 5f);
myTransform.postScale(2f, 2f, 2f);
Transforming vertices
As in OpenGL, you should think of the matrix operations as applying to vertices in the
reverse order that they are written. If you apply the transformation TS to a vertex v,
the vertex is first scaled and then translated: T(Sv). Let us write out the matrices and
confirm that the above code fragment does indeed yield the correct result:
M′ = I T S =

    [1 0 0 0]   [1 0 0 3]   [2 0 0 0]   [2 0 0 3]
    [0 1 0 0] × [0 1 0 4] × [0 2 0 0] = [0 2 0 4]
    [0 0 1 0]   [0 0 1 5]   [0 0 2 0]   [0 0 2 5]
    [0 0 0 1]   [0 0 0 1]   [0 0 0 1]   [0 0 0 1]
One of the most obvious things to do with a transformation matrix is to transform
an array of vectors with it. The Transform class defines two convenience methods
for this purpose. The first, transform(float[] vectors), multiplies each 4-element
vector in the vectors array by this matrix and overwrites the original vectors with the
results (v′ = Mv, where v is a column vector). The other transform variant is a bit
more complicated:
void transform(VertexArray in, float[] out, boolean w)
Here, we take in 2D or 3D vectors in a VertexArray, set the fourth component to zero
or one depending on the w parameter, and write the transformed 4-element vectors to
the out array. The input array remains unmodified.
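A quick sanity check of the matrix we built above, using the in-place variant: the point
(1, 1, 1) should scale to (2, 2, 2) and then translate to (5, 6, 7):

float[] point = { 1f, 1f, 1f, 1f };  // one homogeneous 4-vector
myTransform.transform(point);        // in-place: point becomes (5, 6, 7, 1)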
The transform methods are provided mostly for convenience, as they play no role in
rendering or any other function of the API. Nonetheless, if you have a large number of vec-
tors that you need to multiply with a matrix for whatever purpose, these built-in methods
are likely to perform better than doing the same thing in Java code. The VertexArray
variant also serves a more peculiar purpose: it is the only way to read back vertices from a
VertexArray on many devices, as the necessary VertexArray.get methods were
only added in M3G 1.1.
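The readback trick mentioned above amounts to transforming by an identity matrix; a
sketch, where positions is an assumed 3-component VertexArray of numVertices
vertices:

float[] coords = new float[numVertices * 4];  // 4 components out per vertex
Transform identity = new Transform();         // a new Transform is the identity
identity.transform(positions, coords, true);  // coords now holds (x, y, z, 1) per vertex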
Other use cases
Now that you know how to set up Transform objects and use them to transform ver-
tices, let us look at what else you can use them for. First of all, in Graphics3D you
need them to specify the local-to-world transformations of the immediate-mode camera,
lights, and meshes. In both immediate mode and retained mode, you need a Transform
to set up an oblique or otherwise special projection in Camera, or any kind of projec-
tion for texture coordinates in Texture2D. Finally, you can (but do not have to) use a
Transform in the local-to-parent transformation of a Node. Each of these cases will
come up later on in this book.
13.3.2 Transformable
Transformable is an abstract base class for the scene graph objects Node and
Texture2D. Conceptually, it is a 4 × 4 matrix representing a node transformation or a
texture coordinate transformation. The matrix is made up of four components that can be
manipulated separately: translation T, orientation R, scale S, and a generic 4 × 4 matrix
M. During rendering, and otherwise when necessary, M3G multiplies the components
together to yield the composite transformation:

C = T R S M    (13.1)
A homogeneous vector p = [x y z w]ᵀ, representing a vertex coordinate or texture
coordinate, is then transformed into p′ = [x′ y′ z′ w′]ᵀ by:

p′ = Cp    (13.2)
The components are kept separate so that they can be controlled and animated indepen-
dent of each other and independent of their previous values. For example, it makes no
difference whether you first adjust S and then T, or vice versa; the only thing that matters

is what values the components have when C needs to be recomputed. Contrast this with
the corresponding operations in Transform, which are in fact matrix multiplications
and thus very much order-dependent.
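A small sketch of the difference: both orderings below produce the same composite
C = TRSM, whereas swapping postTranslate and postScale on a Transform
would change the resulting matrix:

Node node = new Group();          // Group is a concrete Node subclass
node.setScale(2f, 2f, 2f);
node.setTranslation(3f, 4f, 5f);  // order relative to setScale is irrelevant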
Note that for node transformations, the bottom row of the M component is restricted
to [0 0 0 1]; in other words, projections are not allowed in the scene graph. Texture
matrices do not have this limitation, so projective texture mapping is fully supported (see
Section 3.4.3).
Methods
The following four methods in Transformable allow you to set the transformation
components:
void setTranslation(float tx, float ty, float tz)
void setOrientation(float angle, float ax, float ay, float az)
void setScale(float sx, float sy, float sz)
void setTransform(Transform transform)
The complementary methods, translate, preRotate, postRotate, and scale,
each modify the current value of the respective component by applying an additional
translation, rotation, or scaling. The user-provided rotation can be applied to the left
(preRotate) or to the right (postRotate) of the current orientation component.
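For instance, a simple spin animation might accumulate rotation on a node each frame;
a sketch with an arbitrarily chosen per-frame angle:

// Rotate the node by 2 degrees per frame about its local Y axis.
node.postRotate(2f, 0f, 1f, 0f);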