in the illumination either to vertex colors or to texture maps. To do this, the environment
and the light sources are first modeled, and then rendered using a high-quality but perhaps
non-real-time method such as ray tracing or radiosity calculations. One could even take
photographs of real environments, or model environments with real lighting, and use those
as texture maps. The illuminated surfaces are then copied to the texture maps which are
used in the real-time application.
Below we describe various approaches to using texture maps to provide advanced lighting
effects. Figure 3.17 illustrates several of them. From top left, the first image shows dot3
bump mapping, which gives an illusion of higher geometric detail on the barrel and walls
that are affected by the main light source. The next image shows a projective spot light,
using the light map on the left middle. Top right adds an environment map that reflects
the light from the lamp off the surface of the barrel; the environment map is shown at
bottom left. The bottom middle image adds shadows, and the last image shows the final
image with all lighting effects combined.

Figure 3.17: Several passes of a scene: bump mapping, projective lighting (using the circular light map on left middle),
adding environment map reflection to the barrel (the cube map at left bottom), adding shadows, final image. Image copyright
AMD. (See the color plate.)
Light mapping
Often you might want to reuse the same texture maps, e.g., a generic floor, a wall panel, or
ceiling, for different parts of your scene, but those different areas have different lighting.
Then you can use light maps, which can usually be of much lower resolution than the
texture maps. The light maps are used to attenuate the texture maps (for white light) or
modulate them (for colored light) using multitexturing. The advantage over baking the lighting in is
the potential savings in storage space, and the possibility of having more dynamic lighting
effects. However, you need to have at least two texturing units, or you have to render the
object in two passes and suffer a significant speed penalty.
For moving lights, the light maps have to be generated on the fly. For simple scenes you
may be able to just project the polygons into the same space as the light, and calculate the
lighting equation directly into a corresponding texture map.
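As an illustration, a light map could be applied on a second texturing unit in OpenGL ES 1.x
roughly as follows. This is only a sketch: the texture object names and the per-unit texture
coordinate arrays are assumed to be created and bound elsewhere.

    #include <GLES/gl.h>

    /* Bind a base texture on unit 0 and a light map on unit 1; MODULATE on
       unit 1 multiplies the light map into the result of unit 0, attenuating
       (white light) or tinting (colored light) the base texture. */
    void bind_light_mapped_material(GLuint baseTex, GLuint lightMapTex)
    {
        glActiveTexture(GL_TEXTURE0);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, baseTex);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

        glActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, lightMapTex);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    }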


Projective lighting
It is possible to use projective texture mapping to project a light pattern onto the scene,
much as a slide projector would [SKv+92]. The texture map is usually an intensity map that looks like a
cross section of a spot light’s beam, often a bright circle that falls off toward the boundaries.
Since projecting light out is the inverse of a camera projection where light projects into
the camera from the scene, it should be no surprise that the mathematics are quite simi-
lar. Whereas with a camera you project the scene vertices into the frame buffer, you now
project them into a texture map to find which part of the texture projects onto which
vertex. This is done by first copying the object-space vertex locations into texture coordi-
nates, and then accumulating a transformation into the texture matrix, as follows.
First, the texture coordinates need to be transformed from object space into the world
coordinate system. Then you need to use a similar transformation as with the camera
to transform the vertices into the “eye coordinates” of the spot light. This is followed by
an application of a similar perspective projection matrix as with the camera. The last
step is to apply a bias matrix that maps the (s, t) coordinates from the [−1, 1] range
to [0, 1], which covers the spot light texture. These transformations happen at vertices,
and the final division by q is done, as discussed before, during the rasterization for
each fragment.
Let us check how the bias step works. Assume that after projection, we have an input texture
coordinate [−q, q, 0, q]^T. Without applying the bias, this would yield [−1, 1, 0, 1]^T,
that is, s = −1 and t = 1. To turn that into s = 0 and t = 1, we need to scale and translate
the coordinates by 1/2:

\[
\begin{bmatrix}
\tfrac{1}{2} & 0 & 0 & \tfrac{1}{2} \\
0 & \tfrac{1}{2} & 0 & \tfrac{1}{2} \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} -q \\ q \\ 0 \\ q \end{bmatrix}
=
\begin{bmatrix} 0 \\ q \\ 0 \\ q \end{bmatrix}.
\tag{3.8}
\]
This matrix ensures that (s, t) will always lie within the range [0, 1] after the homogeneous
division. The third row can be zero as the third texture coordinate is ignored. To summarize,
the complete texture matrix T is as follows:

\[
T = B \, P \, M_{we} \, M_{ow},
\tag{3.9}
\]

where M_{ow} is the transformation from object coordinates to world coordinates, M_{we}
is the transformation from world space to the eye space of the spot light, P is the spot
light projection matrix, and B is the bias matrix with the scale and offset shown in
Equation (3.8).
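As a sketch of how this could be set up in OpenGL ES 1.x, the object-space positions can be
fed in a second time as texture coordinates while T is accumulated on the texture matrix
stack. The function and parameter names below are ours, and the matrices of Equation (3.9)
are assumed to be available as column-major float arrays.

    #include <GLES/gl.h>

    /* Bias matrix B of Equation (3.8), in OpenGL's column-major order. */
    static const GLfloat bias[16] = {
        0.5f, 0.0f, 0.0f, 0.0f,   /* column 0 */
        0.0f, 0.5f, 0.0f, 0.0f,   /* column 1 */
        0.0f, 0.0f, 0.0f, 0.0f,   /* column 2 */
        0.5f, 0.5f, 0.0f, 1.0f    /* column 3 */
    };

    /* lightProj = P, worldToLight = M_we, objectToWorld = M_ow. */
    void setup_projective_light_texture(const GLfloat lightProj[16],
                                        const GLfloat worldToLight[16],
                                        const GLfloat objectToWorld[16],
                                        const GLfloat *objectSpacePositions)
    {
        /* Accumulate T = B P M_we M_ow into the texture matrix. */
        glMatrixMode(GL_TEXTURE);
        glLoadMatrixf(bias);
        glMultMatrixf(lightProj);
        glMultMatrixf(worldToLight);
        glMultMatrixf(objectToWorld);
        glMatrixMode(GL_MODELVIEW);

        /* Feed the object-space positions again as texture coordinates
           (implicit q = 1); the texture matrix then produces the projected
           (s, t, r, q), and the per-fragment division by q does the rest. */
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glTexCoordPointer(3, GL_FLOAT, 0, objectSpacePositions);
    }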
Ambient occlusion
Another technique that improves the quality of shading is ambient occlusion [Lan02],
derived from accessibility shading [Mil94]. Uniform ambient lighting, as discussed pre-
viously, is not very useful as it strips away all the shape hints. However, a very useful hint
of the local shape and shadowing can be obtained by estimating the fraction of the light
each surface point is likely to receive. One way to estimate that is to place a relatively large

sphere around a point, render the scene from the surface point, and store the fraction of
the surrounding sphere that is not occluded by other objects. These results are then stored
into an ambient occlusion map which, at rendering time, is used to modulate the amount
of light arriving at the surface. Figure 3.18 shows an ambient occlusion map on a polygon
mesh. The effect is that locations under other objects get darker, as do indentations in the
surface. Note that the creation of this map is typically done off-line and is likely to take
too long to be used interactively for animated objects.
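The off-line computation can be sketched roughly as follows; in practice the ray origin would
also be offset slightly along the normal to avoid self-intersection. This is only an illustration:
the trace_ray function and the vector type are hypothetical stand-ins for whatever ray casting
facility the content tool chain provides.

    #include <stdlib.h>

    typedef struct { float x, y, z; } Vec3;

    /* Hypothetical ray caster: returns non-zero if a ray from 'origin' along
       direction 'dir' hits any geometry within 'maxDist'. */
    extern int trace_ray(Vec3 origin, Vec3 dir, float maxDist);

    /* Estimate the fraction of the hemisphere around 'normal' that is open to
       the environment at 'point'; 1 means fully open, 0 means fully blocked. */
    float ambient_occlusion(Vec3 point, Vec3 normal, int numSamples, float maxDist)
    {
        int open = 0;
        for (int i = 0; i < numSamples; ++i) {
            /* Pick a random direction inside the unit ball and flip it into
               the hemisphere of the surface normal. */
            Vec3 d;
            do {
                d.x = 2.0f * rand() / (float)RAND_MAX - 1.0f;
                d.y = 2.0f * rand() / (float)RAND_MAX - 1.0f;
                d.z = 2.0f * rand() / (float)RAND_MAX - 1.0f;
            } while (d.x * d.x + d.y * d.y + d.z * d.z > 1.0f);
            if (d.x * normal.x + d.y * normal.y + d.z * normal.z < 0.0f) {
                d.x = -d.x; d.y = -d.y; d.z = -d.z;
            }
            if (!trace_ray(point, d, maxDist))
                ++open;
        }
        return (float)open / (float)numSamples;
    }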
Environment mapping
Environment mapping is a technique that produces reflections of the scene on very shiny
objects. The basic idea involves creating an image of the scene from the point of view of the
reflecting object. For spherical environment mapping one image is sufficient; for parabolic
mapping two images are needed; and for cube maps six images need to be created. Then,
for each point, the direction to the camera is reflected about the local normal vector, and
the reflected ray is used to map the texture to the surface.
Figure 3.18: A mesh rendered using just an ambient occlusion map without any other shading. The
areas that are generally less exposed to light from the environment are darker. Image courtesy of
Janne Kontkanen.
Spherical environment maps are view-dependent and have to be re-created for each new
eye position. Dual paraboloid mapping [HS98] is view-independent but requires two tex-
turing units or two passes. Cube mapping (Figure 3.19) is the easiest to use, and the easiest
to generate the texture maps for: just render six images from the center of the object (up,
down, and to the four sides). However, cube mapping is not included in the first genera-
tion of mobile 3D APIs.
Besides reflections, you can also do diffuse lighting via environment mapping [BN76].
If you filter your environment map with a hemispherical filter kernel, you can use the
surface normal directly to index into the environment map and get cheap per-pixel diffuse
lighting. This saves you from having to compute the reflection vector—you just need to
transform the surface normals into world space, which is easily achieved with the texture
matrix.
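For reference, the reflected direction used to index the environment map can be computed
per vertex as below; this is just the standard reflection formula, with a minimal vector type
of our own.

    typedef struct { float x, y, z; } Vec3;

    /* Reflect the unit direction 'd' (from the eye toward the surface point)
       about the unit surface normal 'n':  r = d - 2 (d . n) n.  Reflecting the
       direction from the surface toward the camera yields the same ray. */
    Vec3 reflect_dir(Vec3 d, Vec3 n)
    {
        float dn = d.x * n.x + d.y * n.y + d.z * n.z;
        Vec3 r = { d.x - 2.0f * dn * n.x,
                   d.y - 2.0f * dn * n.y,
                   d.z - 2.0f * dn * n.z };
        return r;
    }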

Texture lighting does not end with environment mapping. Using multiple textures as
lookup tables, it is possible to approximate many kinds of complex reflectance func-
tions at interactive rates [HS99]. The details are beyond the scope of this book, but these
techniques achieve much more realistic shading than is possible using the built-in lighting
model.

Figure 3.19: An environment cube map (right) and refraction map (center) used to render a well. (Image copyright
© AMD.) (See the color plate.)
3.4.4 FOG
In the real world, the air filters the colors of a scene. Faraway mountains tend to seem
bluish or grayish, and if there is fog or haze, objects get mixed with gray before disappear-
ing completely. OpenGL has support for a simple atmospheric effect called fog. Given
a fog color, objects close to the camera have their own color, a bit farther away they get
mixed with the fog color, and yet farther away they are fully covered by the fog.
There are three functions for determining the intensity of the fog: linear, exponential, and
square exponential. Linear fog is easy to use: you just give a starting distance before which
there is no fog, and an ending distance after which all objects are covered by fog. The
fraction of the fragment color that is blended with the fog color is
\[
f = \frac{\mathit{end} - z}{\mathit{end} - \mathit{start}},
\tag{3.10}
\]
where z is the distance to the fragment along the z axis in eye coordinates, start is the
fog start distance, and end is the fog end distance. The result is clamped to [0, 1]. Linear
fog is often used for simple distance cueing, but it does not correspond to real-life fog
attenuation. A real homogeneous fog absorbs, say, 10% of the light for every 10 meters.
This continuous fractional attenuation corresponds to the exponential function, which

OpenGL supports in the form of
\[
f = e^{-dz},
\tag{3.11}
\]
where d is a user-given density, a nonnegative number. Real fog is not truly homogeneous
but its density varies, so even the exponential function is an approximation. OpenGL also
supports a squared exponential version of fog:

\[
f = e^{-(dz)^2}.
\tag{3.12}
\]
This function has no physical meaning; it simply has an attenuation curve with a different
shape that can be used for artistic effect. In particular, it does not correspond to double
attenuation due to light traversing first to a reflective surface and then reflecting back to
the observer, as some sources suggest. With large values of d both the exponential (EXP)
and the squared exponential (EXP2) fog behave fairly similarly; both functions approach
zero quite rapidly. However, at near distances, or with small density values, as shown in
Figure 3.20, the functions have different shapes. Whereas EXP begins to attenuate much
more sharply, EXP2 first attenuates more gradually, followed by a sharper fall-off before
flattening out, and often produces a better-looking blend of the fog color.
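As a concrete example, exponential fog with the density used in Figure 3.20 could be enabled
in OpenGL ES 1.x roughly as follows; the fog color here is just an illustrative choice.

    #include <GLES/gl.h>

    void enable_exp_fog(void)
    {
        static const GLfloat fogColor[4] = { 0.7f, 0.7f, 0.7f, 1.0f };

        glEnable(GL_FOG);
        glFogf(GL_FOG_MODE, GL_EXP);          /* or GL_LINEAR / GL_EXP2 */
        glFogf(GL_FOG_DENSITY, 1.0f / 50.0f); /* d = 1/50 as in Figure 3.20 */
        glFogfv(GL_FOG_COLOR, fogColor);

        /* For GL_LINEAR, the start and end distances would be set instead:
           glFogf(GL_FOG_START, 20.0f); glFogf(GL_FOG_END, 70.0f); */
    }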
Figure 3.20: Fog functions. In this example, LINEAR fog starts from 20 and ends at 70, EXP and
EXP2 fogs both have d = 1/50. LINEAR is the easiest to control, but produces sharp transitions. EXP
corresponds to the attenuation by a uniformly distributed absorbing material, such as real fog, but
gives less control as the attenuation in the beginning is always so severe. EXP2 can sometimes give
the esthetically most pleasing results.
Performance tip: A common use of fog is really a speed trick to avoid having to draw
too many objects in a scene. If you use fog that obscures the faraway objects, you can
skip drawing them entirely, which brings frame rates up. The distance to the complete
fog and to the far viewing plane should be aligned: if you use linear fog, place the far
viewing plane slightly beyond the fog end distance; with exponential fog, place it at a
distance where the fog contributes over 99% or so.
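For instance, under this reading of Equation (3.11), the distance beyond which EXP fog
hides at least 99% of the fragment color satisfies

\[
e^{-dz} \le 0.01
\quad\Longleftrightarrow\quad
z \ge \frac{\ln 100}{d} \approx \frac{4.6}{d},
\]

so with the d = 1/50 of Figure 3.20 the far plane could be placed at roughly 230 units; the
corresponding bound for EXP2 is z ≥ √(ln 100)/d ≈ 2.15/d.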
Pitfall: Implementations are allowed to perform the fog calculations at the vertices, even
though fog really should be calculated at every pixel. This may yield artifacts with large
triangles. For example, even if you select a nonlinear (exponential or double exponen-
tial) fog mode, it may be interpolated linearly across the triangle. Additionally, if the
triangle extends beyond the far plane and is clipped, the vertex introduced by clipping
may have a completely incorrect fog intensity.
3.4.5 ANTIALIASING
The frame buffer consists of pixels that are often thought of as small squares.¹ When
a polygon edge is rasterized at any angle other than horizontal or vertical, the pixels can
only approximate the smooth edge by a staircase of pixels. In fact, there is a range of slopes
that all produce, or alias to, the same staircase pattern. To make the jagged pixel pattern
less obvious, and to disambiguate those different slopes, one can blend the foreground
and background colors of the pixels that are only partially covered by the polygon. This

is called antialiasing, and is illustrated in Figure 3.21.
Since optimal antialiasing needs to take into account human visual perception, charac-
teristics of the monitor on which the final image is displayed, as well as the illumination
surrounding the monitor, most 3D graphics APIs do not precisely define an antialiasing
algorithm. In our graphics pipeline diagram (Figure 3.1), antialiasing relates to the “cov-
erage generation” and “multisampling” boxes.
Figure 3.21: A square grid cannot accurately represent slanted edges; they can only be approxi-
mated with a staircase pattern. However, blending the foreground and the background at the partially
covered pixels makes the staircase pattern far less obvious.
¹ Although there are good arguments why that is not the right view [Smi95].
Edge antialiasing
When rasterizing straight edges, it is fairly trivial to calculate the coverage of the polygon
over a pixel. One could then store the coverage value [0, 1] in the alpha component of the
fragment, and use a later blending stage to mix the polygon color with the background
color. This edge antialiasing approach can do an acceptable job when rendering line draw-
ings, but it has several drawbacks. Think about the case where two adjacent polygons
jointly fully cover a pixel such that each individually covers only half of it. As the first
polygon is drawn, the pixel gets 50% of the polygon color, and 50% of the background
color. Then the second polygon is drawn, obtaining 75% of the polygon’s color, but still
25% background at the seam. There are tricks that mark the outer edges of a contin-
uous surface so this particular problem can be avoided, but this is not always possible.
For example, if a polygon penetrates through another, the penetration boundary is inside
the two polygons, not at their edges; there edge antialiasing does not work, and the jaggies are
fully visible.
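For the line-drawing case mentioned above, the coverage-to-alpha approach maps to smoothed
primitives combined with blending. A minimal sketch, assuming an OpenGL ES 1.x context
(note that smoothing is only a hint, and implementations are free to ignore it):

    #include <GLES/gl.h>

    void enable_antialiased_lines(void)
    {
        /* The rasterizer writes an approximate coverage value into the
           fragment alpha... */
        glEnable(GL_LINE_SMOOTH);
        glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);

        /* ...and blending mixes the line color with the background in
           proportion to that coverage. */
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    }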
Full-scene antialiasing
Edge antialiasing only works at the edges of primitives, but jaggies can happen also at
intersections of polygons. The depth buffer is resolved at a pixel level, and if a blue triangle
pokes through a white one, the jagged intersection boundary is clearly visible. Full-scene
antialiasing (FSAA) can correctly handle object silhouettes, adjacent polygons, and even

intersecting polygons. Whereas edge antialiasing can be turned on or off per primitive,
FSAA information is accumulated for the duration of the whole frame, and the samples
are filtered in the end.
There are two main approaches for FSAA, supersampling and multisampling. The basic
idea of supersampling is simply to first rasterize the scene at higher resolution using point
sampling, that is, each primitive affects the pixel if one point such as the center of a pixel is
covered by the object. Once the frame is complete, the higher resolution image is filtered
down, perhaps using a box filter (simple averaging) or Gaussian or sinc filters that tend to
give better results but require more samples and work [GW02]. This is a very brute force
approach, and the processing and memory requirements increase linearly by the number
of samples per pixel.
Multisampling approximates supersampling with a more judicious use of resources. At
each pixel, the objects are sampled several times, and various information such as color
and depth may be stored at each sample. The samples coming from the same primitive
often sample the textured color only once, and store the same value at each sample. The
depth values of the samples, on the other hand, are typically computed and stored sepa-
rately. The OpenGL specification leaves lots of room to different antialiasing approaches;
some implementations may even share samples with their neighboring pixels, sometimes
gaining better filtering at the cost of image sharpness.
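In practice, whether multisampling is available is decided when the rendering surface is
created. With EGL, one could request a multisampled config roughly as follows; the
four-samples figure is only an example, and no such config may exist on a given device.

    #include <EGL/egl.h>

    EGLConfig choose_multisampled_config(EGLDisplay dpy)
    {
        static const EGLint attribs[] = {
            EGL_RED_SIZE,       5,
            EGL_GREEN_SIZE,     6,
            EGL_BLUE_SIZE,      5,
            EGL_DEPTH_SIZE,     16,
            EGL_SAMPLE_BUFFERS, 1,   /* ask for a multisample buffer      */
            EGL_SAMPLES,        4,   /* with at least 4 samples per pixel */
            EGL_NONE
        };
        EGLConfig config;
        EGLint numConfigs = 0;

        /* numConfigs stays 0 if no matching config is available. */
        eglChooseConfig(dpy, attribs, &config, 1, &numConfigs);
        return (numConfigs > 0) ? config : (EGLConfig)0;
    }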
Other types of aliasing
There are other sources of aliasing in 3D graphics beyond polygon rasterization. They can
also usually be remedied by denser sampling followed by filtering, as is done with pixels in
FSAA. One example is approximating an area light source with point lights: using only a
few point lights may create visible artifacts.
Sampling a moving object at discrete times may produce temporal aliasing—the familiar
effect where the spokes of a wheel appear to rotate backward. The eye would integrate
such motion into a blur; this can be simulated by rendering the animation at a higher
frame rate and averaging the results into motion blur.
3.5 PER-FRAGMENT OPERATIONS

Each pixel, from the point of view of memory and graphics hardware, is a collection of bits.
If the corresponding bits are viewed as a collection over the frame buffer, they are called
bitplanes, and some of those bitplanes are in turn combined into logical buffers such as the
back color buffer (usually RGB with or without alpha), depth buffer, and stencil buffer.
After the fragments have been generated by the rasterization stage, there are still several
operations that can be applied to them. First, there is a sequence of tests that a fragment
is subjected to, using either the fragment location, values generated during rasterization,
or a value stored in one of the logical buffers.
A blending stage then takes the incoming color and blends it with the color that already
exists at the corresponding pixel. Dithering may change the color values to give an illusion
of a greater color depth than what the frame buffer really has. Finally, a logical operation
may be applied between the incoming fragment’s color and the existing color in the frame
buffer.
3.5.1 FRAGMENT TESTS
There are four different tests that a fragment can be subjected to before blending and
storing into frame buffer. One of them (scissor) is based on the location of the fragment,
while the rest (alpha, stencil, depth) compare two values using a comparison function
such as LESS (<), LEQUAL (≤), EQUAL (=), GEQUAL (≥), GREATER (>), NOTEQUAL
(≠), or accept (ALWAYS) or reject (NEVER) the fragment regardless of the outcome of
the comparison.
Scissor test
The scissor test simply determines whether the fragment lies within the scissor rectangle,
and discards fragments outside it. With scissoring you can draw
the screen in several stages, using different projection matrices. For example, you could
first draw a three-dimensional view of your world using a perspective projection matrix,
and then render a map of the world on the side or corner of the screen, controlling the
drawing area with a scissor rectangle.
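The two-pass setup just described might look roughly like this; the quarter-screen map size
is arbitrary, and the projection and drawing calls are left as placeholders.

    #include <GLES/gl.h>

    void draw_frame(int screenWidth, int screenHeight)
    {
        /* Pass 1: the 3D view covers the whole screen. */
        glDisable(GL_SCISSOR_TEST);
        /* ... set a perspective projection and draw the world ... */

        /* Pass 2: restrict drawing to the lower-left corner and draw the map
           there with its own projection. */
        glEnable(GL_SCISSOR_TEST);
        glScissor(0, 0, screenWidth / 4, screenHeight / 4);
        /* ... set an orthographic projection and draw the map ... */
        glDisable(GL_SCISSOR_TEST);
    }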
Alpha test
The alpha test compares the alpha component of the incoming fragment with a user-

given reference or threshold value, and based on the outcome and the selected comparison
function either passes or rejects the fragment. For example, with LESS the fragment is
accepted if the fragment’s alpha is less than the reference value.
One use case for alpha test is rendering transparent objects. In the first pass you can draw
the fully opaque objects by setting the test to EQUAL 1 and rendering the whole scene,
and then in the second pass draw the scene again with blending enabled and setting the
test to NOTEQUAL 1.
Another use is to make real holes in textured objects. If you use a texture map that modifies
the fragment alpha and sets it to zero, the pixel may appear transparent, but the depth value
is still written to the frame buffer. With the alpha test set to GEQUAL 0.1, fragments with
alpha smaller than 0.1 will be skipped completely, creating a real hole.
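A minimal sketch of that hole-cutting setup in OpenGL ES 1.x, with the 0.1 threshold from
the text; the bound texture is assumed to carry an alpha channel.

    #include <GLES/gl.h>

    void enable_alpha_cutout(void)
    {
        /* Discard fragments whose alpha falls below 0.1 so that they write
           neither color nor depth, punching a real hole into the surface. */
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GEQUAL, 0.1f);
    }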
Stencil test
The stencil test can only be performed if there is a stencil buffer, and since every additional
buffer uses a lot of memory, not all systems provide one. The stencil test conditionally
discards a fragment based on the outcome of a comparison between the pixel’s stencil
buffer value and a reference value. At its simplest, one can initialize the stencil buffer with
zeros, paint an arbitrary pattern with ones, and then with NOTEQUAL 0 draw only within
the stencil pattern. However, if the stencil pattern is a simple rectangle, you should use
scissor test instead and disable stencil test since scissoring is much faster to execute.
Before using the stencil test some values must be drawn into the stencil buffer. The buffer
can be initialized to a given value (between zero and 2^s − 1 for an s-bit stencil buffer), and
one can draw into the stencil buffer using a stencil operator. One can either KEEP the
current value, set it to ZERO, REPLACE it with the reference value, increment (INCR) or
decrement (DECR) with saturation or without saturation (INCR_WRAP, DECR_WRAP),
or bitwise INVERT the current stencil buffer value. The drawing to the stencil buffer can
be triggered by a failed stencil test, a failed depth test, or a passed depth test.
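The simple masking scheme described earlier (clear to zero, paint ones, then draw with
NOTEQUAL 0) could be set up roughly as follows, assuming the rendering surface was
created with a stencil buffer:

    #include <GLES/gl.h>

    void draw_with_stencil_mask(void)
    {
        /* Clear the stencil buffer to zero. */
        glClearStencil(0);
        glClear(GL_STENCIL_BUFFER_BIT);

        /* Pass 1: draw the mask shape, writing 1 into the stencil wherever it
           covers a pixel; color and depth writes are disabled meanwhile. */
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_ALWAYS, 1, 0xff);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_FALSE);
        /* ... draw the arbitrary mask pattern ... */

        /* Pass 2: draw the scene only where the stencil is not zero. */
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
        glStencilFunc(GL_NOTEQUAL, 0, 0xff);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        /* ... draw the scene ... */

        glDisable(GL_STENCIL_TEST);
    }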
Some more advanced uses of stencil test include guaranteeing that each pixel is drawn
only once. If you are drawing a partially transparent object with overlapping parts, the

overlapping sections will appear different from areas with no overlap. This can be fixed
by clearing the stencil buffer to zeros, drawing only to pixels where the stencil is zero, and
replacing the value to one when you draw (KEEP operation for stencil fail, REPLACE for
