Title: Computer Graphics
1 Computer Graphics
- Lecture Eight
- Photorealistic Computer Graphics
- Shadows
- Ray Tracing Algorithm
- Lecturer: Heather Sayers
- E-mail: hm.sayers_at_ulster.ac.uk
- URL: http://www.infm.ulst.ac.uk/heather
2 Content
- Shadows: the z-buffer algorithm
- Object/image space rendering
- The Ray Tracing Algorithm
- Ray-sphere intersections
- Ray-polygon intersections
- Efficiencies
- Bounding volumes
- Spatial subdivision
3 Depth Sorting
- Used for hidden surface removal
- The basic problem in hidden surface removal is to find the closest object to the viewer at each pixel
- So every object in the scene can be sorted in depth (z) and then scan converted in order, starting with the surface of greatest depth (a minimal sketch follows)
- Such algorithms (e.g. the Painter's Algorithm) work well for surfaces that do not overlap in depth; overlapping surfaces require further processing (and since overlaps occur in most scenes, these algorithms are rarely used today)
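For concreteness, here is a minimal Python sketch of the depth-sort idea; `max_depth` (each surface's farthest distance from the viewer) and the `scan_convert` callback are illustrative names only, not part of any real API:

    # Painter's-style rendering: draw surfaces back to front so that nearer
    # surfaces simply overwrite farther ones in the frame buffer.
    def painters_render(surfaces, scan_convert):
        # "max_depth" is an assumed field holding each surface's farthest
        # distance from the viewer; farthest surfaces are drawn first.
        for surface in sorted(surfaces, key=lambda s: s.max_depth, reverse=True):
            scan_convert(surface)

This fails exactly where the slide says it does: when two surfaces overlap in depth, no single sort order draws them correctly.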
4 Z-buffer
- A common image-based approach is to compare
surface depths (z-values) at each pixel position
on the projection plane
[Figure: surfaces S1 and S2 both project to pixel P(x, y) on the projection plane; their depths z_S1 and z_S2 are compared along the z axis]
5 Z-buffer
- Commonly implemented in hardware as an additional block of memory, storing the current z-value for the surface currently rendered at each pixel
- Initially the z-buffer locations are initialised to -infinity
- During scan conversion, each surface listed in the polygon table is processed scanline by scanline (top to bottom), calculating the depth (z) at each pixel position
- The algorithm is shown on the next slide
6 Z-buffer algorithm
- // Initialise z-buffer and FB for every location
- depth(x,y) = -infinity
- FB(x,y) = RGB_background
- // For every (x,y) position on every projected polygon surface,
- // compare depth value to previously stored value in z-buffer
- z_current = getDepthValue(current_surface, x, y)
- if z_current > depth(x,y) then
-     depth(x,y) = z_current
-     FB(x,y) = RGB_current_surface(x,y)
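A minimal Python rendering of the pseudocode above, keeping the slide's convention that larger z means nearer the viewer; `covered_pixels`, `depth_at` and `colour_at` are hypothetical helpers that a scan-line rasteriser would supply:

    def zbuffer_render(width, height, surfaces, background):
        # Initialise z-buffer and frame buffer for every location.
        depth = [[float("-inf")] * width for _ in range(height)]
        frame = [[background] * width for _ in range(height)]
        for surface in surfaces:
            for (x, y) in surface.covered_pixels(width, height):
                z = surface.depth_at(x, y)
                if z > depth[y][x]:          # nearer than what is stored
                    depth[y][x] = z
                    frame[y][x] = surface.colour_at(x, y)
        return frame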
7 Z-buffer and shadows
- The classic z-buffer algorithm can be augmented to support shadows
- The technique requires a separate shadow z-buffer for each light source, and is a 2-step process
- Firstly the scene is rendered and depth information stored in the shadow z-buffer, using the light source as the viewpoint
- This computes a depth image, from the light source, of the polygons that are visible to the light
8 Shadow Z-buffer
- Step 2: render the scene normally using a z-buffer algorithm
- The normal process is enhanced as follows: if a point (x, y, z) is visible, a transformation maps the point from 3D screen space to (x', y', z') in screen space with the light source as origin
- The transformed x' and y' are used to index the shadow buffer, and the corresponding depth value is compared with z'
9 Shadow Z-buffer
- If z' is greater than the stored value in the shadow z-buffer for that point, then another surface is nearer to the light source than the point under consideration, and the point is in shadow; otherwise the point is not in shadow and is rendered normally (see the sketch below)
- This algorithm obviously has high memory requirements
- It is also inefficient in that calculations are performed for surfaces that are subsequently overwritten
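A sketch of this visibility test in Python; `to_light_space` and `shadow_depth` stand in for the light-space transform and the pre-rendered shadow buffer, and the small bias (a common refinement not mentioned in the slides) suppresses self-shadowing artefacts:

    def in_shadow(point, to_light_space, shadow_depth, bias=1e-3):
        # Map the visible point into screen space with the light as origin.
        xl, yl, zl = to_light_space(point)
        # If a surface nearer to the light was recorded at (xl, yl),
        # this point is occluded and therefore in shadow.
        return zl > shadow_depth[int(yl)][int(xl)] + bias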
10 3D Rendering
- Rendering can be classified into 2 types: image space and object space
- Image space rendering algorithms are the most common:
- the polygons are first clipped to the viewing frustum,
- an illumination algorithm is applied to light the surfaces,
- the surfaces are then transformed into image space,
- and finally rasterised
11 Image-based Rendering
- Clipping involves removing surfaces which lie outside the viewing frustum
- An illumination algorithm takes account of any ambient light, the position of the light sources (and the viewing position), and the surface characteristics, and calculates a colour at the vertices of each triangle
- Transformation matrices are applied to each lit 3D triangle to project the triangle onto the 2D image plane
- Each triangle is then rasterised: colours for each pixel covering the triangle are interpolated from the vertex colours, and written to the frame buffer subject to the z-buffer test
12 Object-based Rendering
- Object-based rendering operates in object (or scene) coordinate space, i.e. 3D space
- Algorithms such as ray tracing and radiosity are object-based algorithms: these techniques do not involve projections of the 3D objects onto the 2D image plane, are therefore more accurate, and produce more realistic images
- The downside is that since they operate in 3D (i.e. floating point) space, they are not (yet) suitable for implementation in hardware, and are therefore very much slower than image-space techniques
13 The Ray Tracing Algorithm
- Attempts to model the physics of light energy: light energy is modelled as a number of light rays emanating from the light source and illuminating the scene
- The light rays are absorbed by some objects, reflect off some objects, and pass through some objects
- Some of these light rays then pass out of the scene through the camera to the viewer
- In practice, backward ray tracing is employed, where rays emanate from the viewer and are traced through the centre of each pixel into the scene
14 Ray Tracing
- First set up a coordinate system with the pixel positions designated in the xy plane
- From the centre of projection, determine a ray path that passes through the centre of each screen-pixel position
- Illumination effects accumulated along this ray path are then assigned to the pixel
- Based on the principles of geometric optics: light rays from the surfaces in a scene emanate in all directions, and some will pass through the pixel positions in the projection plane
- Since there is an infinite number of ray paths, the contributions to a particular pixel are found by tracing a light path backward from the pixel into the scene
15 Backward Ray Tracing
[Figure: backward ray tracing. A primary ray from the image plane hits reflective cube A; a reflection ray continues to opaque cube B, and shadow rays are traced from the intersection points toward the light source]
16 Ray Tracing
- For each pixel ray, we test each surface in the scene to determine if it is intersected by the ray
- If a surface is intersected, we calculate the distance from the pixel to the surface intersection point
- The closest intersected surface (smallest distance) is found (hidden surface removal): it is the visible surface for that pixel
- Depending on the surface characteristics, more rays are traced from that intersection point to the light sources, and in the reflection and refraction directions
17 Ray Tracing
- We then reflect the ray off the visible surface along a specular path (angle of reflection = angle of incidence)
- If the surface is transparent, we also send a ray through the surface in the refraction direction
- Reflection and refraction rays are called secondary rays
- This is a recursive process, repeated for each secondary ray until a predetermined maximum recursion level is reached (set by the user / determined by the amount of storage space available) or the ray strikes a light source
18 Ray Tracing
- The result of this process is an intersection tree, which is initialised with the intersection produced by the primary ray; a node is added to the tree for each object intersected by the secondary rays (left branches for reflection paths, right branches for transmission paths)
- The intensity assigned to a pixel is determined by accumulating the intensity contributions, starting at the surfaces intersected at the bottom of the ray-tracing tree
- If no surfaces are intersected by the pixel ray, the tree is empty and the pixel is assigned the background colour
- The contribution of each node to the final intensity of the pixel represented by the tree is determined as described in Whitted's 1980 paper
19 Ray Tracing
- The final colour at a pixel depends on
- the properties of the intersected surfaces,
- the relative positions of the surfaces, and their orientation with respect to the light sources
- The intensity of light at each intersection point P is a combination of the direct illumination at P and the indirect illumination due to reflection and refraction, and is given by the expression on the next slide
20 Local Illumination
- According to Whitted's local illumination model, the colour at intersection point P is given by

I(P) = I_amb + I_d + I_s + I_t

where I_amb is the ambient light component, I_d is the diffuse reflection component, I_s is the specular reflection component, and I_t is the specular transmission component
21 Ambient Light
- This component models light that arrives at a surface indirectly. Ambient light does not originate from a particular source but is rather the product of light from all the surfaces in the environment. The ambient light term has the form

I_amb = k_a I_a

where I_a is the ambient light, assumed to be constant for all objects, and k_a is the ambient reflection coefficient, which determines the amount of ambient light reflected from a surface. Ambient light is a uniform illumination, the product of multiple reflections of light off other objects in the environment
22 Diffuse Reflection (Lambertian)
- Diffuse reflection, also called Lambertian reflection, models light reflection off dull surfaces. The intensity of light at a point on the surface is proportional to the cosine of the angle θ between the normal to the surface at that point, N, and the direction to the light source, L:

I_d = I_l k_d cos θ

[Figure: unit vectors L (toward the light) and N (surface normal) at a surface point, separated by angle θ]

I_l is the intensity of the light source; k_d is the diffuse reflection coefficient
23 Diffuse Reflection
- Note: this can be re-written in terms of the normal vector and the light vector as

I_d = I_l k_d (L . N)

Diffuse reflection does not depend on the view point: these surfaces appear equally bright regardless of the viewing angle. (A direct translation into code follows.)
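As a concrete illustration, the Lambertian term above maps directly onto a few lines of Python; L and N are assumed to be unit 3-vectors stored as tuples:

    def diffuse(I_l, k_d, L, N):
        # I_d = I_l * k_d * (L . N), clamped at zero so that a light
        # behind the surface contributes nothing.
        ndotl = max(0.0, sum(l * n for l, n in zip(L, N)))
        return I_l * k_d * ndotl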
24 Specular Reflection
- Specular reflection refers to the amount of light reflected due to the shininess of a surface. In contrast to diffuse reflection, specular reflection is highly dependent on the viewing angle.

[Figure: vectors L, N, R and V at a surface point; L and R each make angle θ with the normal N, and R makes angle α with the view direction V]
25 Specular Reflection
- Ideally, in the case of perfectly specular surfaces, the light is reflected back in a single direction, R, the mirror of the incoming light direction about the surface normal
- Specular reflection is proportional to the cosine of the angle α between the reflection direction, R, and the viewer direction, V
- In nature, shiny surfaces are not perfectly specular, but reflect light in a spread of directions around the direction of perfect specular reflection, R (a common way of modelling this spread is sketched below)
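The slides describe specular reflection qualitatively rather than giving a formula. A widely used concrete instance is the Phong model, I_s = I_l k_s cos^n α, where the exponent n controls how tightly the highlight clusters around R; treating this as the model here is an assumption, not something the lecture states:

    def specular_phong(I_l, k_s, R, V, n):
        # Phong specular term: proportional to cos(alpha)^n, where alpha is
        # the angle between unit vectors R and V. Larger n gives a tighter,
        # shinier highlight. (Phong is an illustrative choice of model.)
        rdotv = max(0.0, sum(r * v for r, v in zip(R, V)))  # cos(alpha)
        return I_l * k_s * (rdotv ** n)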
26 Specular Transmission
- Transmission refers to the amount of light refracted due to the transparency of a surface
- Each medium (surface) has a refractive index associated with it, which describes the speed of light through the medium compared to the speed of light in a vacuum

[Figure: incident ray L making angle θi with the normal N in a medium of refractive index ni; transmitted ray T making angle θt on the far side, in a medium of refractive index nt]
27 Specular Transmission
- If ni and nt are the refractive indices of the two media, the direction of transmission, T, follows from Snell's law, ni sin θi = nt sin θt. With η = ni/nt and cos θi = L . N, the standard vector form is

T = -η L + (η cos θi - sqrt(1 - η²(1 - cos²θi))) N

(if the quantity under the square root is negative, total internal reflection occurs and there is no transmitted ray)
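A small Python sketch of this computation, assuming L and N are unit tuples with L pointing away from the surface on the incident side:

    import math

    def refract(L, N, n_i, n_t):
        # Transmission direction T = -eta*L + (eta*cos_i - cos_t)*N,
        # with eta = n_i/n_t, cos_i = L . N and cos_t from Snell's law.
        eta = n_i / n_t
        cos_i = sum(l * n for l, n in zip(L, N))
        k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)   # this is cos^2(theta_t)
        if k < 0.0:
            return None                               # total internal reflection
        cos_t = math.sqrt(k)
        return tuple((eta * cos_i - cos_t) * n - eta * l for l, n in zip(L, N))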
28 Ray-object intersections
- The idea behind the intersection test between a ray and an object is to substitute the mathematical equation of the ray into the object equation and look for a real solution
- If there is a real solution, then there is an intersection between the ray and the object, and the actual intersection point can be calculated
- A ray in ray tracing is defined in terms of the ray origin, O(x0, y0, z0), and the ray direction vector, D(xd, yd, zd). The set of points on the line defined by O and D are given by the parametric equation of a ray...
29 Ray-object intersections

R(t) = O + t D,  t ≥ 0

- t is the distance of the point from the ray origin, if the ray direction vector is normalised
30 Ray-object intersections
- The x, y, z coordinates of a point on the path of the ray are given by

x = x0 + xd t
y = y0 + yd t
z = z0 + zd t
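In code this evaluation is one line per coordinate; a tiny sketch with coordinates held in tuples:

    def ray_point(O, D, t):
        # Point on the ray at parameter t: O + t*D, per coordinate.
        return tuple(o + d * t for o, d in zip(O, D))

    # Example: ray_point((0, 0, 0), (0, 0, 1), 2.5) gives (0.0, 0.0, 2.5)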
31 Ray-object intersections
- We will now look at ray-object intersections for some common primitives: spheres and polygons
- Ray-sphere intersections
- Ray-polygon intersections
32 Ray-sphere intersections
- The mathematical equation for a sphere with centre C(xc, yc, zc) and radius r is given by

(x - xc)² + (y - yc)² + (z - zc)² = r²

- By substituting the ray equation into the above equation we get

(x0 + xd t - xc)² + (y0 + yd t - yc)² + (z0 + zd t - zc)² = r²

- which gives a quadratic equation in t of the form

a t² + b t + c = 0
33 Ray-sphere intersection
- If the equation does not have any real roots, the ray does not intersect the sphere
- If there is only one real root, the ray is tangential to the sphere
- Finally, if there are two real roots, the smallest positive root gives the ray entry point into the sphere, and the other root gives the ray exit point
- The roots of this equation are substituted back into the parametric equation of the ray to give the actual intersection points
34 Ray-sphere intersection
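The quadratic-root procedure above can be sketched in Python as follows; O, D and C are coordinate tuples, and D is assumed normalised:

    import math

    def ray_sphere(O, D, C, r):
        # Substitute O + t*D into the sphere equation and solve
        # a*t^2 + b*t + c = 0; return the smallest positive root
        # (the entry point's t), or None for a miss.
        oc = tuple(o - c for o, c in zip(O, C))              # O - C
        a = sum(d * d for d in D)
        b = 2.0 * sum(d * v for d, v in zip(D, oc))
        c = sum(v * v for v in oc) - r * r
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None                                      # no real roots: miss
        sq = math.sqrt(disc)
        for t in sorted(((-b - sq) / (2 * a), (-b + sq) / (2 * a))):
            if t > 1e-9:                                     # ignore hits behind the origin
                return t
        return None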
35 Ray-polygon intersection
- The first step is to test for intersection between the ray and the plane defined by the points of the polygon
- The mathematical equation for a plane through point P(xp, yp, zp), with normal N(a, b, c), is given by

a(x - xp) + b(y - yp) + c(z - zp) = 0

By substituting the ray equation into the above equation we get

a(x0 + xd t - xp) + b(y0 + yd t - yp) + c(z0 + zd t - zp) = 0

which gives a linear equation in t...
36 Ray-polygon intersection
Solving the linear equation for t gives

t = (a(xp - x0) + b(yp - y0) + c(zp - z0)) / (a xd + b yd + c zd)

If the denominator of this equation is equal to zero, the ray is parallel to the plane, so an intersection does not occur. If the ray intersects the plane, the intersection point IP also needs to be tested against the polygon for containment.
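A sketch of the ray-plane step in Python, using the same tuple conventions as before (N is the plane normal, P a point on the plane):

    def ray_plane(O, D, P, N, eps=1e-9):
        # Solve N . (O + t*D - P) = 0 for t.
        denom = sum(n * d for n, d in zip(N, D))             # N . D
        if abs(denom) < eps:
            return None                                      # parallel: no hit
        t = sum(n * (p - o) for n, p, o in zip(N, P, O)) / denom
        return t if t > eps else None                        # behind origin: no hit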
37 Ray-polygon intersection
[Figure: two rays, Ray1 and Ray2, intersecting the polygon's plane at points IP1 and IP2, illustrating the containment test]
38 Ray-polygon intersection
- Point in polygon test (a code sketch follows this list):
- extend a line to infinity from the IP in the plane,
- count the number of edges crossed:
- if even, IP does not lie inside the polygon,
- if odd, IP lies inside the polygon.
39 Illumination
- Once a valid intersection point has been found, it is stored
- Once all the polygons have been tested, the closest valid intersection point is retained: this represents the closest visible surface to the viewer, and the illumination algorithm is then performed on this point
40 Ray tracing efficiencies - 1
- Back-face surface removal takes place before illumination, after transformation into scene space
- Recall that solid objects are made up of a number of surfaces which describe the boundary of the object (b-rep models)

[Figure: a cube with vertices A to H. Surfaces ABFE, BCGF and FGHE are visible; surfaces AEHD, CDHG and ADBC are back facing from this viewing position, and can be removed prior to illumination and rendering]
41 Back-face surface removal
- In order to determine whether or not a surface is back facing, take the dot product of the viewing vector with the surface normal (which points outward from the object)

Remember: v . n = |v| |n| cos θ. If θ < 90°, the surface is visible; otherwise it is back facing. If θ < 90°, then cos θ > 0. Therefore, if v . n > 0 the surface is visible; otherwise the surface is back facing, and is culled (a one-line test is sketched below).
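The test above reduces to a single comparison in Python; v is assumed to point from the surface toward the viewer and n is the outward normal:

    def is_back_facing(v, n):
        # Visible when v . n > 0, per the slide; cull otherwise.
        return sum(vi * ni for vi, ni in zip(v, n)) <= 0.0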
42 Efficiencies - 2
- Since every ray must be intersected with every surface, which places a very high computational load on the system (up to 90% of the work), a number of efficiencies have been introduced to accelerate the naïve ray tracing algorithm:
- bounding volume hierarchies
- spatial data structures
43 Bounding volumes
- By analysing ray-sphere and ray-polygon intersection tests it is apparent that a ray-sphere intersection test is much cheaper to perform than a ray-polygon intersection test
- Also, for any single ray, it is likely that only a small number of surfaces actually intersect with the ray
- So, one efficiency is to enclose every polygon inside a tightly fitting bounding volume (spheres are commonly used)
- The ray is then intersected with the bounding volume first, and if no intersection is found, the surface need not be subjected to the more expensive ray-polygon test
- If an intersection is found, of course, the real ray-polygon test must then be performed as usual
44 Bounding Volumes
- The trade-off is the extra computation required to generate the bounding volumes, plus the extra storage required (in the case of a sphere this is an (x, y, z) centre and a real radius), versus the reduction in computation for the ray-intersection tests (a quick-reject test is sketched below)
- For a complex scene this is almost always worth doing
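For spheres the quick-reject test is cheap; a conservative Python sketch (unit direction D assumed), where a false positive merely triggers the exact test:

    def may_hit_bounding_sphere(O, D, C, r):
        # Returns False only when the ray (origin O, unit direction D)
        # certainly misses the bounding sphere (centre C, radius r).
        oc = tuple(c - o for c, o in zip(C, O))          # vector from O to C
        dist2 = sum(v * v for v in oc)                   # |OC|^2
        t_closest = sum(v * d for v, d in zip(oc, D))    # projection of OC on D
        if dist2 > r * r and t_closest < 0.0:
            return False                                 # sphere entirely behind the ray
        # Squared distance from C to the ray's supporting line.
        return dist2 - t_closest * t_closest <= r * r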
45 Bounding Volumes
- In some cases, bounding volume hierarchies are employed: here the whole scene and all the objects in it are enclosed by a root bounding volume
- If the ray intersects the root bounding volume, then the child bounding volumes are intersected, and so on until individual polygons are intersected
46 Bounding Volumes
[Figure: a ray passing through a scene containing objects A, B and C, each enclosed in its own bounding volume]
The ray does not intersect the bounding volume surrounding A or B, and therefore need not be intersected with the real geometry of A or B. The ray does intersect the bounding volume of C, and is then intersected with C's geometry as usual.
47 Space-Subdivision Methods
- Another way to reduce intersection calculations is to use space-subdivision methods
- We can enclose a scene within a cube and then successively subdivide the cube until each subregion (cell) contains no more than a preset maximum number of surfaces (a recursive sketch follows)
- We then trace rays through the individual cells of the cube and only perform intersection tests within those cells containing surfaces
- The first object surface intersected by a ray is the visible surface for that ray
- Trade-off: cell size vs number of surfaces per cell. If the maximum number of surfaces per cell is too low, cell size can become so small that much of the saving in reduced intersection tests goes into cell-traversal processing
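A recursive sketch of the subdivision step described above, using an octree (eight children per cell); `intersects_box` is a hypothetical per-surface predicate, and the depth cap is a practical safeguard the slides do not mention:

    def build_octree(lo, hi, surfaces, max_surfaces, depth=0, max_depth=8):
        # Stop splitting once the cell is sparse enough (or too deep).
        if len(surfaces) <= max_surfaces or depth == max_depth:
            return {"lo": lo, "hi": hi, "surfaces": surfaces, "children": []}
        mid = tuple((a + b) / 2.0 for a, b in zip(lo, hi))
        children = []
        for ix in range(2):
            for iy in range(2):
                for iz in range(2):
                    # Child octant bounds: per axis, lower or upper half.
                    clo = tuple(m if i else l for i, l, m in
                                zip((ix, iy, iz), lo, mid))
                    chi = tuple(h if i else m for i, m, h in
                                zip((ix, iy, iz), mid, hi))
                    inside = [s for s in surfaces
                              if s.intersects_box(clo, chi)]
                    children.append(build_octree(clo, chi, inside,
                                                 max_surfaces,
                                                 depth + 1, max_depth))
        return {"lo": lo, "hi": hi, "surfaces": [], "children": children}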
48 Spatial subdivision (2D)
49 Summary
- Z-buffer shadow algorithm
- image-space techniques
- object-space techniques
- ray tracing
- ray generation
- illumination
- ray-sphere intersections
- ray-polygon intersections
- efficiencies
- bounding volumes
- spatial subdivision