Title: What are Shadows?
1. What are Shadows?
- Does the occluder have to be opaque to have a shadow?
  - transparency (no scattering)
  - translucency (scattering)
- What about indirect light?
  - reflection
  - atmospheric scattering
  - wave properties (diffraction)
- What about volumetric or atmospheric shadowing?
  - changes in density
- Is this still a shadow?
2. Common Shadow Algorithm Restrictions
- No transparency or translucency!
  - Limited forms can sometimes be handled efficiently
  - Backwards ray-tracing has no trouble with these effects, but it is much more expensive than typical shadow algorithms
- No indirect light!
  - More sophisticated global illumination algorithms handle this at great expense (radiosity, backwards ray-tracing)
- No atmospheric effects (vacuum)!
  - No indirect scattering
  - No shadowing from density changes
- No wave properties (geometric optics)!
3. What Do We Call Shadows?
- Regions not completely visible from a light source
- Assumptions
  - Single light source
  - Finite area light sources
  - Opaque objects
- Two parts
  - Umbra: totally blocked from light
  - Penumbra: partially obscured
[Figure: an area light source casting a shadow with umbra and penumbra regions]
4. Basic Types of Light Shadows
- From more realistic to simpler:
  - area light, direct + indirect light
  - area light, direct only (SOFT SHADOWS)
  - point light, direct only (HARD or SHARP SHADOWS)
  - directional light, direct only (HARD or SHARP SHADOWS)
- Point lights are more realistic for small-scale scenes; directional light is realistic for scenes lit by sunlight (in space!)
5. Goal of Shadow Algorithms
- Ideally, for all surfaces, find the fraction of light that is received from a particular light source
- Shadow computation can be considered a global illumination problem
  - this includes ray-tracing and radiosity!
- Most common shadow algorithms are restricted to direct light and point or directional light sources
- Area light sources are usually approximated by many point lights or by filtering techniques
6. Global Shadow Component in Local Illumination Model
- Without shadows: I = Ambient_i + Diffuse_i + Specular_i
- With shadows: I = Ambient_i + Shadow_i * (Diffuse_i + Specular_i)
- Shadow_i is the fraction of light received at the surface
  - For point lights, 0 (shadowed) or 1 (lit)
  - For area lights, a value in [0,1]
- Ambient term approximates indirect light
7. What else does this say?
- Multiple lights are not really difficult (conceptually)
- Complex multi-light effects are many single-light problems summed together!
  - Superposition property of the illumination model (Bastos)
  - This works for shadows as well!
- Focus on single-source shadow computation
  - Generalization is simple, but efficiency may be improved
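The superposition idea can be sketched numerically: each light is an independent single-light problem whose contribution, including its own Shadow_i term, is summed, with ambient added once. A minimal sketch; the simple Lambertian `shade_one_light` model and its parameter values are illustrative assumptions, not from the slides.

```python
# Superposition of lights: a multi-light result is the sum of
# single-light results, plus one ambient term added separately.
def shade_one_light(n_dot_l, shadow, diffuse=0.8):
    """Direct contribution of one light at a surface point.
    shadow is Shadow_i: 0 (blocked) .. 1 (fully lit)."""
    return shadow * diffuse * max(n_dot_l, 0.0)

def shade(lights, ambient=0.1):
    """Sum per-light contributions; ambient approximates indirect light."""
    return ambient + sum(shade_one_light(nl, s) for nl, s in lights)

# Two lights computed together equal the two single-light
# problems summed (superposition):
lights = [(0.9, 1.0), (0.5, 0.0)]  # (N.L, Shadow_i) per light
total = shade(lights)
separate = 0.1 + shade_one_light(0.9, 1.0) + shade_one_light(0.5, 0.0)
assert abs(total - separate) < 1e-12
```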
8. Characteristics of Shadow Algorithms
- Light-source types
  - Directional
  - Point
  - Area
- Light transfer types
  - Direct vs. indirect
  - Opaque only
  - Transparency / translucency
  - Atmospheric effects
- Geometry types
  - Polygons
  - Higher-order surfaces
9. Characteristics of Shadow Algorithms
- Computational precision (like visibility algorithms)
  - Object precision (geometry-based, continuous)
  - Image precision (image-based, discrete)
- Computational complexity
  - Running time
  - Speedups from static viewer, lights, scene
  - Amount of user intervention (object sorting)
  - Numerical degeneracies
10. Characteristics of Shadow Algorithms
- When shadows are computed
  - During rendering of the fully-lit scene (additive)
  - After rendering of the fully-lit scene (subtractive): not correct, but fast and often good enough
- Types of shadow/object interaction
  - Between shadow-casting object and receiving object
  - Object self-shadowing
  - General shadow casting
11. Taxonomy of Shadow Algorithms
- Object-based
  - Local illumination model (Warnock69, Gouraud71, Phong75)
  - Area subdivision (Nishita74, Atherton78)
  - Planar projection (Blinn88)
  - Radiosity (Goral84, Cohen85, Nishita85)
- Image-based
  - Shadow-maps (Williams78, Hourcade85, Reeves87)
  - Projective textures (Segal92)
- Hybrid
  - Scan-line approach (Appel68, Bouknight70)
  - Ray-tracing (Appel68, Goldstein71, Whitted80, Cook84)
  - Backwards ray-tracing (Arvo86)
  - Shadow-volumes (Crow77, Bergeron86, Chin89)
- More complete surveys can be found in (Crow77, Woo90)
12. Taxonomy of Shadow Algorithms
- Focus is on the following algorithms
  - Local illumination
  - Ray-tracing
  - Planar projection
  - Shadow volumes
  - Projective textures
  - Shadow-maps
- Will briefly mention
  - Scan-line approach
  - Area subdivision
  - Backwards ray-tracing
  - Radiosity
13. Local Illumination Shadows
- Backfacing polygons are in shadow (only lit by ambient)
- Point/directional light sources only
- Partial self-shadowing
  - like backface culling is a partial visibility solution
- Very fast (often implemented in hardware)
- General surface types in almost any rendering system!
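The backface test above amounts to checking the sign of N.L: points facing away from the light fall back to ambient only. A minimal sketch, assuming unit-length 3-vectors; the function name is hypothetical.

```python
def local_shadow_term(normal, light_dir):
    """Local-illumination 'shadow': backfacing points (N.L <= 0)
    receive only ambient light. Vectors are 3-tuples; light_dir
    points from the surface toward the light."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return 0.0 if n_dot_l <= 0.0 else 1.0

# A surface facing the light is lit; one facing away is "in shadow".
assert local_shadow_term((0, 1, 0), (0, 1, 0)) == 1.0
assert local_shadow_term((0, -1, 0), (0, 1, 0)) == 0.0
```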
14. Local Illumination Shadows
- Typically not considered a shadow algorithm
- Just handles shadows of the most restrictive form
- Dramatically improves the look of other restricted algorithms
15. Local Illumination Shadows
- Properties
  - Point or directional light sources
  - Direct light
  - Opaque objects
  - All types of geometry (depends on rendering system)
  - Object precision
  - Fast, local computation (single pass)
  - Only handles limited self-shadowing
    - convenient, since many algorithms do not handle any self-shadowing
  - Computed during the normal rendering pass
  - Simplest algorithm to implement
16. Planar Projection Shadows
- Shadows cast by objects onto planar surfaces
- Brute force: project the shadow-casting objects onto the plane and draw each projected object as a shadow
  - Directional light: parallel projection
  - Point light: perspective projection
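The two projection cases can be sketched for the ground plane y = 0: a point light gives a perspective projection toward the plane, a directional light a parallel one. A minimal sketch; the plane choice and function names are illustrative assumptions.

```python
def project_to_ground(p, light):
    """Perspective projection: project point p onto the plane y = 0
    along the ray from a point light above the plane."""
    lx, ly, lz = light
    px, py, pz = p
    t = ly / (ly - py)          # parameter where the ray hits y = 0
    return (lx + t * (px - lx), 0.0, lz + t * (pz - lz))

def project_to_ground_directional(p, d):
    """Parallel projection: slide p along light direction d until y = 0."""
    px, py, pz = p
    dx, dy, dz = d
    t = py / dy                 # march along d until y = 0
    return (px - t * dx, 0.0, pz - t * dz)

# A point halfway between a light at height 4 and the ground lands
# twice as far from the point below the light:
assert project_to_ground((1, 2, 0), (0, 4, 0)) == (2.0, 0.0, 0.0)
```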
17. Planar Projection Shadows
- Not sufficient
  - co-planar polygons (Z-fighting): needs a depth bias
  - requires clipping to the relevant portion of the plane (the shadow receiver): stenciling
18. Planar Projection Shadows: better approach, subtractive strategy
- Render the scene fully lit by a single light
- For each planar shadow receiver
  - Render the receiver: stencil the pixels covered
  - Render the projected shadow casters in a shadow color with depth testing on, depth biasing (offset from the plane), modulation blending, and stenciling (to write only on the receiver and to avoid double pixel writing)
    - Receiver stencil value = 1; only write where the stencil equals 1, and change it to zero after modulating a pixel
- Texture is visible in shadow
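The stencil bookkeeping above can be simulated on a tiny framebuffer to show why it prevents double-darkening where projected casters overlap. A sketch with made-up buffer values, not actual graphics API calls.

```python
# Each receiver pixel starts with stencil 1; a shadow fragment only
# writes where stencil == 1 and clears the stencil afterward, so
# overlapping projected casters darken a pixel exactly once.
W, H = 4, 1
color = [[1.0] * W for _ in range(H)]   # fully lit receiver
stencil = [[1] * W for _ in range(H)]   # 1 = receiver pixel

def draw_shadow_fragment(x, y, shadow_factor=0.5):
    if stencil[y][x] == 1:              # stencil test
        color[y][x] *= shadow_factor    # modulation blend
        stencil[y][x] = 0               # block double writes

# Two overlapping projected casters cover pixel (1, 0):
draw_shadow_fragment(1, 0)
draw_shadow_fragment(1, 0)              # second write is rejected
assert color[0][1] == 0.5               # darkened once, not 0.25
```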
19. Planar Projection Shadows: problems with the subtractive strategy
- Called subtractive because it begins with full lighting and removes light in shadows (modulates)
- Can be more efficient than additive (avoids passes)
- Not as accurate as additive; doesn't follow the lighting model
  - Specular and diffuse components remain in shadow
  - Modulates the ambient term
  - Shadow color is chosen by the user, as opposed to the correct version
20. Planar Projection Shadows: even better approach, additive strategy
- Draw the ambient-lit shadow-receiving scene (global ambient and all lights' local ambient)
- For each light source, for each planar receiver
  - Render the receiver: stencil the pixels covered
  - Render the projected shadow casters into the stenciled receiver area: depth testing on, depth biasing, stencil the pixels covered by shadow
  - Re-render the receiver lit by the single light source (no ambient light): depth test set to EQUAL, additive blending, write only into stenciled areas on the receiver that are not in shadow
- Draw the shadow-casting scene with full lighting
21. Planar Projection Shadows: Properties
- Point or directional light sources
- Direct light
- Opaque objects (could fake transparency using subtractive)
- Polygonal shadow-casting objects, planar receivers
- Object precision
- Number of passes (L = number of lights, P = number of planar receivers)
  - subtractive: 1 fully-lit pass, L*P special passes (no lighting)
  - additive: 1 ambient-lit pass, 2*L*P receiver passes, L*P caster passes
22. Planar Projection Shadows: Properties
- Can take advantage of static components
  - static objects + lights: precompute the silhouette polygon from the light source
  - static objects + viewer: precompute the first pass over the entire scene
- Visibility from the light is handled by the user (must choose casters and receivers)
- No self-shadowing (relies on local illumination)
- Both subtractive and additive strategies presented
- Conceptually simple, surprisingly difficult to get right
  - gives the techniques needed to handle more sophisticated multi-pass methods
23. Projective Texture Shadows: What are Projective Textures?
- Texture-maps that are mapped to a surface through a projective transformation of the vertices into the texture's camera space
24. Projective Texture Shadows: How do we use them to create shadows?
- Project a modulation image of the shadow-casting objects from the light's point-of-view onto the shadow-receiving objects
[Figure: the light's point-of-view; the shadow projective texture (modulation image or light-map); the eye's point-of-view with the projective texture applied to the ground plane (self-shadowing is from another algorithm)]
25. Projective Texture Shadows: More examples
- Cast shadows from complex objects onto complex objects in only 2 passes over the shadow casters and 1 pass over the receivers (for 1 light)
- Lighting for shadowed objects is computed independently for each light source and summed into a final image
- Colored light sources: lit areas are modulated by a value of 1, and shadow areas can be any ambient modulation color
26. Ray-tracing Shadows
- Only interested in shadow-ray tracing (shadow feelers)
- For a point P in space, determine if it is in shadow with respect to a single point light source L by intersecting the line segment PL (the shadow feeler) with the environment
- If the line segment intersects an object, then P is in shadow; otherwise, point P is illuminated by light source L
[Figure: shadow feeler (segment PL) from surface point P to light L, blocked by an occluder]
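A shadow feeler is just a segment-versus-scene intersection query. A minimal sketch for sphere occluders, solving the quadratic for the segment parameter t in [0, 1]; the scene representation is an assumption for illustration.

```python
import math

def segment_hits_sphere(p, l, center, radius):
    """Shadow feeler test: does segment PL intersect the sphere?
    P is the surface point, L the point light."""
    # Solve |p + t(l - p) - center|^2 = r^2 for t in [0, 1].
    d = tuple(b - a for a, b in zip(p, l))
    m = tuple(a - c for a, c in zip(p, center))
    A = sum(x * x for x in d)
    B = 2.0 * sum(x * y for x, y in zip(d, m))
    C = sum(x * x for x in m) - radius * radius
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return False                    # feeler misses the sphere
    s = math.sqrt(disc)
    for t in ((-B - s) / (2 * A), (-B + s) / (2 * A)):
        if 0.0 <= t <= 1.0:
            return True                 # occluder between P and L
    return False

def point_shadow(p, l, occluders):
    """Shadow_i for a point light: 0 if any occluder blocks PL, else 1."""
    return 0.0 if any(segment_hits_sphere(p, l, c, r)
                      for c, r in occluders) else 1.0
```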
27. Ray-tracing Shadows
- Arguably the simplest general algorithm
- Can even handle area light sources
  - point-sample the area source: distributed ray-tracing (Cook84)
[Figure: a point light Li gives Shadow_i = 0 for a blocked point P; an area light Li sampled with 5 feelers, 2 of them unblocked, gives Shadow_i = 2/5]
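With distributed feelers, Shadow_i becomes the visible fraction of sample points on the area light. A sketch where the blocking test is passed in as a function; the stand-in `blocked` predicate is purely hypothetical, standing in for a real feeler intersection test.

```python
def area_shadow(p, light_samples, blocked):
    """Shadow_i for an area light: the fraction of sample points on
    the light that are visible from p. blocked(p, l) is any shadow
    feeler test (e.g. a ray-object intersection); it is passed in
    so the sketch stays geometry-agnostic."""
    visible = sum(0 if blocked(p, l) else 1 for l in light_samples)
    return visible / len(light_samples)

# 5 samples across an area light; a hypothetical occluder edge
# blocks the samples with x < 0.6, leaving 2 of 5 visible, so
# Shadow_i = 2/5, echoing the example above.
samples = [(x / 4.0, 10.0) for x in range(5)]
blocked = lambda p, l: l[0] < 0.6
assert area_shadow((0.0, 0.0), samples, blocked) == 2 / 5
```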
28. Ray-tracing Shadows
- Sounds great, what's the problem?
- Slow
  - Intersection tests are expensive
  - May be sped up with standard ray-tracing acceleration techniques
- Shadow feeler may incorrectly intersect the object touching P
  - Depth bias
  - Object tagging
    - Don't intersect the shadow feeler with the object touching P
    - Works only for objects not requiring self-shadowing
29. Ray-tracing Shadows
- How do we use the shadow feelers?
- 2 different rendering methods
  - Standard ray-casting with shadow feelers
  - Hardware Z-buffered rendering with shadow feelers
30. Ray-tracing Shadows: Ray-casting with shadow feelers
- For each pixel
  - Trace a ray from the eye through the pixel center
  - Compute the closest object intersection point P along the ray
  - Calculate Shadow_i for the point by performing the shadow feeler intersection test
  - Calculate the illumination at point P
31. Ray-tracing Shadows: Z-buffering with shadow feelers
- Render the scene into the depth buffer (no need to compute color)
- For each pixel, determine if it is in shadow
  - unproject the screen-space pixel point to transform it into eye space
  - perform the shadow feeler test with the light in eye space to compute Shadow_i
  - store Shadow_i for each pixel
- Light the scene using the per-pixel Shadow_i values
32. Ray-tracing Shadows
- Z-buffering with shadow feelers: how do we use per-pixel Shadow_i values to light the scene?
- Method 1: compute lighting at each pixel in software
  - Deferred shading
  - Requires object surface info (normals, materials)
  - Could use a more complex lighting model
33. Ray-tracing Shadows
- Z-buffering with shadow feelers: how do we use per-pixel Shadow_i values to light the scene?
- Method 2: use graphics hardware
  - For point lights
    - Shadow_i values are either 0 or 1
    - Use the stencil buffer: stencil values = Shadow_i values
    - Re-render the scene with the corresponding light on, using the stencil test to write only into lit pixels (stencil = 1). Use additive blending, and render the ambient-lit scene in the depth computation pass.
  - For area lights
    - Shadow_i values are continuous in [0,1]
    - Multiple passes and modulation blending
    - Pixel Contribution = Ambient_i + Shadow_i * (Diffuse_i + Specular_i)
34. Shadow-Maps: for accelerating ray-traced shadow feelers
- Previously, shadow feelers had to be intersected against all objects in the scene
- What if we knew the nearest intersection point for all rays leaving the light?
- The depth buffer of the scene rendered from a camera at the light would give us a discretized version of this
- This depth buffer is called a shadow-map
- Instead of intersecting rays with objects, we intersect the ray with the light's viewplane and look up the nearest depth value.
- If the light's depth value at this point is less than the depth to the eye-ray's nearest intersection point, then this point is in shadow!
[Figure: light-ray nearest intersection point L, eye-ray nearest intersection point E; if L is closer to the light than E, then E is in shadow]
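The comparison above reduces to a single table lookup: fetch the stored depth Dl at the point's light-space pixel and test Dl < De, with a small bias anticipating the self-shadowing problem discussed later. A sketch with a hand-made 2x2 depth buffer; the values are illustrative.

```python
# A 2x2 shadow-map: depth of the nearest surface seen from the light.
shadow_map = [[5.0, 5.0],
              [2.0, 5.0]]

def in_shadow(light_px, de, bias=1e-3):
    """De is the point's depth from the light; Dl is the stored
    nearest depth at its light-space pixel. Dl < De => something
    sits between the point and the light, so it is in shadow."""
    x, y = light_px
    dl = shadow_map[y][x]
    return dl + bias < de

# A point at depth 5.0 whose pixel stores 2.0 is occluded; a point
# at depth 5.0 whose pixel stores 5.0 is the nearest surface (lit).
assert in_shadow((0, 1), 5.0) is True
assert in_shadow((0, 0), 5.0) is False
```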
35. Shadow-Maps: for accelerating ray-traced shadow feelers
- Cool, we can really speed up ray-traced shadows now!
  - Render from the eye view to accelerate first-hit ray-casting
  - Render from the light view to store first hits from the light
- For each pixel-ray in the eye's view, we can project the first hit point into the light's view and check if anything intersects the shadow feeler with a simple table lookup!
- The shadow-map is discretized, but we can just use the nearest value.
- What are the potential problems?
36. Shadow-Maps: Problems with Ray-traced Shadow Maps
- Still too slow
  - requires many per-pixel operations
  - does not take advantage of pixel coherence in the eye view
- Still has the self-shadowing problem
  - needs a depth bias
- Discretization error
  - Using the depth value nearest to the projected point may not be sufficient
  - How can we filter the depth values? The standard way does not really make sense here.
37. Shadow-Maps: a faster way, the standard shadow-map approach
- Not normally used as a ray-tracing acceleration technique; normally used in a standard Z-buffered graphics system
- Two methods presented (Williams78)
  - Subtractive: post-processing on the final lit image (like full-scene image warping)
  - Additive: as implemented in graphics hardware (OpenGL extension on InfiniteReality)
38. Shadow-Maps: illustration of the basic idea
[Figure: shadow-map from light 1, shadow-map from light 2, and the final view]
39. Shadow-Maps: Subtractive
- Render the fully-lit scene
- Create the shadow-map: render depth from the light's view
- For each pixel in the final image
  - Project the point at the pixel from eye screen-space into light screen-space (keep the eye-point depth De)
  - Look up the light depth value Dl
  - Compare depth values: if Dl < De, the eye-point is in shadow
  - Modulate if the point is in shadow
40. Shadow-Maps: Subtractive advantages
- Constant-time shadow computation!
  - just like full-scene image warping: eye-view pixels are warped to the light view and then a depth comparison is performed
- Only a 2-pass algorithm
  - 1 eye pass, 1 light pass (and 1 constant-time image-warping pass)
- Deferred shading (for shadow computation)
- Zhang98 presents a similar approach using a forward mapping (from light to eye, reversing this whole process)
41. Shadow-Maps: Subtractive disadvantages
- Not as accurate as additive (same reasons)
  - Specular and diffuse components remain in shadow
  - Modulates the ambient term
- Has the standard shadow-map problems
  - Self-shadowing: depth bias needed
  - Depth sampling error: how do we accurately reconstruct depth values from a point sampling?
42. Shadow-Maps: Additive
- Create the shadow-map: render depth from the light's view
- Use the shadow-map as a projective texture!
- While scan-converting triangles
  - apply the shadow-map projective texture
  - instead of modulating with the looked-up depth value Dl, compare it against the r-value (De) of the transformed point on the triangle
  - Compare De to Dl: if Dl < De, the eye-point is in shadow
- Basically, we are scan-converting the triangle in both eye and light space simultaneously and performing a depth comparison in light space against the previously stored depth values
43. Shadow-Maps: Additive advantages
- Easily implemented in hardware
  - only a slight change to standard perspectively-correct texture-mapping hardware: add an r-component compare op
- Fastest, most general implementation to date!
- As fast as projective textures, but general!
44. Shadow-Maps: Additive disadvantages
- Computes shadows on a per-primitive basis
  - All pixels covered by all primitives must go through the shadowing and lighting operations whether visible or not (no deferred shading)
- Still has the standard shadow-mapping problems
  - Self-shadowing
  - Depth sampling error
45. Shadow-Maps: Solving the main problems, self-shadowing
- Use a depth bias during the transformation into light space
  - Add a z-translation towards the light source after transforming from eye space to light space, OR
  - Add a z-translation towards the eye before transforming into light space, OR
  - Translate the eye-space point along the surface normal before transforming into light space
46. Shadow-Maps
- Solving the main problems: depth sampling
  - Could just use the nearest sample, but how would you anti-alias depth?
47. Shadow-Maps: Depth sampling, normal filtering
- Averaging depth doesn't really make sense (unrelated to the surface, especially at shadow boundaries!)
- Still a binary result (no anti-aliased, softer shadows)
48. Shadow-Maps: Depth sampling, percentage-closer filtering (Reeves87)
- Could average the binary results of all depth-map pixels covered
- Soft, anti-aliased shadows
- Very similar to point-sampling across an area light source in ray-traced shadow computation
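Percentage-closer filtering can be sketched directly: compare first, then average the binary results, never average the depths themselves. The region and depth values below are made up for illustration.

```python
def pcf_shadow(shadow_map, region, de, bias=1e-3):
    """Percentage-closer filtering: compare De against EACH depth
    sample in the covered region, then average the binary results.
    Averaging depths first would invent depths that belong to no
    surface, which is exactly wrong at shadow boundaries."""
    lit = [0.0 if shadow_map[y][x] + bias < de else 1.0
           for (x, y) in region]
    return sum(lit) / len(lit)         # fraction lit, in [0, 1]

# 2x2 region straddling a shadow boundary: two texels store a
# blocker at depth 2, two store the receiver itself at depth 5.
sm = [[2.0, 5.0],
      [2.0, 5.0]]
region = [(0, 0), (1, 0), (0, 1), (1, 1)]
assert pcf_shadow(sm, region, de=5.0) == 0.5   # soft boundary value
```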
49. Shadow-Maps: How do you choose the samples?
- The quadrilateral represents the area covered by a pixel's projection onto a polygon after being projected into the shadow-map
50. Scanline Algorithms: classic by Bouknight and Kelley
- Project the edges of shadow-casting triangles onto the receivers
- Use a shadow-volume-like parity test during scanline rasterization
51. Scanline Algorithms
- Project all other polygons onto each polygon being considered for the current scan-line.
  - The projection is from the light source (point or parallel)
- Perform span determination for the visible surface.
- Process the spans again using the projected shadows.
  - If the object span and the shadow span (for the closest object) do not overlap, draw the object span as normal.
    - Unshadowed part is visible, shadowed part is hidden.
  - If the shadow span completely overlaps the object span, draw the object span attenuated.
  - Otherwise, split the span into two or more subspans having either no shadow or complete shadow.
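The three span cases above can be sketched as a small interval-splitting routine; the (start, end) pixel intervals and function name are hypothetical.

```python
def shade_span(obj, shadow):
    """Split a visible object span against a shadow span, returning
    (sub-span, shadowed?) pieces, mirroring the three cases above.
    Spans are (start, end) intervals with start < end."""
    o0, o1 = obj
    s0, s1 = shadow
    lo, hi = max(o0, s0), min(o1, s1)
    if lo >= hi:                       # no overlap: draw as normal
        return [((o0, o1), False)]
    if s0 <= o0 and o1 <= s1:          # complete overlap: attenuate
        return [((o0, o1), True)]
    pieces = []                        # partial: split into subspans
    if o0 < lo:
        pieces.append(((o0, lo), False))
    pieces.append(((lo, hi), True))    # fully shadowed subspan
    if hi < o1:
        pieces.append(((hi, o1), False))
    return pieces

assert shade_span((0, 10), (20, 30)) == [((0, 10), False)]
assert shade_span((0, 10), (-5, 15)) == [((0, 10), True)]
assert shade_span((0, 10), (4, 6)) == [((0, 4), False),
                                       ((4, 6), True),
                                       ((6, 10), False)]
```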
52. Scanline Algorithms
- Worst-case performance is O(n(n-1)).
- Simplify the process by pre-computing the light projections.
  - Store them as a matrix, with an entry a_ij indicating whether polygon i occludes polygon j with respect to the light source.
  - Now, when scan-converting polygon j, we only need to consider those polygons with entries in j's column.
53. Area-Subdivision Algorithms: based on Atherton-Weiler clipping
- Find the actual visible polygon fragments (geometrically) through a generalized clipping algorithm
- Create a model composed of shadowed and lit polygons
- Render them as surface-detail polygons
54. Area-Subdivision Algorithms: based on Atherton-Weiler clipping
55. Multiple Light Sources: for any single-light algorithm
- Accumulate all fully-lit single-light images into a single image through a summing blend op (standard accumulation buffer or blending operations)
- The global ambient-lit scene should be added in separately
- Very easy to implement
- Could be inefficient for some algorithms
- Use the higher accuracy of the accumulation buffer (usually 12 bits per color component)
56. Area Light Sources: for any point-light algorithm
- Soft or fuzzy shadows (penumbra)
- Some algorithms have natural support for these
- For restricted algorithms, we can always sample the area light source with many point light sources: jitter and accumulate
- Very expensive: many high-quality passes to obtain something fuzzy
- Not really feasible in most interactive applications
- Convolution and image-based methods are usually more efficient here
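The jitter-and-accumulate scheme can be sketched as: draw jittered point samples over the area light, render one hard-shadow pass per sample, and average the passes (an accumulation buffer). The sample counts and image values below are illustrative assumptions.

```python
import random

def jittered_light_samples(center, radius, n, seed=0):
    """Jittered point samples over a square area light (side
    2*radius) centered at `center`; each sample stands in for one
    point-light shadow pass to be accumulated."""
    rng = random.Random(seed)
    cx, cy = center
    return [(cx + rng.uniform(-radius, radius),
             cy + rng.uniform(-radius, radius)) for _ in range(n)]

def accumulate_passes(images):
    """Average the single-point-light images (accumulation buffer)."""
    n = len(images)
    return [sum(px) / n for px in zip(*images)]

# 4 passes over a 2-pixel image: pixel 0 is lit in every pass,
# pixel 1 in only one pass, giving a soft penumbra value of 0.25.
passes = [[1.0, 1.0], [1.0, 0.0], [1.0, 0.0], [1.0, 0.0]]
assert accumulate_passes(passes) == [1.0, 0.25]
```

Note how many full-quality passes it takes to get one soft value per pixel, which is exactly why this is rarely feasible in interactive applications.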