1
Computer Graphics Module Review
  • CO2409 Computer Graphics
  • Week 23-24

2
Lecture Contents
  • 2D Graphics & Geometry
  • 3D Geometry & Maths
  • Rendering Pipeline Key Concepts
  • Programmable Pipeline / Shaders
  • Depth / Stencil Buffers & Shadows
  • Animation

3
Pixels & Colour
  • A computer display is made of a grid of small
    rectangular areas called pixels
  • Pixel colour is usually specified as red, green
    and blue components
  • Integers (0-255) or floats (0.0 to 1.0)
  • The RGB colour space is a cube
  • Another colour space is HLS
  • Hue: colour from the spectrum; Lightness:
    brightness of the colour; Saturation: intensity
    of the colour
  • Can be pictured as a double cone
  • More intuitive for artists
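
A minimal sketch of the two RGB representations and the conversion between them (the struct and function names are illustrative, not from the module):

    struct ColourByte  { unsigned char r, g, b; };   // 0-255 per channel
    struct ColourFloat { float r, g, b; };           // 0.0-1.0 per channel

    // Map 0-255 integer channels onto the 0.0-1.0 float range
    ColourFloat toFloat(ColourByte c)
    {
        return { c.r / 255.0f, c.g / 255.0f, c.b / 255.0f };
    }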

4
Bitmaps / Sprites / Alpha Channels
  • A bitmap is a rectangle of pixels stored
    offscreen for use in an image
  • A sprite describes a particular use of bitmaps
  • When used as distinct elements in a larger scene
  • As well as RGB colours, we can store per-pixel
    values specifying transparency
  • Alpha data, or the alpha channel, making RGBA
  • Can use alpha to blend pixels onto viewport
  • FinalColour = Alpha × SourceColour + (1 - Alpha)
    × ViewportColour
  • Alpha can also be used for alpha testing
  • Used for cutout sprites

5
Further Blending
  • Other ways of blending pixels onto the viewport
  • Multiplicative blending equation is
  • FinalColour = SourceColour × ViewportColour
  • A darkening effect, suitable for representation
    of glass, shadows, smoke etc
  • Additive blending equation is
  • FinalColour = SourceColour + ViewportColour
  • This is a lightening effect, mainly used for the
    representation of lights
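
A sketch of the three blend equations from this and the previous slide, applied per channel (the type and function names are illustrative):

    struct Colour { float r, g, b; };

    // FinalColour = Alpha * Source + (1 - Alpha) * Viewport
    Colour alphaBlend(Colour src, Colour dst, float alpha)
    {
        return { alpha * src.r + (1 - alpha) * dst.r,
                 alpha * src.g + (1 - alpha) * dst.g,
                 alpha * src.b + (1 - alpha) * dst.b };
    }

    // FinalColour = Source * Viewport (darkening: glass, shadows, smoke)
    Colour multiplicativeBlend(Colour src, Colour dst)
    {
        return { src.r * dst.r, src.g * dst.g, src.b * dst.b };
    }

    // FinalColour = Source + Viewport (lightening: lights; clamp to 1.0)
    Colour additiveBlend(Colour src, Colour dst)
    {
        auto clamp01 = [](float x) { return x > 1.0f ? 1.0f : x; };
        return { clamp01(src.r + dst.r),
                 clamp01(src.g + dst.g),
                 clamp01(src.b + dst.b) };
    }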

6
Basic Geometry Definitions
  • In both 2D and 3D we have used these definitions
  • A vertex: a single point defined by coordinates
    on the axes of a coordinate system
  • E.g. A(10, 20)
  • An edge: a straight line joining two vertices
  • E.g. AB
  • A vector: a movement within a coordinate system
  • E.g. AB (from A to B) or V(5, 10)
  • A normal: any vector whose length is 1 (i.e. a
    normalised vector)
  • A polygon: a single closed loop of edges
  • E.g. ABC (edge from C to A implied)
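
These definitions map directly onto simple types; a minimal 2D sketch (names illustrative):

    #include <cmath>

    struct Vertex { float x, y; };   // e.g. A(10, 20)
    struct Vector { float x, y; };   // e.g. V(5, 10)

    // The vector from A to B (the movement along edge AB)
    Vector vectorAB(Vertex a, Vertex b)
    {
        return { b.x - a.x, b.y - a.y };
    }

    // Scale a vector to length 1 (normalise it)
    Vector normalise(Vector v)
    {
        float len = std::sqrt(v.x * v.x + v.y * v.y);
        return { v.x / len, v.y / len };
    }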

7
Coordinate Systems
  • A coordinate system is a particular set of
    choices for
  • The location of the origin
  • The orientation and scale of the axes
  • A vertex will have different coordinates in two
    different coordinate systems
  • The viewport is a particular coordinate system
    that corresponds to the visible display area

8
Rendering
  • Rendering is the process of converting geometry
    into pixels
  • In general rendering is a two stage process
  • Convert the geometry into 2D viewport space
    (geometry transformation / vertex processing)
  • Set the colour of the pixels corresponding to
    this converted geometry (rasterising / pixel
    processing)
  • Render 2D lines by stepping through the pixels
    (a sketch follows this list)
  • Render polygons with multiple lines
  • Filling polygons pixel-by-pixel is beyond the
    scope of the module
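
A minimal DDA-style sketch of stepping through the pixels of a 2D line; setPixel is an assumed helper, not a real API call:

    #include <cmath>

    void setPixel(int x, int y);   // assumed: writes one viewport pixel

    // Step from (x0,y0) to (x1,y1), one pixel at a time along the
    // longer axis, interpolating the other coordinate
    void drawLine(float x0, float y0, float x1, float y1)
    {
        float dx = x1 - x0, dy = y1 - y0;
        int steps = static_cast<int>(std::fmax(std::fabs(dx), std::fabs(dy)));
        for (int i = 0; i <= steps; ++i)
        {
            float t = steps ? static_cast<float>(i) / steps : 0.0f;
            setPixel(static_cast<int>(std::lround(x0 + t * dx)),
                     static_cast<int>(std::lround(y0 + t * dy)));
        }
    }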

9
Maths/C++ for Graphics Apps
  • Be aware of numeric limitations in C++, e.g.
  • int limits can be exceeded
  • float / double have limited precision
  • Repeated calculations with float can build up
    errors
  • C++ automatically converts between numeric types,
    issuing warnings when it does
  • Don't ignore these warnings - the conversion may
    not be what is required
  • Several math functions used for graphics
  • Max, min, remainders, modulus / absolute value,
    powers, cos, sin etc.
  • Know the library functions used
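
A small illustration of two of these pitfalls (the int limit, and accumulated float error):

    #include <cstdio>
    #include <climits>

    int main()
    {
        // int limits can be exceeded: adding 1 to INT_MAX overflows
        std::printf("INT_MAX = %d\n", INT_MAX);

        // Repeated float calculations build up error: 0.1 has no
        // exact binary representation
        float sum = 0.0f;
        for (int i = 0; i < 1000; ++i)
            sum += 0.1f;
        std::printf("sum = %f (close to, but not exactly, 100)\n", sum);
    }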

10
3D Geometry - Meshes / Normals
  • A mesh is a set of polygons making a 3D object
  • A mesh is usually defined together with a set of
    face and/or vertex normals
  • A face normal is a normalised vector at right
    angles to a polygon
  • Can be used to specify the plane of the polygon
  • A vertex normal can be the average of all the
    face normals of the polygons containing that
    vertex
  • Or a vertex can have multiple normals, for sharp
    edges
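
A sketch of computing a face normal as the normalised cross product of two edges of a triangle (names illustrative):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 subtract(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

    Vec3 cross(Vec3 a, Vec3 b)
    {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }

    Vec3 normalise(Vec3 v)
    {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return { v.x / len, v.y / len, v.z / len };
    }

    // Face normal of triangle (a, b, c): at right angles to both edges
    Vec3 faceNormal(Vec3 a, Vec3 b, Vec3 c)
    {
        return normalise(cross(subtract(b, a), subtract(c, a)));
    }

    // A vertex normal: normalise the sum of the face normals of
    // the polygons that contain the vertex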

11
Matrices
  • A matrix (plural matrices) is a rectangular table
    of numbers
  • They have special rules of arithmetic
  • A coordinate system matrix is used to represent a
    model's position/orientation
  • Transformation matrices are used to convert
    between spaces, or move/orient models
  • E.g. the world matrix converts from model space
    to world space
  • Basic transforms: translation, rotation, scale
  • Of central importance to 3D graphics
  • There will always be exam questions on matrices
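
A minimal sketch of transforming a point by a 4x4 matrix, using the row-vector convention DirectX uses (translation in the bottom row); names illustrative:

    struct Vec3   { float x, y, z; };
    struct Matrix { float m[4][4]; };   // row-major 4x4

    // Treat the point as (x, y, z, 1) and multiply by the matrix
    Vec3 transformPoint(Vec3 p, const Matrix& M)
    {
        return { p.x * M.m[0][0] + p.y * M.m[1][0] + p.z * M.m[2][0] + M.m[3][0],
                 p.x * M.m[0][1] + p.y * M.m[1][1] + p.z * M.m[2][1] + M.m[3][1],
                 p.x * M.m[0][2] + p.y * M.m[1][2] + p.z * M.m[2][2] + M.m[3][2] };
    }

    // Translation matrix: identity with the offset in the bottom row
    Matrix translation(float tx, float ty, float tz)
    {
        return { { { 1,  0,  0,  0 },
                   { 0,  1,  0,  0 },
                   { 0,  0,  1,  0 },
                   { tx, ty, tz, 1 } } };
    }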

12
DirectX / Rendering Pipeline
  • Graphics APIs perform a pipeline of operations
  • This is the DirectX pipeline
  • Input 3D model geometry data
  • Geometry is stored as lists of vertex data
  • Customised depending on pipeline usage
  • Pipeline process
  • Convert to world then viewport space, applying
    lighting
  • Scan through resultant 2D polygons, one pixel at
    a time
  • Render pixels using light colours, textures and
    blending

13
World Matrix
  • Mesh vertices are stored in model space
  • The local space for the mesh with a convenient
    origin and orientation
  • For each model we store a world matrix that
    transforms the model geometry into world space
  • This defines the position and orientation of the
    model
  • Has a special form containing the local axes of
    the model
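
A sketch of that special form: the first three rows hold the model's local axes (expressed in world space) and the fourth row holds its position (types as in the matrices sketch above):

    struct Vec3   { float x, y, z; };
    struct Matrix { float m[4][4]; };

    // Build a world matrix from the model's local axes and position
    Matrix worldMatrix(Vec3 xAxis, Vec3 yAxis, Vec3 zAxis, Vec3 pos)
    {
        return { { { xAxis.x, xAxis.y, xAxis.z, 0 },    // local X axis
                   { yAxis.x, yAxis.y, yAxis.z, 0 },    // local Y axis
                   { zAxis.x, zAxis.y, zAxis.z, 0 },    // local Z axis (facing)
                   { pos.x,   pos.y,   pos.z,   1 } } };// model position
    }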

14
View Matrix
  • For each camera we store a view matrix that
    transforms the world space geometry into camera
    space
  • Camera space defines the world as seen by the
    camera
  • X right, Y up and Z in the direction it is facing
  • For technical reasons this matrix is actually the
    inverse of a normal world matrix
  • But it has a similar form and can be used in a
    similar way

15
Projection Matrix
  • Cameras also have a second matrix, the projection
    matrix
  • Defining how the camera space geometry is
    projected into 2D
  • Defines the field of view and viewport distance
    of the camera
  • Applying this matrix flattens the camera space
    geometry into 2D viewport space
  • The projected 2D geometry needs to be
    scaled/offset into viewport pixel coordinates
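
A minimal sketch of the projection step: scale by the field of view, divide by Z to flatten into 2D, then scale/offset into pixel coordinates (aspect ratio omitted for brevity; names illustrative):

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Vec2 { float x, y; };

    // Perspective-project a camera-space point into viewport pixels.
    // fovY is the vertical field of view in radians.
    Vec2 projectToViewport(Vec3 p, float fovY, float width, float height)
    {
        float scale = 1.0f / std::tan(fovY * 0.5f);   // field-of-view scaling
        float px = (p.x * scale) / p.z;               // flatten to -1..1 range
        float py = (p.y * scale) / p.z;
        // Scale/offset from the -1..1 range into pixel coordinates (Y flipped)
        return { (px + 1.0f) * 0.5f * width,
                 (1.0f - py) * 0.5f * height };
    }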

16
Lighting / Shading
  • Geometry colour can come from
  • Static vertex or face colours and/or dynamic
    lighting
  • Two shading modes used
  • Flat & Gouraud
  • 3 basic light types
  • Directional, Point and Spot
  • Lighting is calculated by combining
  • Ambient light: global background illumination
  • Diffuse light: direct lighting
  • Specular light: reflection of the light source
    (highlights)
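
A minimal sketch of combining the three terms for a single light, using the common Blinn form for the highlight; all vectors are assumed normalised and the names are illustrative:

    #include <cmath>
    #include <algorithm>

    struct Vec3 { float x, y, z; };

    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Intensity = Ambient + Diffuse * max(N.L, 0) + Specular * max(N.H, 0)^power
    // halfway = normalised (light direction + view direction)
    float lightIntensity(Vec3 normal, Vec3 toLight, Vec3 halfway,
                         float ambient, float diffuse, float specular, float power)
    {
        float d = std::max(dot(normal, toLight), 0.0f);                  // diffuse
        float s = std::pow(std::max(dot(normal, halfway), 0.0f), power); // highlight
        return ambient + diffuse * d + specular * s;
    }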

17
Textures
  • A texture is a bitmap wrapped around a model
  • The wrapping is specified by assigning a texture
    coordinate (UV) to each vertex in the geometry
  • This is texture mapping
  • The UVs for the texture range from 0-1
  • UVs outside this range will be wrapped, mirrored,
    etc. depending on the texture addressing mode
  • Each pixel in the bitmap appears as a square on
    the geometry called a texel
  • Textures and texels can be smoothed using texture
    filtering and mip-mapping
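
A sketch of nearest-point sampling with the wrap addressing mode (the texture layout and names are illustrative):

    #include <cmath>

    struct Colour  { float r, g, b; };
    struct Texture { int width, height; const Colour* texels; };  // row-major

    // Wrap addressing: UVs outside 0-1 repeat the texture
    Colour sampleWrap(const Texture& t, float u, float v)
    {
        u -= std::floor(u);                                 // wrap into 0-1
        v -= std::floor(v);
        int x = static_cast<int>(u * t.width)  % t.width;   // nearest texel
        int y = static_cast<int>(v * t.height) % t.height;
        return t.texels[y * t.width + x];
    }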

18
Vertex / Index Data
  • Customised vertex data is stored in vertex
    buffers
  • Coordinate (always), normal, UVs, colour etc.
  • Can use vertex data alone to store geometry
  • Each triplet of vertices is a triangle (triangle
    list)
  • More efficient to use an additional index buffer
  • Store the set of unique vertices only
  • Define the triangles using triplets of indices
  • Can also use triangle strips
  • First triplet defines the first triangle
  • Each further vertex/index is used with the
    previous two to form a further triangle
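
A small illustration of the saving: a quad (two triangles) needs 6 vertices as a plain triangle list, but only the 4 unique vertices plus 6 small indices when indexed:

    struct Vertex { float x, y, z; };

    // Four unique vertices of a quad, stored once in the vertex buffer
    Vertex vertexBuffer[4] = { { 0, 0, 0 }, { 1, 0, 0 },
                               { 1, 1, 0 }, { 0, 1, 0 } };

    // Index buffer: two triplets of indices define the two triangles
    unsigned short indexBuffer[6] = { 0, 1, 2,    // first triangle
                                      0, 2, 3 };  // second triangle

    // As a triangle strip the same quad needs only 4 indices:
    // { 0, 1, 3, 2 } - each further index forms a triangle with
    // the previous two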

19
Programmable Pipeline / Shaders
  • Fixed pipeline programs have limited
    functionality
  • Three pipeline stages can be programmed directly
  • The vertex, geometry and pixel processing stages
  • We did not look at programmable geometry
    processing
  • Programs are usually written in a high-level
    shading language (HLSL)
  • Called shaders - the vertex shader and the pixel
    shader
  • Write a shader for every customised effect needed
  • Shaders are loaded as needed

20
Vertex Shaders
  • We define shaders in terms of their inputs (from
    previous stages) and outputs (to later stages)
  • Shaders have a basic usage, but are otherwise
    flexible
  • Vertex shaders operate on each vertex in the
    original 3D geometry. Their basic usage is to
  • Transform and project the vertex into viewport
    space
  • Perhaps apply animation or lighting to the vertex
  • At a minimum, a vertex shader expects vertex
    coordinates as input, but may have other input
    too
  • Normals, UVs, vertex colours, etc.
  • A vertex shader must at least output a viewport
    position for the current vertex
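
A CPU-side sketch of a vertex shader's basic job (real shaders are written in HLSL; the types and transformPoint helper are the illustrative ones from the matrices slide):

    struct Vec3   { float x, y, z; };
    struct Matrix { float m[4][4]; };
    Vec3 transformPoint(Vec3 p, const Matrix& M);   // as sketched earlier

    struct VertexIn  { Vec3 position; Vec3 normal; };         // model-space input
    struct VertexOut { Vec3 viewportPosition; Vec3 normal; }; // minimum: a position

    // Basic usage: transform model -> world -> camera, then project
    // (the perspective divide is omitted for brevity)
    VertexOut vertexShaderCPU(VertexIn v, const Matrix& world,
                              const Matrix& view, const Matrix& projection)
    {
        Vec3 worldPos  = transformPoint(v.position, world);
        Vec3 cameraPos = transformPoint(worldPos, view);
        return { transformPoint(cameraPos, projection),  // viewport position
                 v.normal };                             // other data passed on
    }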

21
Pixel Shaders
  • Pixel Shaders operate on each pixel in the final
    2D polygons. Their basic usage is to
  • Sample (and filter) any textures applied to the
    polygon
  • Combine the texture colour with the existing
    polygon colour (from lighting and/or geometry
    colours)
  • The input for a pixel shader is usually the
    output from the vertex shader
  • A pixel shader must at least output a final pixel
    colour to be drawn/blended with the viewport
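
A matching CPU-side sketch of a pixel shader's basic job, reusing the illustrative sampleWrap helper from the textures slide:

    struct Colour  { float r, g, b; };
    struct Texture { int width, height; const Colour* texels; };
    Colour sampleWrap(const Texture& t, float u, float v);  // as sketched earlier

    // Basic usage: sample the texture and combine (here, multiply)
    // with the interpolated lit colour from the vertex shader
    Colour pixelShaderCPU(Colour litColour, const Texture& tex, float u, float v)
    {
        Colour t = sampleWrap(tex, u, v);
        return { t.r * litColour.r, t.g * litColour.g, t.b * litColour.b };
    }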

22
Advanced Shaders
  • Advanced shaders can be used to implement
    high-quality lighting and rendering techniques
  • A key technique is per-pixel lighting
  • Vertex lighting exhibits problems on large
    polygons
  • Instead, have the vertex shader pass the vertex
    position and normal on to the pixel shader
  • These are interpolated for the pixel shader,
    which then uses the normal lighting equations on
    them
  • Have covered several other shader techniques
  • Specular mapping, normal mapping, parallax
    mapping, cell shading

23
Graphics Architecture
  • The basic graphics architecture for all modern
    PCs and game consoles is similar
  • Comprised of a main system and a graphics unit
  • With one processor each (CPU & GPU)
  • Fast local RAM for each processor
  • Interface between these two systems is often slow
  • GPU is a dedicated graphics microprocessor
  • Much faster than a CPU for graphics related
    algorithms
  • GPU operates in parallel with the CPU
  • These are concurrent systems

24
Depth Buffers / Z-Buffers
  • A depth buffer is a rectangular array of depth
    values that matches the back-buffer
  • Used to sort rendered primitives in depth order
  • A z-buffer stores the viewport space Z coordinate
    of each pixel in the back buffer
  • Calculated per vertex (range 0.0 to 1.0)
  • Interpolated per-pixel
  • Pixels are only rendered if their z value is less
    than the existing value in the z-buffer
  • Z values are not distributed evenly
  • Pixels with different depths may have the same z
    value
  • Can cause inaccurate sorting and visual artefacts
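
A sketch of the per-pixel depth test (the buffers are illustrative flat arrays matching the back-buffer):

    struct Colour { float r, g, b; };

    // Write a pixel only if it is nearer than what is already there
    void depthTestedWrite(float* zBuffer, Colour* backBuffer, int index,
                          float z, Colour colour)
    {
        if (z < zBuffer[index])        // nearer than the stored depth?
        {
            zBuffer[index]    = z;     // record the new nearest depth
            backBuffer[index] = colour;
        }
    }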

25
Stencil Buffers
  • The stencil buffer is a buffer of additional
    per-pixel values associated with the z- or
    w-buffer
  • Usually 1 or 8 bits embedded in the z or w values
  • A mask controlling drawing to the back-buffer
  • Test stencil values before writing each pixel
  • Result determines whether to write to back-buffer
    and/or stencil
  • Customisable tests - a very flexible system with
    many uses

26
Rendering to Textures
  • Some special effects can be performed by
    rendering the scene into a texture
  • Rather than into the back-buffer/viewport
  • Process needs two (or more) rendering passes
  • Set up a special render target texture and render
    the scene onto it
  • Then render the scene again normally (to the back
    buffer), but with some polygons using the render
    texture
  • Quality is limited by texture size

27
Shadow Techniques
  • Basic shadows (e.g. blob shadows) are easy and
    useful
  • Advanced techniques may be static or dynamic
  • Static shadow maps are pre-calculated darkening
    textures applied over the main model textures
  • Another more recent technique is Pre-computed
    Radiance Transfer (PRT)
  • Pre-calculated light simulation resulting in
    equations to evaluate at run-time

28
Dynamic Shadow Mapping
  • Dynamic Shadow Mapping is an extension of
    render-to-texture techniques, used for shadows
  • The scene is rendered into a texture (a shadow
    map), but from the point of view of the light
  • Treat the light like a camera
  • Then the scene is rendered normally, but each
    pixel is first tested against the shadow map
  • The pixel is not lit if it is in shadow from the
    light
  • Spotlights are straightforward; point and
    directional lights are more complex
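
A sketch of the per-pixel test during the second (normal) render pass; the shadow map holds, per texel, the depth of the nearest surface seen from the light, and the small bias (an assumption here) avoids surfaces shadowing themselves:

    // (lightU, lightV) is the pixel's position projected into the
    // shadow map; depthFromLight is its distance from the light
    bool inShadow(const float* shadowMap, int mapWidth,
                  int lightU, int lightV, float depthFromLight, float bias)
    {
        // If something nearer to the light was recorded here, we are shadowed
        return depthFromLight - bias > shadowMap[lightV * mapWidth + lightU];
    }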

29
Rigid Body Animation
  • Rigid body animation concerns models made of
    several distinct parts
  • We assume that the parts form a hierarchy
  • Each part in the hierarchy has a world matrix
  • Defining its position and orientation - just like
    a model
  • Each part's matrix is stored relative to its
    parent
  • And defines the joint with the parent
  • This is called a Matrix or Transform Hierarchy
  • Can be rendered using a recursive process
  • Or the parts can be stored in a depth-first list
    and rendered using an iterative process and a
    stack
  • DirectX provides a matrix stack for this purpose
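
A sketch of the recursive version: each part's true world matrix is its parent-relative matrix combined with the parent's world matrix (multiply and renderMesh are assumed helpers):

    #include <vector>

    struct Matrix { float m[4][4]; };
    struct Part;
    Matrix multiply(const Matrix& a, const Matrix& b);          // assumed helper
    void renderMesh(const Part& part, const Matrix& world);     // assumed helper

    struct Part
    {
        Matrix relativeMatrix;        // position/orientation relative to parent
        std::vector<Part> children;
    };

    // Combine matrices down the hierarchy, rendering each part
    void renderHierarchy(const Part& part, const Matrix& parentWorld)
    {
        Matrix world = multiply(part.relativeMatrix, parentWorld);
        renderMesh(part, world);
        for (const Part& child : part.children)
            renderHierarchy(child, world);
    }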

30
Soft Body Animation
  • Soft body animation concerns models that stretch
    and flex as they animate
  • We define an independent hierarchy of bones
    assumed to underlie the geometry - the skeleton
  • Again each bone has a parent-relative world
    matrix
  • Vertices can be influenced by more than one bone
  • Each influence carries a (vertex) weight (0-1)
  • Sum of the weights for each vertex is 1
  • Linearly blend the vertex world position from
    each bone influence using these weights
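
A sketch of that linear blend (often called linear blend skinning), assuming each bone's matrix already maps a model-space vertex to its world position for that bone, i.e. the bind-pose offset is folded in; names illustrative:

    #include <vector>

    struct Vec3   { float x, y, z; };
    struct Matrix { float m[4][4]; };
    Vec3 transformPoint(Vec3 p, const Matrix& M);   // assumed helper

    struct Influence { int bone; float weight; };   // weights per vertex sum to 1

    // Blend the vertex's world position from each influencing bone
    Vec3 skinVertex(Vec3 modelPos, const std::vector<Influence>& influences,
                    const std::vector<Matrix>& boneWorldMatrices)
    {
        Vec3 result = { 0, 0, 0 };
        for (const Influence& inf : influences)
        {
            Vec3 p = transformPoint(modelPos, boneWorldMatrices[inf.bone]);
            result.x += inf.weight * p.x;
            result.y += inf.weight * p.y;
            result.z += inf.weight * p.z;
        }
        return result;
    }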