Emerging Technologies for Games: Cameras / Picking

1
Emerging Technologies for Games: Cameras / Picking
  • CO3303
  • Week 8

2
Today's Lecture
  • World / View Matrices Recap
  • Projection Maths
  • Pixel from World-Space Vertex
  • World Space Ray from Pixel
  • Note: formulae from this lecture are not
    examinable; however, discussion of the general
    process may be expected

3
Model Space
  • An entity's mesh is defined in its own local
    coordinate system - model space
  • Each entity instance is positioned with a matrix
    transforming it from model space into world space
  • This matrix is called the World Matrix

4
World to Camera Space
  • Next we consider how the entities are positioned
    and oriented relative to the camera
  • Convert the models from world space into camera
    space
  • The scene as viewed from the camera's position
  • This transformation is done with the view matrix

5
Camera to Viewport Space
  • Finally, project the camera space entities into 2D
  • The 3D vertices are projected towards the camera
    position
  • Assume the viewport is an actual rectangle in the
    scene
  • Calculate where these rays hit the viewport to get
    2D geometry
  • This is done (in part) with the projection matrix

6
Projection Details
  • Cameras have two settings:
  • Field of view (fov)
  • Near clipping distance (zn)
  • The near clip distance is from the camera to the
    viewport
  • i.e. where the geometry slices through the viewport
  • fov is similar to choosing a wide angle or zoom
    lens
  • fov can be different for width and height: fovx and
    fovy

7
Projecting a Vertex
  • Consider the projection of a single 3D vertex (x,
    y, z) to 2D
  • Need viewport coordinates (xv, yv)
  • yv not shown on diagram
  • Calculate using similar triangles
  • x / z = xv / zn, so xv = x·zn / z
  • similarly, yv = y·zn / z
  • This is the perspective divide
  • Now have 2D coordinates, but still in camera
    space units

8
Converting to Viewport Space
  • Calculate the actual viewport dimensions (in
    camera space)
  • tan(fovx / 2) = (wv / 2) / zn
  • so wv = 2·zn·tan(fovx / 2)
  • similarly, hv = 2·zn·tan(fovy / 2)
  • Use this to map camera space 2D coordinates (xv,
    yv) to viewport space (range -1 to 1)
  • xn = xv / (wv / 2) = 2xv / wv
  • yn = yv / (hv / 2) = 2yv / hv

9
Converting to Pixels
  • Finally map the coordinates (xn, yn) in range -1
    to 1 to final pixel coordinates (xp, yp)
  • If the viewport width and height (in pixels) are wp
    and hp
  • then xp = wp·(xn + 1) / 2
  • and yp = hp·(1 - yn) / 2
  • Map to the 0 → 1 range then scale to viewport pixel
    dimensions
  • The second formula also flips the Y axis (viewport Y
    points down)

10
Using the Projection Matrix
  • The projection matrix is structured to perform
    part of the above calculation
  • This is a perspective projection; other types are
    possible
  • When applied to a camera space point (x, y, z)
  • Note the original z value becomes the 4th (w)
    component
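One common perspective matrix consistent with the description above (a sketch assuming the Direct3D-style row-vector convention; the far clip distance zf is an assumption, as the slides only mention zn):

```python
import math
import numpy as np

def perspective_matrix(fovx, fovy, zn, zf):
    """Row-vector perspective projection: multiplying [x, y, z, 1] by
    this matrix copies camera-space z into w, and after the divide the
    depth z/w lies in the 0..1 range used by the depth buffer.
    Note: zf (far clip distance) is an assumed extra parameter."""
    return np.array([
        [1 / math.tan(fovx / 2), 0, 0, 0],
        [0, 1 / math.tan(fovy / 2), 0, 0],
        [0, 0, zf / (zf - zn), 1],
        [0, 0, -zn * zf / (zf - zn), 0],
    ])
```

A point on the near plane comes out with depth 0 and one on the far plane with depth 1 after the divide.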

11
After the Projection Matrix
  • The perspective divide occurs after the
    projection matrix
  • Some simplifications: cancelling of 2·zn terms
    between here and the tan() terms in the matrix
  • xn = xv / w, yn = yv / w, dn = d / w
  • The d value (in the z component) is also divided,
    leaving it in the range 0 to 1
  • This is the value used for the depth buffer
  • The 4th (w) component is unchanged, and is the
    original camera space z
  • The scaling to viewport pixels completes the
    process
  • xp = wp·(xn + 1) / 2, yp = hp·(1 - yn) / 2
  • The steps on this slide are not programmable,
    occurring outside the shaders

12
Picking
  • Sometimes we need to manually perform the
    projection process
  • To find the pixel for a particular 3D point
  • E.g. To draw text/sprites in same place as a 3D
    model
  • Or perform the process in reverse
  • Each 2D pixel corresponds to a ray in 3D space
    (refer to the projection diagram)
  • Finding and working with this ray is called
    picking
  • E.g. to find the 3D object under the mouse
  • The algorithms for both follow; they are derived
    from the previous slides

13
Pixel from World-Space Vertex
  • Start with world space vertex P
  • Transform this vertex by combined view /
    projection matrix to give Q
  • If Q.z < 0 then the vertex is behind us; discard
  • Otherwise do the perspective divide:
  • Q.x = Q.x / Q.z and Q.y = Q.y / Q.z
  • Finally, scale to pixel coordinates X, Y
  • X = (Q.x + 1)·(ViewportWidth / 2)
  • Y = (1 - Q.y)·(ViewportHeight / 2)
  • Use to draw text/sprites in same place as 3D
    entity
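The algorithm on this slide can be sketched directly (a Python sketch, assuming a row-vector convention in which the projection matrix copies camera-space z into w, so the slide's test and divide on Q.z become operations on Q.w):

```python
import numpy as np

def pixel_from_world_vertex(p_world, view_proj, vp_width, vp_height):
    """World-space vertex -> pixel. view_proj is a combined 4x4
    view/projection matrix (row-vector convention assumed)."""
    q = np.append(p_world, 1.0) @ view_proj
    # With a matrix that copies camera-space z into w (slide 10),
    # a negative w means the vertex is behind the camera: discard.
    if q[3] < 0:
        return None
    q = q / q[3]  # perspective divide
    # Scale to pixel coordinates, flipping Y (viewport Y points down).
    x = (q[0] + 1) * (vp_width / 2)
    y = (1 - q[1]) * (vp_height / 2)
    return x, y
```

The result can be fed straight to 2D text or sprite drawing so it tracks the 3D entity on screen.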

14
World-Space Ray From Pixel 1
  • Initial pixel (X, Y); first convert to a point Q in
    the range -1 → 1
  • Q.x = (2·X / ViewportWidth) - 1
  • Q.y = 1 - (2·Y / ViewportHeight)
  • Set Q.z = D (the near clipping distance)
  • The resulting vertex will be exactly on the near
    clip plane
  • Calculate the viewport size in camera space:
  • WV = 2·D·tan(FOVX / 2)
  • HV = 2·D·tan(FOVY / 2)
  • If FOVY is not available:
  • HV = WV·ViewportHeight / ViewportWidth

15
World-Space Ray From Pixel 2
  • Convert Q into camera space
  • Q.x *= WV / 2
  • Q.y *= HV / 2
  • Finally transform by the inverse of the view
    matrix to get a point in world space
  • Then cast a ray from camera to this point
  • Can use this 3D ray to detect the model at the
    pixel
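A sketch of the full picking calculation from the last two slides (names are illustrative; the camera's world position is passed in alongside the inverse view matrix, and FOVY is derived from the aspect ratio as described above):

```python
import math
import numpy as np

def ray_from_pixel(xp, yp, zn, fovx, vp_width, vp_height, inv_view, cam_pos):
    """Return (origin, unit direction) of the world-space ray through
    pixel (xp, yp). inv_view is the inverse of the view matrix
    (row-vector convention assumed); cam_pos is the camera's world
    position."""
    # Pixel to the -1..1 range (undoing the Y flip).
    qx = (2 * xp / vp_width) - 1
    qy = 1 - (2 * yp / vp_height)
    # Viewport size in camera space; FOVY via the aspect ratio.
    wv = 2 * zn * math.tan(fovx / 2)
    hv = wv * vp_height / vp_width
    # Normalised -> camera space, sitting exactly on the near plane.
    qx *= wv / 2
    qy *= hv / 2
    # Transform by the inverse view matrix to get a world-space point.
    point_world = np.array([qx, qy, zn, 1.0]) @ inv_view
    direction = point_world[:3] - cam_pos
    return cam_pos, direction / np.linalg.norm(direction)
```

With an identity view matrix (camera at the origin looking down +z), the centre pixel of an 800×600 viewport yields the ray from (0, 0, 0) along (0, 0, 1); the ray can then be intersected with scene models to find the one under the mouse.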