Connectivity and Distance - PowerPoint PPT Presentation

1
Connectivity and Distance
  • Is one pixel connected to another pixel?
  • Can one pixel be reached from another by tracing
    a path between connected pixels?
  • Both questions reduce to one:
  • How do we decide whether two adjacent pixels are
    connected?

2
Connectivity and Distance
  • Eight connected neighbours

[Figure: the eight connected neighbours of a pixel, numbered 0 to 7
around the centre pixel]
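The eight-neighbour relationship can be sketched in a few lines of Python. The function name and the 0-7 numbering convention are illustrative assumptions, not taken from the slides.

```python
# Offsets (row, col) of the eight connected neighbours of a pixel.
# The labelling order around the centre is an assumed convention.
NEIGHBOUR_OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
                     (0, -1), (1, -1), (1, 0), (1, 1)]

def eight_neighbours(r, c, height, width):
    """Return the in-bounds 8-connected neighbours of pixel (r, c)."""
    return [(r + dr, c + dc)
            for dr, dc in NEIGHBOUR_OFFSETS
            if 0 <= r + dr < height and 0 <= c + dc < width]
```

An interior pixel has all eight neighbours; a corner pixel has only three.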
3
Connectivity and Distance
  • The Euclidean distance between two pixels, p1 and
    p2, follows from Pythagoras' theorem:
  • d(p1, p2) = sqrt((x1 - x2)^2 + (y1 - y2)^2)
  • where (x1, y1) and (x2, y2) are the coordinates
    of p1 and p2 respectively.

4
Connectivity and Distance
  • The city block distance approximates the distance
    between two pixels, p1 and p2, as
  • d(p1, p2) = |x1 - x2| + |y1 - y2|
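The two distance measures can be written directly as small Python functions; the function names are illustrative.

```python
import math

def euclidean_distance(p1, p2):
    """Pythagoras: sqrt((x1 - x2)^2 + (y1 - y2)^2)."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def city_block_distance(p1, p2):
    """City block (Manhattan) distance: |x1 - x2| + |y1 - y2|."""
    return abs(p1[0] - p2[0]) + abs(p1[1] - p2[1])
```

For the pixels (0, 0) and (3, 4), the Euclidean distance is 5 while the city block approximation gives 7.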

5
Colour Representation Models
  • RGB - Red Green Blue
  • HSI - Hue Saturation Intensity
  • YUV
  • YCbCr

6
Trichromatic Theory
  • Trichromatic theory: most colours can be produced
    by mixing three properly chosen primary colours.
  • Let Cj, j = 1, 2, 3, represent the colours of
    three primary colour sources and C a given
    colour. Then
  • C = T1·C1 + T2·C2 + T3·C3
  • where the Tj are the amounts of the three primary
    colours required to match the colour C.
  • Some of the Tj may be negative.

7
Tristimulus Values
  • To make the colour specification independent of
    the absolute energy of the primary colours, these
    values should be normalized so that
  • Tj = 1, for j = 1, 2, 3
  • on a reference white colour with unit energy.

8
RGB Representation
  • The most popular primary set for illuminating
    light sources is RGB.
  • Each colour is represented by the intensity of
    red, green, and blue.
  • The intensity values R, G, B are usually coded
    with 8 bits, in the range [0, 255].

9
HSI
  • Human perception is more sensitive to brightness
    (luminance) than to colour (chrominance)
    information.
  • Several colour coordinate systems are used to
    separate brightness intensity from the
    chrominance (hue and saturation).

10
Chromaticity Values
  • The tristimulus representation mixes the
    luminance and chrominance attributes of a colour.
  • To measure the chrominance information only, the
    chromaticity coordinates are defined as
  • tj = Tj / (T1 + T2 + T3)
  • Since t1 + t2 + t3 = 1, two chromaticity values
    are sufficient to specify the chrominance of a
    colour.

11
YUV Representation
  • YUV separates brightness information (luminance
    Y) from the colour information (chrominance U and
    V).
  • U and V are proportional to the colour
    differences B - Y and R - Y, respectively, and
    are then scaled to the desired range.
  • Thus, YUV is related to the normalized RGB (in
    the range 0-1) by
  • Y = 0.3R + 0.6G + 0.1B
  • U = B - Y
  • V = R - Y
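The conversion above is a one-liner in code. This sketch uses the slide's approximate weights (0.3, 0.6, 0.1), not the exact broadcast coefficients; the function name is illustrative.

```python
def rgb_to_yuv(r, g, b):
    """Convert normalized RGB (0-1) to YUV using the slide's
    approximate weights: Y = 0.3R + 0.6G + 0.1B, U = B - Y, V = R - Y."""
    y = 0.3 * r + 0.6 * g + 0.1 * b
    return y, b - y, r - y
```

Reference white (1, 1, 1) maps to Y = 1 with zero chrominance, as expected.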

12
YUV Representation
  • In the YUV representation, chrominance values may
    be coded with fewer bits than the luminance value.
  • E.g. number of bits in the ratio 4:2:2 - four
    bits encode one luminance value and two bits
    encode each chrominance value.

13
YCbCr Representation
  • Similar to YUV:
  • Y = 0.3R + 0.6G + 0.1B
  • Cb = U/2 + 0.5
  • Cr = V/1.6 + 0.5
  • The components are scaled and zero-shifted so
    that the range remains within [0, 1].
  • YCbCr is used in JPEG.

14
CMYK Representation
  • Used in colour printing according to the
    complement colours of red, green, and blue.
  • Black ink is added for printing black without
    mixing the colours.
  • Each colour is decomposed into 4 components;
    other colours are printed as combinations of
    these at different intensities.
  • C = Cyan
  • M = Magenta
  • Y = Yellow
  • K = Black

15
Camera Calibration
  • How to relate the coordinates of pixels to the
    coordinates in the real world?
  • Divide into two stages
  • Relate image coordinates to a coordinate frame
    aligned to the camera
  • Relate the camera coordinate frame to the
    coordinates in the real world

16
Camera Calibration
  • Extrinsic parameters
  • Relate the camera coordinates to the coordinates
    in the real world
  • Orientation of the camera's optical axis relative
    to the world coordinate frame's axes.

17
Camera Calibration
  • Intrinsic parameters relate the image to camera
    coordinates
  • Without optical aberrations
  • With optical aberrations

18
Image to Camera Coordinates
  • Without distortion, the image and camera
    coordinates are linearly related: just offset the
    origin and scale the data
  • x = -(x_im - o_x) * s_x
  • y = -(y_im - o_y) * s_y

19
Image to Camera Coordinates
  • Warping: the image capturing process may distort
    the data.

[Figure: an original image and its warped counterpart]
20
Image to Camera Coordinates
  • Assuming that a first-order spherical correction
    is sufficient, apply the correction
  • x = (1 + k_x(x_d^2 + y_d^2)) * x_d
  • y = (1 + k_y(x_d^2 + y_d^2)) * y_d
  • before the linear shifting and scaling, to remove
    the warping.
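The first-order correction can be sketched as a small function. The name and argument order are illustrative; (x_d, y_d) are the distorted coordinates and k_x, k_y the correction coefficients.

```python
def undistort(xd, yd, kx, ky):
    """First-order spherical (radial) correction:
    x = (1 + kx*(xd^2 + yd^2)) * xd, and likewise for y."""
    r2 = xd * xd + yd * yd
    return (1 + kx * r2) * xd, (1 + ky * r2) * yd
```

Points at the optical centre are unchanged; the correction grows with distance from the centre.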

21
Image Representation Summary
  • Most colours could be matched by mixing different
    portions of a red, a green, and a blue primary
    source.
  • Human eyes can distinguish only 40 shades of
    brightness and they are more sensitive to
    brightness than colour difference.
  • Cameras capture images using Vidicon, CCD or CMOS
    sensors. Colour images are formed by red, green,
    and blue sensors.
  • The quality of captured images is degraded by
    refraction, distortion, scattering, imperfect
    detectors, and blooming effects.
  • Spherical corrections are applied before the
    linear shifting and scaling to remove the warping
    in camera calibration.

22
Image Representation Summary
  • Image quality can be measured objectively by
    PSNR.
  • Spatial, brightness, and temporal resolutions
    determine the best possible images.
  • Euclidean distance and city block distance can be
    used to specify the distance between two pixels
  • RGB representation is the basic colour
    representation.
  • YUV and YIQ separate the luminance component from
    the chrominance components.

23
Exercise
  • List the colours of the cyan, yellow, white and
    black in the RGB formats.
  • Convert these colours to YUV and YCbCr formats.
  • Why is the CMYK format used in printing while
    other colour formats are not?

24
Course Progress
  • Introduction
  • Data Representation
  • Number and text
  • Graphics and animations
  • Sound and Audio
  • Images Representation
  • Video Representation
  • Processing Techniques

25
Video Representation
  • Video Concept
  • Video Raster
  • Video frame rates
  • Image aspect ratio
  • Analog TV
  • Digital video BT.601
  • Computer Video Formats

26
Light and Colour Revisited
  • Light
  • Electromagnetic wave
  • Wavelength: 380 to 780 nanometers (nm)
  • Light energy is flux, measured in watts.
  • Radiant intensity is
  • the flux radiated into a unit solid angle in a
    given direction
  • measured in watts/solid-angle.

27
Light and Colour
  • Let C(X, t, λ) be the emitted or reflected light
    intensity of wavelength λ from objects at spatial
    location X = (Xc, Yc, Zc) and at time t.
  • Let the spectral absorption function of a video
    camera be denoted by a_c(λ).
  • The light intensity distribution in the 3D world
    that is visible to the camera is
  • ψ̄(X, t) = ∫ C(X, t, λ) a_c(λ) dλ

28
Light and Colour
  • The image function captured by the camera at any
    time t is the projection of the light
    distribution in the 3D scene onto a 2D image
    plane, x = (x_a, y_a).
  • Let P(·) represent the camera projection
    operator, so that x = P(X).
  • Let P⁻¹(·) denote the inverse projection
    operator, so that X = P⁻¹(x).
  • Then the projected image is related to the 3D
    image by
  • ψ(x, t) = ψ̄(P⁻¹(x), t), or ψ̄(X, t) = ψ(P(X), t)
29
Light and Colour
  • The function ψ(x, t) is known as the video
    signal. It describes the radiant intensity at the
    3D position X that is projected onto x in the
    image plane at time t.
  • If the absorption function is nonzero over a
    narrow band, a monochrome image is formed.
  • To perceive all visible colours, three sensors
    are needed, each with a frequency response
    similar to the colour matching function for a
    selected primary colour.

30
Video Concept
  • Analog video signal is continuously varying in
    space and time.
  • Cameras today cannot capture signals continuously
    in all three dimensions (2D in space and 1D in
    time)
  • Most TV cameras capture a video sequence by
    sampling it in temporal, vertical, and horizontal
    directions.

31
Video Concept
  • A video is represented as a list of images called
    frames.
  • Each image frame is separated from the previous
    frame by a time interval.

[Figure: a video as a sequence of frames 0-9 along the time axis]
32
Video Concept
  • Some video cameras acquire a frame by scanning
    consecutive lines with a certain line spacing.
  • This is called a raster scan; the resulting
    signal is a continuous 1D waveform.
  • Each frame is thus represented by a list of scan
    lines.

[Figure: raster scan of a frame, showing scan lines and retrace]
33
Progressive Scan
  • Progressive frame: the electronic or optic beam
    of an analog video camera continuously scans the
    image region from top to bottom and then back to
    the top.
  • The resulting signal consists of a series of
    frames separated by a regular frame interval, Δt.

34
Progressive Scan
  • Horizontal retrace: when the scan line reaches
    the right edge, it retraces back to the left.
  • Vertical retrace: when the scan line reaches the
    bottom, it retraces back to the top.
  • The bottom line is scanned about one frame
    interval later than the top line of the same
    frame.
  • Each scan line is actually slightly tilted.
  • For analysis, we often assume that all the lines
    are sampled at the same time, and each line is
    perfectly horizontal.

35
Interlaced Scan
  • Interlaced frame: each frame is scanned in two
    fields, and each field contains half the number
    of lines in a frame.
  • The time interval between two fields is half of
    the frame interval.
  • The line spacing in a field is twice that desired
    for a frame.
  • The scan lines in two successive fields are
    shifted by half of the line spacing in each field.

36
Interlaced Scan
  • Interlaced scan trades off the vertical
    resolution for an enhanced temporal resolution.
  • Two adjacent lines in a frame are separated in
    time by the field interval. This fact leads to
    the zig-zag artifacts in fast-moving objects with
    vertical edges.
  • In general, an interlaced scan can be performed
    with more than two fields.
  • In K:1 interlace, each frame is divided into
    K ≥ 2 fields.
  • Successive fields are separated by Δt/K.

37
Video Raster
  • A raster is described by two basic parameters:
  • Frame rate, f_s,t (frames/second)
  • Line number, f_s,y (lines/frame)
  • Line rate, f_l (lines/second), can be found as
  • f_l = f_s,t × f_s,y.
  • Temporal sampling interval or frame interval, Δt
    (seconds/frame), can be found as
  • Δt = 1 / f_s,t.
  • Vertical sampling interval or line spacing, Δy,
    is
  • Δy = picture-height / f_s,y.
  • Line interval, T_l, the time to scan each line,
    is found as
  • T_l = 1 / f_l = Δt / f_s,y.
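The derived raster quantities follow mechanically from the two basic parameters, as this sketch shows (function and variable names are illustrative).

```python
def raster_parameters(frame_rate, line_number, picture_height):
    """Derive line rate, frame interval, line spacing and line
    interval from the basic raster parameters."""
    fl = frame_rate * line_number       # line rate (lines/second)
    dt = 1.0 / frame_rate               # frame interval (s/frame)
    dy = picture_height / line_number   # line spacing (height/line)
    tl = 1.0 / fl                       # line interval (s/line)
    return fl, dt, dy, tl
```

For a 30 frames/second, 525-line raster this gives a line rate of 15,750 lines/second and a line interval of about 63.5 microseconds.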

38
Video Raster
  • Sync signals are held at a constant level above
    the level corresponding to black during the
    horizontal and vertical retrace periods, T_h and
    T_v. The display device starts the retrace
    process upon detecting these sync signals.
  • Excluding the horizontal retrace, T_h, the actual
    scanning time for a line, T_l', is
  • T_l' = T_l - T_h.
  • The number of active lines, f'_s,y, which is the
    number of lines actually scanned in a frame time,
    can be found as
  • f'_s,y = (Δt - K·T_v) / T_l = f_s,y - K·T_v / T_l.
  • Normally, T_v is chosen to be an integer multiple
    of T_l.

39
Video Raster Exercise
  • Given that NTSC TV has a frame rate of 30
    frames/second and a line number of 525 lines. The
    horizontal retrace takes 10 μs and the vertical
    retrace takes 1333.5 μs. The picture size of a TV
    is 0.32 m x 0.24 m. Find the following parameters
    for the NTSC TV:
  • Line rate, f_l
  • Frame interval, Δt
  • Line spacing, Δy
  • Line interval, T_l
  • Actual scanning time for a line, T_l'
  • Number of active lines, f'_s,y
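One way to work this exercise numerically is sketched below. It assumes NTSC's 2:1 interlace (K = 2) and that the given vertical retrace time is per field; the variable names are illustrative.

```python
frame_rate = 30          # frames/second
lines = 525              # lines/frame
th = 10e-6               # horizontal retrace (s)
tv = 1333.5e-6           # vertical retrace per field (s)
height = 0.24            # picture height (m)
K = 2                    # fields/frame (2:1 interlace, assumed)

fl = frame_rate * lines           # line rate: 15750 lines/s
dt = 1.0 / frame_rate             # frame interval: ~33.3 ms
dy = height / lines               # line spacing (m/line)
tl = 1.0 / fl                     # line interval: ~63.49 us
tl_active = tl - th               # active scan time: ~53.49 us
active_lines = lines - K * tv / tl   # ~483 lines/frame
```

The result of roughly 483 active lines agrees with the figure quoted later for NTSC (525 − 42 = 483).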

40
Video Frame Rates
  • Frame rates
  • Depend on the viewing distance and frequency
    content
  • Continuous motion: at least 15 frames/second
  • Full motion video: 30 frames/second
  • Standard frame rates are based on the visual
    temporal and spatial thresholds
  • Movies: 24 frames/second
  • Standard definition TV: 2:1 interlaced, at 25
    to 30 frames/second
  • Computer display: 72 frames/second

41
Image Aspect Ratio
  • Image aspect ratio = width / height
  • In standard definition TV, image aspect ratio
    = 4/3 ≈ 1.33
  • Q. What is the image aspect ratio of an HDTV
    screen with W = 1920 and H = 1080?
42
Angular Frequency
  • The viewing distance, d, determines the viewing
    angle, θ, subtended by the picture height, h.
  • If h/2 << d, then θ ≈ h/d (radians).

[Figure: viewing angle θ subtended by picture height h at distance d]
43
Analog TV Systems
  • TV systems
  • Spatial and temporal resolution
  • Colour coordinate
  • Signal Bandwidth
  • Multiplexing of Luminance, Chrominance, and Audio

44
Analog TV Systems
  • Three TV systems: PAL, NTSC, SECAM
  • PAL
  • Most of Western Europe and Asia, including China,
    and the Middle East
  • NTSC
  • North America, Japan, and Taiwan
  • SECAM
  • Russia, Eastern Europe, France, and the Middle
    East
  • All three colour TV systems use 2:1 interlaced
    scan, and the IAR is 4:3.

45
TV Frame Rates
  • TV frame rates
  • PAL TV 25 frames/second
  • NTSC TV 29.97 frames/second
  • HDTV 59.94 frames/second
  • Chosen so as not to interfere with the standard
    power systems in those countries.
  • Lines per frame
  • NTSC 525 lines/frame
  • PAL and SECAM 625 lines/frame

46
Analog TV Systems
  • NTSC line interval, T_l = 1/(30 × 525) = 63.5 μs.
    T_h = 10 μs. Actual time for scanning each line,
    T_l' = 53.5 μs. Vertical retrace, T_v = 1333 μs,
    equivalent to the time for 21 scan lines per
    field. Number of active lines = 525 - 42 =
    483/frame.

47
Analog TV Systems
  • All three TV systems use the RGB primary.
  • PAL uses YUV, NTSC uses YIQ, and SECAM uses
    YDbDr. YUV relates to the normalized
    gamma-corrected RGB by
  • Y = 0.299R + 0.587G + 0.114B
  • U = -0.147R - 0.289G + 0.436B
  • V = 0.615R - 0.515G - 0.100B
  • where RGB are normalized so that (1,1,1)
    corresponds to the reference white.

48
Analog TV Systems
  • In reverse, the normalized gamma-corrected values
    of RGB relate to YUV by
  • R = Y + 1.140V
  • G = Y - 0.395U - 0.581V
  • B = Y + 2.032U

49
Analog TV Systems
  • SECAM uses the YDbDr coordinate.
  • The Db and Dr values are related to the U and V
    by
  • Db 3.059U, and Dr -2.169V.

50
Analog TV Systems
  • NTSC uses YIQ.
  • The I and Q components are rotated versions (by
    33°) of the U and V components.
  • I component corresponds to orange-to-cyan
    colours.
  • Q component corresponds to green-to-purple
    colours.
  • Human eye is less sensitive to changes in the
    green-to-purple range than the yellow-to-cyan
    range.
  • Q component can be transmitted with less
    bandwidth than the I component.

51
Analog TV Systems
  • YIQ relates to the normalized gamma-corrected RGB
    by
  • Y = 0.299R + 0.587G + 0.114B
  • I = 0.596R - 0.274G - 0.322B
  • Q = 0.211R - 0.523G + 0.312B
  • where RGB are normalized so that (1,1,1)
    corresponds to the reference white.

52
Analog TV Systems
  • In reverse, the normalized gamma-corrected values
    of RGB relate to YIQ by
  • R = Y + 0.956I + 0.621Q
  • G = Y - 0.272I - 0.647Q
  • B = Y - 1.106I + 1.703Q

53
Signal Bandwidth
  • The bandwidth of a video raster can be estimated
    from its line rate.
  • The maximum vertical frequency results when white
    and black lines alternate in a raster frame; it
    is equal to f'_s,y / 2 (cycles/picture-height).
  • The maximum frequency that can be rendered
    properly by a system is usually lower than this
    theoretical limit. The ratio is known as the
    attenuation factor or Kell factor, K_e.
  • The Kell factor, K_e, depends on the camera and
    display aperture functions. Typical TV cameras
    have a Kell factor of K_e ≈ 0.7.
  • The maximum vertical frequency is
  • f_v,max = K_e × f'_s,y / 2 (cycles/picture-height).

54
Signal Bandwidth
  • Assume the maximum horizontal frequency is
    identical to the vertical frequency over the same
    spatial distance.
  • Since each line is scanned in T_l' seconds, the
    maximum frequency is
  • f_max = K_e × f'_s,y × IAR / (2 T_l') (Hz).

55
Multiplexing
  • The three colour components and the audio
    component are multiplexed into one composite
    signal before broadcasting.
  • The two chrominance components are first combined
    into a single colour signal.
  • The colour signal is then combined with the
    luminance component
  • The video signal is then combined with the audio
    signal to form the final composite signal.
  • The overall NTSC composite signal occupies 6.0
    MHz.
  • The picture carrier, fp, depends on the
    broadcasting channel.

56
Multiplexing in NTSC
[Figure: overall spectral composition of the NTSC composite signal.
The channel is 6.0 MHz wide. Relative to the picture carrier f_p
(1.25 MHz above the channel edge), the luminance Y occupies 4.2 MHz,
the colour subcarrier f_c carrying I and Q sits at 3.58 MHz, and the
audio subcarrier f_a sits at 4.5 MHz.]
57
Analog TV Systems
58
Analog TV Systems
59
Analog Video Recording
60
Digital Video
  • A digital camera samples the imaged scene as
    discrete frames
  • Each frame consists of sample values that are
    discrete both horizontally and vertically.
  • Let m and n be the integer indices in the
    horizontal and vertical directions, and let k be
    the frame number.
  • The actual spatial and temporal locations are
  • x = mΔx, y = nΔy, and t = kΔt.
  • We may use ψ(m, n, k) to describe a digital
    video.

61
Digital Video
  • Let Nb be the number of bits used to represent a
    pixel's colour value.
  • Nb = 8 for monochrome video and Nb = 24 for
    colour video.
  • The data rate, R, is determined by
  • R = f_s,t × f_s,x × f_s,y × Nb (bits/second)
  • where f_s,t, f_s,x, f_s,y are the frame rate,
    samples per line, and line number per frame,
    respectively.
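The data-rate relation is a straight product, as this sketch shows (the function name is illustrative).

```python
def data_rate_bps(frame_rate, samples_per_line, lines_per_frame,
                  bits_per_pixel):
    """R = f_s,t * f_s,x * f_s,y * Nb, in bits/second."""
    return frame_rate * samples_per_line * lines_per_frame * bits_per_pixel
```

For example, 30 frames/second of 720 x 480 active pixels at 24 bits/pixel gives about 249 Mbit/s of raw video.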

62
Digital Video
  • Each pixel is rendered as a rectangular region
    with a constant colour.
  • The ratio of the width to the height of this
    region is known as the Pixel Aspect Ratio (PAR).
  • PAR relates to the Image Aspect Ratio by
  • PAR = IAR × f'_s,y / f'_s,x
  • The display device should conform to this PAR to
    avoid distortion.

63
Digital Video
  • ITU-R developed BT.601 to standardize the digital
    TV format.
  • To convert a raster scan to a digital video
    signal, one needs only to sample the 1D waveform.
  • If a total of f_s,x samples are taken per line,
    the equivalent sampling rate is
  • f_s = f_s,x × f_l (samples/second)

64
Digital Video
  • The sampling rate in the BT.601 standard is
    chosen to satisfy two constraints:
  • The horizontal sampling resolution should match
    the vertical sampling resolution as closely as
    possible. That is, Δx = Δy, so
  • f_s,x = IAR × f_s,y.
  • The same sampling rate should be used for NTSC
    and PAL/SECAM, and it should be a multiple of the
    respective line rates.
  • Using f_s = f_s,x × f_l and f_l = f_s,t × f_s,y,
    we have
  • f_s = IAR × f²_s,y × f_s,t.

65
Digital Video
  • f_s ≈ 11 and 13 MHz for NTSC and PAL/SECAM
    respectively.
  • A number that is close to both and satisfies the
    second criterion is then chosen. We have
  • f_s = 858 f_l (NTSC) = 864 f_l (PAL/SECAM)
    = 13.5 MHz.
  • This gives the 525/60 and 625/50 signals.

66
Digital Video
              NTSC                   PAL/SECAM
Total:        858 pels x 525 lines   864 pels x 625 lines
Active area:  720 pels x 480 lines   720 pels x 576 lines
Blanking:     122 + 16 pels/line     132 + 12 pels/line
67
Digital Video
  • Note that both formats have the same number (720)
    of active pixels per line.
  • Note that the pixel width-to-height ratio is not
    1:
  • PAR = Δx / Δy = IAR × f'_s,y / f'_s,x
  • PAR_NTSC = (4/3) × 480/720 = 8/9
  • PAR_PAL = (4/3) × 576/720 = 16/15

68
Digital Video - BT.601
  • BT.601 also uses the YCbCr representation.
  • The YCbCr values (0-255) are related to the RGB
    values (0-255) by
  • Y = 0.299R + 0.587G + 0.114B
  • Cb = -0.169R - 0.331G + 0.500B + 128
  • Cr = 0.500R - 0.419G - 0.081B + 128
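A conversion along these lines can be sketched as below. The coefficients are the commonly used full-range (0-255) BT.601-style ones; the slide's exact matrix was not captured in this transcript, so treat the numbers as an assumption, and the function name as illustrative.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601-style RGB (0-255) to YCbCr (0-255).
    Coefficients are the commonly quoted ones (an assumption here)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128
    return y, cb, cr
```

White (255, 255, 255) maps to Y ≈ 255 with both chrominance values at the mid-point 128.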

69
Digital Video - BT.601
  • The inverse relation is
  • R = Y + 1.402(Cr - 128)
  • G = Y - 0.344(Cb - 128) - 0.714(Cr - 128)
  • B = Y + 1.772(Cb - 128)
70
Spatial Sampling Rate
  • Human eyes are less sensitive to colour
    difference than to brightness.
  • Chrominance values are therefore sampled at a
    lower frequency than the luminance value.
  • 4:2:2 format: each chrominance component, Cb and
    Cr, is sampled at half the sampling rate of the
    luminance component.
  • 4:1:1 format: each chrominance component is
    sampled at 1/4 of the sampling rate of the
    luminance component.
  • 4:2:0 format: each chrominance component is
    sampled at 1/2 of the sampling rate both
    horizontally and vertically.
  • 4:4:4 format: the chrominance values are sampled
    at the same rate as the luminance values.

71
Chrominance Subsampling
[Figure: Y, Cb and Cr sampling grids for the 4:1:1 and 4:2:2 formats]
  • 4:1:1 format: each 2x2 block of Y pixels
    corresponds to 1 Cb and 1 Cr pixel (4:1
    horizontal subsampling)
  • 4:2:2 format: each 2x2 block of Y pixels
    corresponds to 2 Cb and 2 Cr pixels (2:1
    horizontal subsampling)
72
Chrominance Subsampling
[Figure: Y, Cb and Cr sampling grids for the 4:4:4 and 4:2:0 formats]
  • 4:4:4 format: each 2x2 block of Y pixels
    corresponds to 4 Cb and 4 Cr pixels (no
    subsampling)
  • 4:2:0 format: each 2x2 block of Y pixels
    corresponds to 1 Cb and 1 Cr pixel (2:1
    subsampling both horizontally and vertically)
73
Video Quality Measure
  • Mean Square Error: MSE = (1/N) Σ (f - f')²
  • PSNR = 10 log10(255² / MSE) dB
  • Mean Absolute Difference: MAD = (1/N) Σ |f - f'|
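The three quality measures can be sketched directly from their definitions (function names are illustrative; `peak` is the maximum pixel value, 255 for 8-bit images).

```python
import math

def mse(orig, recon):
    """Mean Square Error between two equal-length sample sequences."""
    return sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)

def mad(orig, recon):
    """Mean Absolute Difference between two sample sequences."""
    return sum(abs(a - b) for a, b in zip(orig, recon)) / len(orig)

def psnr(orig, recon, peak=255):
    """Peak Signal-to-Noise Ratio in dB (infinite for identical inputs)."""
    e = mse(orig, recon)
    return float('inf') if e == 0 else 10 * math.log10(peak ** 2 / e)
```

Note that PSNR grows as the MSE shrinks, which is why higher PSNR means better fidelity.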

74
Computer Video Formats
  • CGA - Colour Graphics Adapter
  • Resolution: 320 x 200 pixels with 4 colours (2
    bits)
  • (320 x 200) pixels x 2 bits/pixel = 15.625
    KB/image
  • EGA - Enhanced Graphics Adapter
  • Resolution: 640 x 350 pixels with 16 colours (4
    bits)
  • (640 x 350) pixels x 4 bits/pixel = 109.375
    KB/image

75
Computer Video Formats
  • VGA - Video Graphics Array
  • Resolution: 640 x 480 pixels with 256 colours (8
    bits)
  • (640 x 480) pixels x 8 bits/pixel = 300 KB/image
  • XGA - Extended Graphics Array
  • Resolution: 640 x 480 pixels with 65,536 colours
    (16 bits) or 1024 x 768 pixels with 256 colours
  • (640 x 480) pixels x 16 bits/pixel = 600 KB/image
  • (1024 x 768) pixels x 8 bits/pixel = 768 KB/image
  • SVGA - Super VGA
  • Resolution: 800 x 600 pixels with 16,777,216
    colours (24 bits) or 1024 x 768 pixels with
    65,536 colours (16 bits)
  • (800 x 600) pixels x 24 bits/pixel = 1.37
    MB/image
  • (1024 x 768) pixels x 16 bits/pixel = 1.5
    MB/image

76
Sampling
  • Digitization of an analog waveform:
  • Take samples at different temporal locations
  • Frequency of taking samples → sampling rate
  • Amplitude of the taken samples → quantization
  • Objective: to retain the information of the
    original analog waveform

77
Sampling Rate
  • The sampling rate is the number of samples taken
    per unit time.
  • It determines whether there are enough samples to
    reproduce the waveform.

[Figure: with only 1 sample per period, the reproduced waveform is a
straight line]
78
Nyquist Sampling Theorem
  • For lossless digitization, the sampling rate must
    be at least twice the maximum frequency.
  • That is, at least 2 samples must be obtained
    within each cycle. Otherwise, either the maximum
    or the minimum amplitude in a period is lost,
    leading to loss of information in the
    digitization process.

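The theorem's boundary case can be demonstrated numerically: sampling a sine at exactly two samples per cycle can land on the zero crossings, so the waveform is lost. The function and variable names below are illustrative.

```python
import math

def sample_sine(freq_hz, rate_hz, n):
    """Take n samples of sin(2*pi*freq*t) at the given sampling rate."""
    return [math.sin(2 * math.pi * freq_hz * k / rate_hz)
            for k in range(n)]

# At exactly 2 samples/cycle, every sample hits a zero crossing and
# the sine disappears; at 8 samples/cycle the shape is preserved.
critical = sample_sine(1, 2, 8)
adequate = sample_sine(1, 8, 8)
```

This is why, in practice, the rate must be strictly greater than twice the maximum frequency.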
79
Sampling and Compression
  • When the sampling rate is reduced,
  • Fewer samples are accessed per unit time
  • Lower temporal resolution
  • Object size is reduced
  • The information of the higher frequency waveforms
    is also lost

80
Quantization
  • Consider using a decimal number to describe a
    value. Each digit has ten different values.
  • One digit specifies 10 different values (0-9).
  • Two digits specify 100 different values (0-99).
  • Three digits specify 1000 different values
    (0-999).
  • Each additional digit increases the range by a
    factor of 10.
  • Now consider using a binary number to describe a
    sample value.
  • Each additional digit (bit) doubles the range of
    values.

81
Quantization
  • Quantization of a sample: the mapping of values
    to integral values in describing a sample value.
    Mathematically,
  • No. of bits, Nb = log2(no. of integral values)

82
Quantization
  • Analog: a continuous slope
  • Digital: steps in a staircase, with a fixed step
    size
83
More Bits, Better Resolution
  • More values (v) need more bits (b): v = 2^b
  • 8 bits → 256 values
  • 16 bits → 65,536 values
  • 24 bits → 16,777,216 values
  • The number of bits affects the step size in the
    reproduced waveform:
  • more bits → smaller step size → better sample
    quality

84
No. of Bits Affects Quality
  • Consider a sine wave; the original values along
    the curve are 0.707, 1.0, 0.707, 0, -0.707, -1.0,
    -0.707, 0, ...
  • The values in binary representation are
    0.10110101, 1, 0.10110101, 0, -0.10110101, -1,
    -0.10110101, 0, ...
85
No. of Bits Affects Quality
  • The values in 7-bit quantization are 0.101101,
    1.000, 0.101101, 0.000, -0.101101, -1.000,
    -0.101101, 0.000, ...
  • The quantized values in decimal become 0.703125,
    1, 0.703125, 0, -0.703125, -1, -0.703125, 0, ...
86
No. of Bits Affects Quality
  • The values in 4-bit quantization are 0.101,
    1.000, 0.101, 0.000, -0.101, -1.000, -0.101,
    0.000, ...
  • The quantized values in decimal become 0.625, 1,
    0.625, 0, -0.625, -1, -0.625, 0, ...
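The slides' quantized values can be reproduced by truncating the binary expansion to a fixed number of fractional bits (one sign bit plus the fractional bits gives the quoted word length). The function name is illustrative.

```python
import math

def quantize(value, frac_bits):
    """Truncate a value in [-1, 1] to the given number of fractional
    binary digits, matching the slides' truncated expansions."""
    levels = 2 ** frac_bits
    return math.trunc(value * levels) / levels
```

With 6 fractional bits (7-bit words), 0.707 becomes 0.703125; with 3 fractional bits (4-bit words) it becomes 0.625, matching the examples above.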
87
Quantization and Compression
  • When the no. of integral values to represent a
    sample is reduced,
  • The number of bits per sample is reduced
  • Lower sample resolution
  • Object size is reduced
  • The quality of each sample value is also reduced

88
Summary of Sampling and Quantization
  • Digitization of an analog waveform involves
    sampling and quantization.
  • The sampling rate must be at least twice the
    highest frequency to avoid loss of information.
  • The number of bits in quantization affects the
    quality of each sample value.
  • Object size is reduced, with loss of information,
    by lowering the sampling rate and the number of
    sample values.

89
Data Representations Summary
  • Computer graphics are represented using the
    coordinates on the screen.
  • Computer animations are done by updating changes
    to the frame buffers and these changes are drawn
    on the display
  • Images are represented as 2D pixels. Each pixel
    can be represented using RGB, YUV, YCbCr, or
    CMYK.
  • A/D converters digitize an analog wave by taking
    samples of amplitudes at fixed time intervals.
  • A video is represented as an array of frames. 24
    to 30 frames should be displayed per second to
    show full motion.

90
Exercise
  • A sine curve of maximum amplitude 16 is sampled 6
    times per cycle and the sample values are
    quantized using 8 bits. Alternatively, the sine
    curve may be sampled 8 times per cycle and the
    sample values quantized using 6 bits.
  • Find the Mean Absolute Difference between each
    quantized curve and the original sine curve.
  • Find the Mean Square Errors of the two quantized
    curves.
  • Compare the PSNRs of the two quantized curves.
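A sketch of how this exercise could be set up numerically is given below. The exercise does not fix the quantizer, so a uniform rounding quantizer over [-16, 16] is assumed here; all names are illustrative.

```python
import math

def sine_samples(samples_per_cycle, amplitude=16.0):
    """One cycle of a sine of the given amplitude."""
    return [amplitude * math.sin(2 * math.pi * k / samples_per_cycle)
            for k in range(samples_per_cycle)]

def quantize(value, bits, peak=16.0):
    """Uniform rounding quantizer over [-peak, peak] with 2**bits
    levels (an assumed quantizer; the exercise leaves this open)."""
    step = 2 * peak / (2 ** bits)
    return round(value / step) * step

def mad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Option A: 6 samples/cycle, 8-bit; Option B: 8 samples/cycle, 6-bit.
orig_a = sine_samples(6)
quant_a = [quantize(v, 8) for v in orig_a]
orig_b = sine_samples(8)
quant_b = [quantize(v, 6) for v in orig_b]
```

Per-sample error is bounded by half a quantization step (0.0625 for 8 bits, 0.25 for 6 bits over this range), so the 8-bit curve has the smaller MAD and MSE at its sample points.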