BINF 5040 MEDICAL IMAGING AND NETWORKING: COMMON IMAGE PROCESSING OPERATIONS
1
BINF 5040 MEDICAL IMAGING AND NETWORKING
COMMON IMAGE PROCESSING OPERATIONS
2
  • 1. Image Complement
  • Each image pixel is logically complemented. All
    black pixels map to white, white pixels map to
    black, and all the intermediate grays are
    correspondingly mapped to their complemented
    values.
  • Application: This operation is useful in making
    the subtle brightness details in the brighter
    areas of an image more visible.
  • Implementation:
  • O(x,y) = 255 - I(x,y)
  • Similar mapping functions can be used to
    complement only a portion of the gray scale.

(Diagram: the complement mapping function; input brightness 0 to 255 on the x-axis, output brightness 255 down to 0 on the y-axis.)
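The complement operation amounts to a single array expression in NumPy. A minimal sketch (the function name and the tiny sample image are mine, not from the slides):

```python
import numpy as np

def complement(img):
    # Logically complement an 8-bit image: black maps to white,
    # white maps to black, and grays map to their opposites.
    return 255 - img

# Tiny sample image for illustration
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
out = complement(img)
```

Because the image is stored as unsigned 8-bit values, `255 - img` always stays in range, so no clipping is needed.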
3
  • 2. Histogram Sliding and Stretching
  • A constant slide value is added to each pixel,
    and the result is multiplied by a constant
    stretch value. The resulting image histogram
    appears shifted left or right, and expanded or
    reduced in size.
  • Applications: Histogram sliding operations
    brighten or darken the resulting image, while
    stretching operations increase or decrease its
    contrast. This operation is used for general
    enhancement of low contrast images.
  • Implementation: O(x,y) = (I(x,y) + slide value) ×
    stretch value

(Diagram: mapping function that slides by -64 and stretches by 4, with input and output histograms on 0-255 axes.)
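The slide-then-stretch mapping can be sketched as follows. Note the cast to a wider integer type before the arithmetic and the clip back to 0-255 afterwards; the slide of -64 and stretch of 4 mirror the slide's example, the rest is my own illustration:

```python
import numpy as np

def slide_and_stretch(img, slide, stretch):
    # Work in a wider integer type to avoid uint8 wrap-around,
    # then clip the result back into the 0-255 range.
    out = (img.astype(np.int32) + slide) * stretch
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.array([[64, 80, 96]], dtype=np.uint8)
out = slide_and_stretch(img, slide=-64, stretch=4)
```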
4
  • 3. Binary Contrast Enhancement
  • Each image pixel is determined to be above or
    below a predetermined brightness threshold value.
    If the pixel is darker than the threshold, it is
    set to 0; otherwise it is set to 255.
  • Application: Binary contrast enhancement
    operations are used to create a very high
    contrast image from a low contrast image.
  • Implementation: Define the mapping function and
    perform the operation on all the image pixels.
  • O(x,y) = 0, if I(x,y) < threshold value
  • O(x,y) = 255, if I(x,y) ≥ threshold value
  • A variable threshold value can be used instead
    of a fixed value. This technique is called
    adaptive thresholding.
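A minimal sketch of the fixed-threshold mapping (pixels exactly at the threshold are mapped to 255 here, one of the two reasonable conventions):

```python
import numpy as np

def binarize(img, threshold):
    # Pixels below the threshold become 0; all others become 255.
    return np.where(img < threshold, 0, 255).astype(np.uint8)

img = np.array([[10, 106, 200]], dtype=np.uint8)
out = binarize(img, 106)
```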

5
a - original picture with low contrast, b-
histogram of original image, c- binary contrast
enhancement mapping function with brightness
threshold at 106, d- binary contrast enhanced
image showing much higher contrast.
6
  • 4. Brightness Slicing
  • Each image pixel is determined to be between or
    outside a range of brightness defined by low and
    high brightness threshold values. If the pixel
    brightness is outside the thresholds, the
    resulting pixel brightness is set to 0. If the
    brightness is between the thresholds, the
    resulting brightness is set to 255.
  • Applications: This operation is used to generate
    a high contrast image in the area of interest.
  • Implementation:
  • O(x,y) = 255, if I(x,y) ≥ low threshold value
    and I(x,y) < high threshold value
  • O(x,y) = 0, otherwise
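The slicing rule translates directly into a boolean mask (names and sample values are mine):

```python
import numpy as np

def brightness_slice(img, low, high):
    # Pixels inside [low, high) become white; everything else black.
    inside = (img >= low) & (img < high)
    return np.where(inside, 255, 0).astype(np.uint8)

img = np.array([[50, 120, 200]], dtype=np.uint8)
out = brightness_slice(img, 100, 150)
```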

7
a- original bank check, b- histogram of the
original image, c- brightness slicing operation,
d- brightness sliced image showing the signature
as white and everything else as black
8
  • 5. Image Subtraction
  • One image is subtracted from another, pixel by
    pixel. Where pixels have the same value in each
    image, the resulting output pixel value is 0.
    Where two pixels differ, the resulting pixel is
    the difference between them.
  • Application: Image subtraction can be used to
    remove common background information from
    images of identical scenes.
  • It has a good application in angiography, where a
    baseline image is subtracted from one in which
    the blood vessels are enhanced with an
    x-ray-opaque liquid.
  • Motion detection also uses image subtraction.
  • Implementation:
  • O(x,y) = I1(x,y) - I2(x,y)
  • Note: the result must not be allowed to go
    negative; differences below 0 are typically
    clipped to 0 (or the absolute difference is used).
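A sketch of clipped subtraction; the widening cast avoids uint8 wrap-around and the clip enforces the no-negative-values note:

```python
import numpy as np

def subtract(i1, i2):
    # Subtract in a signed type, then clip at 0 so the result
    # never goes negative.
    diff = i1.astype(np.int32) - i2.astype(np.int32)
    return np.clip(diff, 0, 255).astype(np.uint8)

a = np.array([[100, 50]], dtype=np.uint8)
b = np.array([[40, 60]], dtype=np.uint8)
out = subtract(a, b)
```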

9
(No Transcript)
10
  • 6. Image Division
  • One image is divided by another, pixel by pixel.
    If a pixel value in the numerator image is
    greater than the corresponding pixel in the
    denominator image, the resulting pixel is large
    (bright). Where the opposite is true, the
    resulting pixel is small (dark).
  • Applications: This operation is performed on
    images of identical scenes. In the most common
    use, spectral ratioing, it highlights components
    of an image that appear different when imaged
    through different spectral filters.
  • For example, to detect a red object in an image,
    a red component image can be divided by a green
    or blue component image. Where pixels have large
    red values and small green or blue values, the
    division produces a large result, so the red
    object stands out; elsewhere the result is small.
  • Implementation: O(x,y) = I1(x,y)/I2(x,y)
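A sketch of the ratio operation. Real pixel data makes division by zero possible, so zero denominators are treated as 1 here (an assumption), and the ratio is scaled by an arbitrary factor of 64 to land in the displayable range:

```python
import numpy as np

def ratio(i1, i2):
    # Treat zero denominators as 1 to avoid division by zero
    # (an assumed convention, not from the slides).
    denom = np.where(i2 == 0, 1, i2).astype(np.float64)
    r = i1.astype(np.float64) / denom
    # Scale ratios into the displayable 0-255 range (one possible choice).
    return np.clip(r * 64, 0, 255).astype(np.uint8)

red = np.array([[200, 20]], dtype=np.uint8)
green = np.array([[50, 100]], dtype=np.uint8)
out = ratio(red, green)   # large where red dominates, small otherwise
```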

11
Red filter
Green filter
After image division
12
  • 7. Image Addition
  • One image is added to a second image, pixel by
    pixel. When two images of entirely different
    subjects are added, a blend of the two appears.
    If two identical images of the same object are
    summed, an identical image of twice the
    brightness results.
  • Applications: Images of the same scene can be
    averaged to reduce random noise. Stationary
    objects appear with doubled brightness, while
    randomly varying points show a smaller increase.
  • Implementation: O(x,y) = I1(x,y) + I2(x,y)
  • One point to note: when brightness values are
    summed, the result can exceed the scale. Some
    form of averaging (dividing by the number of
    images) is needed to get a meaningful result.
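Frame averaging, per the note above, can be sketched as:

```python
import numpy as np

def average(images):
    # Sum the frames in a wider type, then divide by the frame
    # count so the result stays within the 0-255 scale.
    acc = np.zeros(images[0].shape, dtype=np.float64)
    for im in images:
        acc += im
    return (acc / len(images)).astype(np.uint8)

frames = [np.array([[100, 104]], dtype=np.uint8),
          np.array([[102, 96]], dtype=np.uint8)]
out = average(frames)
```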

13
  • 8. Low Pass Filter
  • Low pass filtering smooths an image by
    attenuating high spatial frequency details. The
    convolution mask's weighting coefficients are
    selected to vary the cutoff point where high
    frequencies become attenuated. The resulting
    low-pass filtered image can be brightness-scaled
    down and summed with the original image to create
    milder low pass effects.
  • Applications: A low pass filter can be used to
    attenuate image noise that is primarily composed
    of high frequency components, such as impulse
    spikes. Three possible masks exist (the sum of
    the mask elements must be 1).
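A sketch of low-pass filtering with one such mask, the 3x3 box average (its coefficients sum to 1; the slides' three masks are not reproduced here). The convolution loop is naive and leaves border pixels unfiltered for brevity:

```python
import numpy as np

def convolve3x3(img, mask):
    # Naive 3x3 convolution; edge pixels are left unfiltered.
    h, w = img.shape
    out = img.astype(np.float64).copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            region = img[y-1:y+2, x-1:x+2].astype(np.float64)
            out[y, x] = np.sum(region * mask)
    return np.clip(out, 0, 255).astype(np.uint8)

# One common low-pass mask: the 3x3 box average.
box = np.full((3, 3), 1.0 / 9.0)
img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 90                 # a single-pixel impulse "spike"
smooth = convolve3x3(img, box)
```

The 90-valued spike is spread over its neighbourhood and attenuated to 10, illustrating how high-frequency impulse noise is suppressed.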

14
a- original image, b- low-passed image with mask 1
15
  • 9. Unsharp Masking Enhancement.
  • The unsharp masking operation sharpens an image
    by subtracting a brightness-scaled,
    low-pass-filtered image from its original. The
    resulting image appears with sharpened high
    frequency details. Low frequency details are
    untouched.
  • Applications: The unsharp masking enhancement
    produces subtle and visually pleasing results.
    It is used primarily to enhance the subjective
    spatial appearance of an image's high frequency
    details. Images suffering from poor spatial
    definition are best suited for unsharp masking
    enhancement.
  • Implementation: first perform a low-pass
    filtering operation on the original image, then
    brightness-scale the result down to a desired
    level. This image is subtracted from the original
    image. The resulting image contains sharpened
    edge details.
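The steps above reduce to a single subtraction once a low-pass-filtered image is available. A minimal sketch (the sample arrays and the 0.5 scale factor are my own illustration):

```python
import numpy as np

def unsharp_mask(img, lowpass, scale=0.5):
    # Subtract a brightness-scaled low-pass image from the original,
    # clipping the result back into the displayable range.
    out = img.astype(np.float64) - scale * lowpass.astype(np.float64)
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.array([[100, 100, 200, 200]], dtype=np.uint8)
# Assume this low-pass image was produced by a prior smoothing step.
lowpass = np.array([[100, 133, 166, 200]], dtype=np.uint8)
out = unsharp_mask(img, lowpass)
```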

16
  • 10. High-Pass Filter
  • This filter sharpens an image. The convolution
    mask coefficients are selected to vary the
    cut-off point where high frequencies become
    accentuated. The resulting high-pass filtered
    image can be brightness-scaled down and summed
    with the original image to create milder
    high-pass filter effects.
  • Applications: This filter is used to highlight
    high-frequency details within an image, such as
    lines, points and edges. Images that appear
    spatially dull or poorly defined can be sharpened
    with this technique.

17
a- original building image, b- High-passed image
using mask 1
18
  • 11. Sobel Edge Enhancement
  • This filter extracts all the edges in an image,
    regardless of direction. It is implemented as the
    sum of two directional edge enhancement
    operations. The resulting image appears as an
    omni-directional outline of the objects in the
    original image. Constant brightness regions
    become black, while changing brightness regions
    become highlighted.
  • Applications: This operation is used to produce
    the edges of the objects in an image. The edges
    are any sharp brightness transitions, rising from
    black to white or falling from white to black.
    The Sobel operation is more immune to noise than
    the Laplacian operation, and provides stronger
    edge discrimination.
  • Implementation
  • Define the spatial convolution mask number 1

19
  • Pixel group process
  • Define spatial convolution mask number 2
  • Pixel group process
  • Dual-image pixel point process - addition.
  • The mask coefficients add to 0. The two masks are
    applied to copies of the original image to
    highlight horizontal and vertical edges
    independently.

20
A- original binary image of washer, nut and bolt,
b- Sobel edge enhanced image
21
  • 12. Shift and Difference Edge Enhancement
  • This operation extracts the vertical, horizontal
    or diagonal edges in an image. Each pixel is
    subtracted from its adjacent neighboring pixel in
    the chosen direction. Constant brightness regions
    become black and changing regions are
    highlighted.
  • This operation is directional, which means that
    the black to white transition is highlighted only
    in a single direction
  • Implementation: You can implement this operation
    by geometrically translating the original image
    by one pixel and subtracting the result from the
    original image.

22
  • 13. Prewitt Gradient Edge Enhancement
  • The Prewitt gradient edge enhancement operation
    extracts the north, northeast, east, southeast,
    south, southwest, west, or northwest edges in an
    image. The resulting image appears as a
    directional outline of the objects in the
    original image. Constant brightness regions
    become black and changing brightness regions are
    highlighted.
  • This operation is more immune to noise than the
    shift and difference operation and provides
    stronger directional edge discrimination.
  • The Prewitt gradient edge enhancement operation
    can produce results that are less than 0 or
    greater than 255. The resulting values must be
    clipped to the extreme values 0 and 255. The
    mask coefficients add up to zero.

23
  • Implementation
  • Define spatial convolution mask
  • Perform pixel group process

24
  • 14. Laplacian edge Enhancement
  • The Laplacian edge enhancement operation
    extracts all of the edges in an image, regardless
    of direction. The resulting image appears as an
    omni-directional outline of the objects in the
    original image. Constant brightness regions
    become black, while changing brightness regions
    become highlighted.
  • Applications: This operation is used to produce
    the edges of the objects in an image. This
    operation is omni-directional.
  • Implementation: similar to the other operations.
    There are three common masks; any one can be used.

25
  • 15. Median and Ranking Filter
  • The median filter is a ranking filter: the
    fifth-ranked (median) pixel brightness value of
    the nine pixels under a 3 x 3 mask is selected
    as the output pixel brightness.
  • The resulting image is free of pixel brightness
    values that are extremes in each group of pixels.
  • Applications: The median filter is used to remove
    impulse noise spikes, which appear as bright or
    dark pixels randomly distributed throughout the
    image.
  • Implementation: Collect the brightness values
    under the mask and select the median value as
    the output.

Example: for the pixel group 5, 10, 10, 15, 15, 20, 20, 20, 210, the median (fifth-ranked value) is 15, so the impulse value 210 is discarded.
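A naive 3x3 median filter sketch (border pixels are simply copied through, one common simplification):

```python
import numpy as np

def median_filter(img):
    # 3x3 median filter; border pixels are copied unchanged.
    h, w = img.shape
    out = img.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y-1:y+2, x-1:x+2])
    return out

img = np.full((3, 3), 15, dtype=np.uint8)
img[1, 1] = 210          # single-pixel impulse spike
out = median_filter(img)  # the spike is replaced by the median, 15
```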
26
Median Filtered image showing elimination of
single pixel impulses
27
  • 16. Fourier Frequency Transform Filtering
  • An image is transformed from its original
    spatial-domain brightness representation to a
    frequency-domain representation using the
    Discrete Fourier Transform.
  • Undesired frequency components are eliminated
    from the frequency domain image. The resultant
    frequency domain image is then transformed back
    to the spatial domain. The final image is devoid
    of the removed frequency components.
  • Application: It is used for selective removal of
    periodic noise patterns from an image. Noise
    patterns of this type show up as bright spots in
    the frequency domain image, which are then
    eliminated.
  • Implementation: Take the FFT, define a frequency
    mask using a window and multiply, and finally
    apply the inverse Fourier transform.

28
Noise removal using the Fourier transform; (d) shows the inverse transformed image.
29
  • 17. Translation Operation
  • Shifts the spatial location of image pixels from
    side to side or up and down. Pixels are
    translated in the x and y directions.
  • Applications: This operation is used to shift and
    register the image at the required location. It
    can also be used to correct geometric distortions
    in the image.
  • Implementation: Define x and y translation
    values and perform the operation using
  • x' = x + Tx
  • y' = y + Ty
  • Normally, integer-valued translation is used,
    meaning the image is shifted by a whole number of
    pixels. For non-integer translation, an
    interpolation technique is used to estimate the
    actual pixel intensity values.

30
Translation by ( -50,-40 ) and ( 50, 40)
31
  • 18. Rotation Transformation
  • The rotation transformation rotates the spatial
    location of image pixels about the (0,0) point of
    the image. The image is rotated clockwise by an
    angle θ.
  • Applications: This transformation is used to
    geometrically register multiple images of the
    same object or scene. This operation is normally
    carried out for subsequent image subtraction,
    addition, or other composition operations.
  • This transformation is also used to correct
    geometric distortions created during the image
    acquisition process.
  • Implementation: The source-to-target mapping is
  • x' = x cos θ + y sin θ
  • y' = -x sin θ + y cos θ
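A sketch of the rotation with nearest-neighbour interpolation. It uses target-to-source mapping (the inverse of the equations above) so that every output pixel gets a value; pixels that map outside the source image are set to 0:

```python
import numpy as np

def rotate_nearest(img, theta):
    # Rotate about (0,0) by inverting the forward mapping
    # x' = x cos(t) + y sin(t), y' = -x sin(t) + y cos(t),
    # with nearest-neighbour interpolation.
    h, w = img.shape
    out = np.zeros_like(img)
    c, s = np.cos(theta), np.sin(theta)
    for y in range(h):
        for x in range(w):
            sx = int(round(x * c - y * s))   # source x
            sy = int(round(x * s + y * c))   # source y
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = img[sy, sx]
    return out

img = np.arange(9, dtype=np.uint8).reshape(3, 3)
```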

32
Rotation of 30 degrees and using nearest
neighbor interpolation
33
  • 19. Scaling Transformation
  • The scaling transformation linearly expands or
    reduces the spatial size of an image. Pixels are
    scaled by a factor Sx in the x-direction and Sy
    in the y-direction.
  • Applications: These operations are used to
    geometrically register multiple images of the
    same or different scenes. Geometric adjustments
    can be carried out prior to image combination
    operations such as addition, subtraction,
    division, etc.
  • Implementation: Define the x and y scale factor
    values and carry out the scaling operation. For
    scale values greater than 1, an interpolation
    technique (such as nearest neighbor) must be used
    to estimate the intermediate pixel intensity
    values.
  • x' = x × Sx and y' = y × Sy

34
  • 20. Binary Erosion and Dilation
  • The binary erosion operation uniformly reduces
    the size of white objects on a black background
    in an image. The reduction is by one pixel around
    the object's perimeter.
  • The binary dilation operation uniformly
    increases the size of white objects on a black
    background in an image. The object's size
    increases by one pixel around its perimeter.
  • Applications: Erosion is used to remove small
    anomalies such as single-pixel objects, called
    speckles, and single-pixel-wide spurs from the
    image. Multiple erosion operations on an image
    shrink touching objects until they finally
    separate. This can be useful prior to object
    counting operations.
  • Binary dilation operations are used to remove
    small anomalies such as single-pixel holes in
    objects and single-pixel-wide gaps from an image.

35
  • Multiple applications of the dilation process
    expand broken objects until they finally merge
    into one. This can also be useful prior to
    object counting operations.
  • Implementation: Define a morphological
    structuring element and perform the morphological
    group process on the image.
  • Structuring elements for omni-directional,
    horizontal, vertical and diagonal erosion are
(Masks: omnidirectional, horizontal, vertical, and diagonal erosion structuring elements.)
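With the 3x3 omnidirectional element, one erosion or dilation pass can be sketched as follows (borders are left black for brevity):

```python
import numpy as np

def erode(img):
    # Omnidirectional binary erosion: a white (255) pixel survives
    # only if all eight neighbours are also white.
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if np.all(img[y-1:y+2, x-1:x+2] == 255):
                out[y, x] = 255
    return out

def dilate(img):
    # Omnidirectional binary dilation: a pixel becomes white if any
    # pixel in its 3x3 neighbourhood is white.
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if np.any(img[y-1:y+2, x-1:x+2] == 255):
                out[y, x] = 255
    return out

img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 255            # a 3x3 white square
```

Eroding the square leaves only its centre pixel; dilating that result grows it back, which is exactly the opening operation described later.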
36
Binary erosion and dilation process - a- original
image, b- binarized image after contrast
enhancement, c- eroded image, d- erosion
operation applied two times.
37
  • Structuring elements for the dilation operation
    for omni-directional, horizontal, vertical and
    diagonal are

(Masks: omnidirectional, horizontal, vertical, and diagonal dilation structuring elements.)
38
  • 21. Binary Opening and Closing
  • The binary opening operation is a binary erosion
    operation followed by a binary dilation. Objects
    generally remain at their original size.
  • The binary closing operation is a binary
    dilation operation followed by a binary erosion
    operation. Objects retain their size.
  • Applications: The opening operation eliminates
    small anomalies, such as single-pixel objects and
    single-pixel-wide spurs, from an image. The
    erosion also reduces object size; the following
    dilation expands objects back to their original
    size. This operation is useful for cleaning up
    images with noise and other anomalies.
  • In the closing operation, the dilation eliminates
    single-pixel holes and single-pixel-wide gaps,
    and also increases object size; the following
    erosion reduces objects to their original size.
    It gets rid of object holes and other small
    anomalies.

39
  • 22. Outlining
  • The outlining operation operates on binary images
    by subtracting the eroded version of the image
    from the original one. The resulting image shows
    a single-pixel-wide outline of the original
    objects.
  • Applications: The outlining operation is useful
    as a precursor to object measurement. Using an
    omnidirectional erosion structuring element, this
    operation produces outlines of all objects,
    regardless of their orientation.
  • Multiple erosion operations can be applied prior
    to subtracting the eroded image from the
    original. Additional erosions produce a resulting
    image with wider outline widths.
  • Directional outlines can also be produced using
    directional structuring elements.

40
a- original chip image b- eroded image c-
original image minus eroded image yielding an
outline image
41
  • 23. Gray Scale Erosion and Dilation
  • The gray-scale erosion operation reduces the
    brightness (and therefore the size) of bright
    objects on a dark background.
  • The gray-scale dilation process increases the
    brightness (and therefore the size) of bright
    objects on a dark background.
  • Applications: The erosion operation is used to
    eliminate small anomalies such as single-pixel
    bright spots. Multiple applications of this
    operation shrink touching objects for counting
    purposes.
  • The dilation operation is used to eliminate
    small anomalies such as single-pixel dark spots
    from an image. Multiple applications of the
    operation expand broken objects by brightening
    their perimeters until they finally merge into
    one.
  • The masks are similar to those for binary erosion
    and dilation.

42
  • 24. Morphological Gradient
  • This operation operates on gray-scale images.
    Both eroded and dilated versions are created,
    and the eroded version is subtracted from the
    dilated version. The resulting image shows the
    edges of the objects in the original image.
  • Applications: The morphological gradient
    operation is used to produce the edges of the
    objects in an image. Using the omnidirectional
    erosion and dilation structuring elements, this
    operation produces edges of all objects,
    regardless of their orientation.
  • Other structuring elements, such as horizontal,
    vertical or diagonal, can be used to obtain
    directional edges.
  • Multiple erosion and dilation operations can be
    applied prior to subtracting. This produces
    wider edge widths.

43
a- eroded gray scale image , b- dilated image
minus eroded image, yielding a morphological
gradient image
44
  • 25. Object Shape Measures.
  • Imaged objects are measured to describe their
    characteristic attributes for proper
    identification. Measures such as perimeter,
    area, length, and width can often be sufficient
    for most classification tasks. The list of
    potential measures is endless and is generally
    dictated by the requirements of the application.
  • Applications: Once the image is in the right
    form, the object is measured in some specified
    way prior to classification.
  • The measures are chosen to enable the easiest
    discrimination between the object characteristics
    of interest.
  • For example, area alone may be a good feature.
  • Careful measures enable the automatic
    understanding of images by machine and can
    provide fast processing while maintaining the
    required accuracy.
  • Parameter measurement can be done by simply
    counting pixels.

45
Outlines of nut and bolt:
                  Nut               Bolt
Perimeter         1397 pixels       504 pixels
Area              50,970 pixels     8,135 pixels
Number of holes   0                 1
46
  • 26. Line Segment Boundary Description
  • Images are measured to describe their
    characteristic attributes. When the
    classification task requires a specific
    understanding of an object's shape, object
    boundary descriptors are essential.
  • Variable length line segment boundary
    descriptors break the perimeter of an object into
    discrete line segments and record the length and
    angle of each segment relative to the previous
    segment.
  • With this boundary description technique, the
    object's perimeter can be concisely and
    accurately represented.
  • These features can be compared with stored
    representations to obtain the classification.
  • Adjacent line segments with similar angles or
    with minor length differences have to be
    combined.

47
Segment No. 1: 108 pixels, 198 degrees
Segment No. 2: 33 pixels, 265 degrees
Segment No. 3: 280 pixels, 196 degrees
Segment No. 4: 112 pixels, 280 degrees
Segment No. 5: 278 pixels, 17 degrees
Segment No. 6: 40 pixels, 290 degrees
Segment No. 7: 115 pixels, 17 degrees
Segment No. 8: 175 pixels, 108 degrees
48
  • 27. Gray Level Classification
  • Each image pixel is determined to be inside or
    outside a number of brightness ranges, where each
    range is defined by low and high brightness
    threshold values.
  • If the pixel brightness is between two
    thresholds, the resulting pixel brightness is set
    to a corresponding predetermined value. If it is
    outside all threshold ranges, the resultant
    brightness value is set to 0.
  • This classification operation can be used to
    separate objects with different gray levels into
    different classes.
  • This operation separates multiple objects from
    backgrounds.
  • O(x,y) = 64, if I(x,y) is between the low and
    high thresholds defined for object 1
  • O(x,y) = 128, if I(x,y) is between the low and
    high thresholds defined for object 2
  • O(x,y) = 0, otherwise
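The multi-range mapping can be sketched with boolean masks (the class values follow the slide; the threshold ranges are my own sample values):

```python
import numpy as np

def classify(img, ranges):
    # Map each pixel to a class value if it falls inside that
    # class's [low, high) brightness range; pixels outside all
    # ranges stay 0. `ranges` maps output value -> (low, high).
    out = np.zeros_like(img)
    for value, (low, high) in ranges.items():
        out[(img >= low) & (img < high)] = value
    return out

img = np.array([[30, 100, 200]], dtype=np.uint8)
out = classify(img, {64: (80, 120), 128: (180, 255)})
```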

49
A- original image, b- histogram of the image, c-
mapping function to isolate the text brightness,
d- image after the thresholding operation.
50
  • 28. IMAGE COMPRESSION
  • Run-length Coding
  • It is a lossless image compression technique.
    Across each line of the image, pixel brightness
    values are sequentially compared and grouped
    together into runs of identical brightness.
  • The resultant compressed image data is composed
    of a series of paired brightness and run-length
    values.
  • Example: a run of 25 pixels with the brightness
    value 176 is coded as 176/25. The original 25
    bytes are replaced with two bytes.
  • Applications: Lossless image compression
    techniques are used to reduce the data size of
    image files while preserving the exact image
    brightness values of the original image. For
    lossless schemes like run-length coding, the
    compression ratio is normally lower. Typical
    compression ratios of 2:1 to 10:1 may be
    achieved on binary images and 1.2:1 to 2:1 on
    gray images.
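The 176/25 example can be reproduced with a few lines of plain Python (the encoder and decoder names are mine):

```python
def rle_encode(row):
    # Run-length encode one image line as (value, run_length) pairs.
    runs = []
    prev, count = row[0], 1
    for pixel in row[1:]:
        if pixel == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = pixel, 1
    runs.append((prev, count))
    return runs

def rle_decode(runs):
    # Expand the (value, run_length) pairs back into pixels.
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [176] * 25 + [0] * 3
codes = rle_encode(row)   # 28 pixels become two pairs
```

Decoding the pairs reproduces the line exactly, which is what makes the scheme lossless.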

51
  • 29. Huffman Coding
  • It is also a lossless compression technique.
    Pixel brightness values are replaced with
    variable length codes based on their frequencies
    of occurrence in the image.
  • An image histogram is computed, yielding the
    frequency of occurrence for each brightness
    value. The most frequent brightness level is
    assigned the shortest code. The least frequent
    values are assigned longer codes.
  • Applications: Lossless image compression is used
    to reduce data size while preserving the exact
    image brightness values.
  • Lossless schemes in general have smaller
    compression ratios when compared with lossy
    compression techniques.
  • This technique provides compression ratios
    between 1.5:1 and 4:1 on gray images. On binary
    images better compression is possible. The
    decompressed images are identical to the
    originals.
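A compact Huffman-table builder, sketched with the standard two-smallest-merge algorithm (this builds only the code table; the bit packing needed for a real file format is omitted):

```python
import heapq
from collections import Counter

def huffman_codes(pixels):
    # Build a Huffman code table from pixel frequencies:
    # most frequent values get the shortest codes.
    freq = Counter(pixels)
    if len(freq) == 1:                       # degenerate one-symbol image
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tiebreak, {symbol: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

pixels = [128] * 6 + [64] * 3 + [255] * 1
table = huffman_codes(pixels)
```

Here the most frequent value (128) gets a one-bit code, so the ten 8-bit pixels compress to 14 bits of code data.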

52
  • 30. Truncation coding
  • This scheme is a lossy coding technique. It is
    carried out by reducing the brightness resolution
    or the spatial resolution.
  • Brightness resolution reduction is achieved by
    reducing the number of bits used to represent
    brightness values.
  • Spatial resolution reduction is achieved by
    reducing the number of pixels forming an image
    (M x N).
  • Applications: Lossy image compression techniques
    are used to reduce the data size of an image
    while sacrificing some of the original image
    quality.
  • Lossy schemes like truncation compression
    generally have greater compression ratios.
  • Spatial resolution can be reduced by regularly
    discarding image pixels. Typical compression
    ratios of 2:1 to 10:1 are possible.

53
  • 31. Discrete Cosine Transform Coding(DCT)
  • The DCT image compression technique works on
    8x8 blocks of image pixels. For each block, the
    DCT image (in the frequency domain) is obtained
    in terms of DCT coefficients.
  • Frequency components with minimal values are
    discarded, leaving only those components which
    contribute significantly to the image.
  • The DCT compression scheme generally has higher
    compression ratios.
  • The degradation normally takes the form of high
    frequency component losses. A blur effect is
    visible at times (if many DCT components are
    discarded).
  • This technique can easily give a compression
    ratio of 10:1 for gray images.

54
  • 32. Differential Pulse Code Modulation Coding
  • In this technique, each pixel brightness value
    is replaced by the difference between it and its
    neighbor to the left.
  • Because brightness statistically changes slowly
    over most of an image, difference values can be
    represented with lower resolution than the raw
    brightness values.
  • When the full difference value cannot be
    represented at the lower difference resolution,
    degradation occurs in the compressed image.
  • Implementation:
  • Compute pixel-by-pixel brightness differences
    between current pixels and their preceding
    neighbors.
  • Truncate the brightness difference values to a
    predetermined resolution.
  • A compression ratio of 3:1 is possible without
    serious image degradation.
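The two implementation steps can be sketched as follows. The difference is clamped to +/-15 (an assumed "lower resolution"), and the encoder tracks the decoder's reconstruction so clamping errors do not accumulate:

```python
def dpcm_encode(row, max_diff=15):
    # Encode each pixel as the difference from its left neighbour,
    # clamped to +/- max_diff; the first pixel is stored verbatim.
    out = [row[0]]
    prev = row[0]
    for pixel in row[1:]:
        d = max(-max_diff, min(max_diff, pixel - prev))
        out.append(d)
        prev = prev + d          # track the decoder's reconstruction
    return out

def dpcm_decode(codes):
    # Rebuild pixels by accumulating the differences.
    out = [codes[0]]
    for d in codes[1:]:
        out.append(out[-1] + d)
    return out

row = [100, 103, 105, 104, 160]
codes = dpcm_encode(row)
```

The final jump of 56 cannot be represented at this difference resolution, so it is clamped to 15 and the decoded value degrades to 119, illustrating the lossy case described above.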