1
ECSE-6963, BMED 6961: Cell & Tissue Image Analysis
  • Lecture 14: Feature Selection & Validation
  • Badri Roysam
  • Rensselaer Polytechnic Institute, Troy, New York
    12180

2
Recap: Blob Segmentation
3
Recap: Object Models
[Diagram: Object Features are fed to the Model, which returns a Score]
  • Think of an object model as a wise formula
  • Given a set of object features, the formula
    returns a score
  • Higher scores for valid objects
  • Lower scores for invalid objects
  • Practical issues:
  • Needs sufficiently informative features
  • Needs a well-chosen set of features
  • Usually, the score is normalized to the range
    [0, 1] (a sketch follows this list)
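A minimal sketch of one such score formula, assuming a weighted
feature combination squashed into [0, 1] by a logistic function;
the feature values, weights, and offset below are placeholders,
not values from the lecture:

    % Hypothetical object-model score: weighted feature combination
    % squashed into [0, 1] by a logistic function.
    f = [120; 1.8; 0.6];                  % e.g., [area; elongation; mean intensity]
    w = [0.02; -1.5; 0.8]; b = -0.4;      % placeholder trained weights and offset
    score = 1 / (1 + exp(-(w' * f + b))); % nearer 1 for valid objects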

4
Recap: Model-Based Merging
If the overall score can be improved by a
proposed merge, do it
5
Example: Cervical Smears
6
Features of Nuclei in Cervical Smears
[Feature diagram: Absorption features (optical density and
intensity statistics such as OD mean, OD var, I norm, OD max,
OD int); Texture features (Energy, Correlation, Homogeneity,
Entropy, Contrast, each computed on two channels); Size features
(Area, R mean, R var); Shape features (Clump, Bclump, CHomog,
Elong, Fit, Tort)]
7
The Feature Selection Problem
  • The features that we have studied are only a
    subset of thousands more that can be
    defined/imagined
  • One common question that arises is the following:
  • If I can get 90% accuracy with 3 features, will I
    be better off with 4 features? 5 features? 1,000
    features?
  • Issue 1: Additional features may not have enough
    additional discriminatory value, or may introduce
    distractions (noise)
  • Issue 2: The "curse of dimensionality" is a
    phrase used by Richard Bellman to describe a
    problem we run into:
  • The complexity of the computation can grow
    exponentially with the number of features
  • We need 100 points to sample the one-dimensional
    interval [0, 1] with a spacing of 0.01
  • Now if we try to achieve the same spacing on the
    10-dimensional unit hypercube [0, 1]^10, then
    we'll need 10^20 samples!
  • In a sense, the 10-dimensional hypercube is a
    factor of 10^18 "larger" than the 1-dimensional
    unit interval!
  • Bottom line:
  • If we consider too many features, we'll need far
    too many examples to estimate the model
    parameters (see the check after this list)
  • Becomes computationally expensive fast
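The slide's arithmetic, as a quick check in MATLAB:

    % Grid points needed at spacing 0.01 on the unit interval in d dimensions
    spacing = 0.01;
    n1  = (1/spacing)^1;    % 100 samples in 1-D
    n10 = (1/spacing)^10;   % 1e20 samples in 10-D: a factor of 1e18 more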

8
Good Features
  • A good feature:
  • Is significantly different for each class
  • Has a small variance/spread within each class
  • Maximizes the Fisher discriminant ratio (a sketch
    of this ratio follows the figure note below)
  • Is not correlated with another feature in use
  • Correlated features are redundant, and increase
    dimensionality without adding value

[Scatter plots contrasting a "Bad" feature (overlapping class
distributions) with a "Good" feature (well-separated classes)]
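A minimal sketch of the Fisher discriminant ratio for a single
feature over two labeled classes, using one common two-class form
of the ratio; the data here are synthetic:

    % Fisher discriminant ratio: between-class separation over
    % within-class spread. Larger is better.
    x1 = randn(50, 1);        % feature values, class 1 (synthetic)
    x2 = randn(50, 1) + 2;    % feature values, class 2 (synthetic)
    fdr = (mean(x1) - mean(x2))^2 / (var(x1) + var(x2));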
9
Discriminating Ability
  • We start by examining the discriminating power of
    each feature independently
  • Qualitative method:
  • Clear separation of classes on a scatter plot or
    histogram
  • Quantitative method:
  • Start with a LABELED scatter plot
  • Define two hypotheses:
  • H0: The values of the feature do not differ
    significantly
  • (Null hypothesis)
  • H1: The values of the feature differ
    significantly
  • (Alternative hypothesis)
  • The term "significantly" is quantified by a
    significance level α.

10
Gaussian Basics
If we gather up enough numbers together, their
average will tend to be Gaussian distributed (the
Central Limit Theorem).
The Gaussian falls off rapidly: 95% of the samples
fall within two standard deviations of the mean!
11
Probabilities Tables
Normalization
To calculate the probability that x falls in a
certain interval, we need to integrate the
Gaussian over that interval. This is needed
frequently in statistics. Normalize to zero mean
and unit variance, z = (x − μ)/σ, and look up a
table of integrals for N(0, 1).
[Figure: significance level, acceptance interval,
and an old-fashioned N(0, 1) table]
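In MATLAB, the table lookup becomes a call to normcdf (Statistics
Toolbox); a sketch of the interval probability described above:

    % P(a <= x <= b) for x ~ N(mu, sigma^2), via the standard normal CDF
    mu = 0; sigma = 1; a = -2; b = 2;
    p = normcdf((b - mu)/sigma) - normcdf((a - mu)/sigma);   % ~0.9545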
12
The Sample Mean & Variance are Random Variables
Suppose that they are Gaussian distributed, and
mutually independent
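The slide's formulas were images in the original; the standard
definitions they presumably showed are:

    \bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i, \qquad
    s^2 = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2, \qquad
    \bar{x} \sim \mathcal{N}\!\left(\mu, \tfrac{\sigma^2}{N}\right)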
13
Hypothesis Testing
Suppose that we have just two classes for now.
14
Hypothesis Testing
Test statistic:
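The equation here was an image in the original; the standard
two-sample statistic it presumably showed, for sample means over
N and M points with known variances, is:

    q = \frac{\bar{x} - \bar{y}}{\sqrt{\sigma_x^2/N + \sigma_y^2/M}}
        \sim \mathcal{N}(0, 1) \quad \text{under } H_0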
15
Significance
Suppose we choose a 95% significance level; the
acceptance interval for the standardized statistic
is then [−1.96, 1.96].
If q falls in the above range, decide H0; else
decide H1.
16
Case when variances are unknown
We can no longer use the Gaussian table; we need to
use the t-distribution table instead. It needs two
numbers to look up:
  • DOF (degrees of freedom)
  • Equal/unequal variance
In MATLAB: H = ttest2(x, y, alpha, tail, vartype),
where alpha is the significance level (typically 5%).
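A runnable sketch of the call above on synthetic data, using the
name-value form of ttest2, equivalent to the positional call:

    % Two-sample t-test at the 5% significance level (synthetic data)
    x = randn(30, 1);         % feature values, class 1
    y = randn(30, 1) + 0.8;   % feature values, class 2
    [h, p] = ttest2(x, y, 'Alpha', 0.05, 'Tail', 'both', 'Vartype', 'unequal');
    % h = 1 rejects H0 (the feature discriminates); p is the p-value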
17
Discriminant Functions
  • A function of the features that allows us to
    discriminate between classes
  • Generalization of the likelihood ratios and
    thresholds

Linear Discriminant

The sign of the discriminant tells us the decision
(a sketch follows).
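A minimal sketch, assuming the weights w and offset w0 of a
standard linear discriminant g(x) = w'x + w0 have already been
trained; the values here are placeholders:

    % Linear discriminant g(x) = w'*x + w0; the sign gives the decision
    w  = [0.8; -0.5];             % hypothetical trained weights
    w0 = 0.1;                     % hypothetical offset
    x  = [1.2; 0.3];              % feature vector of one object
    decision = sign(w' * x + w0); % +1: one class, -1: the other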
18
The Next Step
  • The features that pass the individual hypothesis
    tests could still have correlations among
    themselves
  • Correlation implies redundancy, and wasted
    dimensions
  • Procedure (a greedy sketch follows this list):
  • Pick the single best feature
  • Try each remaining feature one at a time, and add
    the one that gives the best improvement
  • Repeat until:
  • The last added feature does not add enough
    improvement to justify an extra dimension
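A greedy forward-selection sketch of the procedure above;
evalSubset is a hypothetical scoring function (e.g., cross-validated
accuracy on the chosen feature columns), and nFeatures and minGain
are assumed values, not from the lecture:

    % Greedy forward feature selection (sketch)
    nFeatures = 20;  minGain = 0.005;          % assumed values
    selected  = [];  remaining = 1:nFeatures;  bestScore = -inf;
    while ~isempty(remaining)
        scores = arrayfun(@(f) evalSubset([selected f]), remaining);
        [topScore, k] = max(scores);
        if topScore - bestScore < minGain
            break                 % not enough gain to justify a dimension
        end
        bestScore = topScore;
        selected  = [selected remaining(k)];
        remaining(k) = [];
    end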

19
Stepwise Discriminant Analysis
  • We can come up with a selection method that goes
    the other way:
  • Start off with all features
  • Remove one feature at a time
  • Continue as long as performance remains acceptable
  • Stepwise discriminant analysis is a method that
    combines the top-down and bottom-up approaches
  • Generally, it is not worth writing our own code
  • Better to use commercial packages
  • The above approaches are still sub-optimal
  • THE VERY BEST approach is to exhaustively
    consider all subsets of features and pick the
    best one.
  • This is very expensive. For example, N features
    have 2^N − 1 non-empty subsets, so N = 30 already
    means over a billion subsets to evaluate.

20
How do we Test our Model?
Why bother? Because our model should hold up
over images that we haven't processed yet!
[Diagram: from a batch of images, some are selected
for feature computation and labeling; the rest are
to be processed automatically]
Select a subset of features, build a discriminant
based on them, and evaluate its effectiveness
over the remaining images (a hold-out sketch
follows).
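A hold-out evaluation sketch in MATLAB, assuming a feature matrix
X (objects by features) and a numeric label vector; fitcdiscr
trains a linear discriminant (Statistics Toolbox):

    % Hold-out evaluation: train on labeled data, test on the rest
    c    = cvpartition(labels, 'HoldOut', 0.3);               % 30% held out
    mdl  = fitcdiscr(X(training(c), :), labels(training(c))); % linear discriminant
    pred = predict(mdl, X(test(c), :));
    acc  = mean(pred == labels(test(c)));                     % held-out accuracy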
21
Features of Nuclei in Cervical Smears
[Feature diagram repeated from slide 6: Absorption, Texture, Size,
and Shape features of nuclei in cervical smears]
22
Feature Sets Compared
[Diagram: six feature sets compared, namely 2-D and 3-D features,
each computed from raw data, nearest-neighbor deblurred data, and
Wiener-filter deblurred data]
23
Classification Results with Linear Discriminant Classifier
[Chart: percent correct (78–86%) versus features used, for 2-D and
3-D features computed from raw, nearest-neighbor deblurred, and
Wiener-filter deblurred data]
24
Stepwise Linear Discriminant Analysis Results
Rank  2-D      2-D Nearest  2-D Wiener  3-D      3-D Nearest  3-D Wiener
               Neighbor     Filter               Neighbor     Filter
1     R mean   R mean       I norm      R mean   R mean       I norm
2     Corr2    Corr2        OD var      Corr2    Corr2        OD int
3     Clump    Clump        R mean      Homog1   CHomog       OD mean
4     CHomog   Homog2       Entropy1    CHomog   Clump        CHomog

Moral: The relative importance of features can be
affected by pre-processing.
25
Validation and Performance Assessment
  • Validation
  • Is the software system's output valid?
  • Essential to adoption by biologist/clinician
  • Performance Assessment
  • Exactly how well is the software working?
  • Surprisingly tricky issue given the subjectivity
    and variability of people
  • Inter-subject variability
  • Intra-subject variability

26
The Gold Standard
  • The human visual system is still the gold
    standard for image analysis systems
  • Ask one or more human observers to manually
    analyze the image
  • For multi-observer case
  • Develop a single consensus opinion
  • This becomes the Gold Standard
  • Compare the software output against the gold
    standard, and measure concordance

27
Classical Multiple-Observer Validation
[Diagram: observers 1 through N each produce a manual segmentation
of the image; a consensus is built from these and compared
quantitatively against the automated segmentation, yielding
measure(s) of automated segmentation performance]
Appropriate for validating novel algorithms.
28
Things that commonly go wrong
  • Poor data quality
  • Damaged specimen
  • Misshapen objects
  • Fragments
  • Poor image quality
  • Noise
  • Spectral bleed-through
  • Partially-imaged nuclei
  • Types of segmentation errors:
  • Miss
  • Inaccurate boundary
  • False segmentation
  • Under-segmentation
  • Over-segmentation
  • Separation errors

Go back to the microscope if at all possible
29
Handling Partial Objects
  • Usually, they need to be deleted based on their
    features:
  • Location (close to border)
  • Size (less than modeled value)
  • Brick Rule (a sketch follows this list):
  • Define an interior sub-volume in the image
  • Only accept cells that are wholly contained in it
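A sketch of the brick rule, assuming a struct array cells with a
3-D BoundingBox field [x y z w h d] (as produced by regionprops3)
and image dimensions nx, ny, nz; the margin is an assumed value:

    % Brick rule: keep only cells wholly inside an interior sub-volume
    margin = 10;                           % border width in voxels (assumed)
    lo = [1 1 1] + margin;                 % brick lower corner
    hi = [nx ny nz] - margin;              % brick upper corner
    keep = false(size(cells));
    for k = 1:numel(cells)
        bb = cells(k).BoundingBox;         % [x y z w h d]
        keep(k) = all(bb(1:3) >= lo) && all(bb(1:3) + bb(4:6) <= hi);
    end
    cells = cells(keep);                   % delete partial objects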

30
Outlier Detection
Outliers are good candidates for further
inspection.
31
Color-codes for highlighting errors
Any measure of the quality of fit to the object
model, p(X), can serve as a tool for highlighting
errors. Red: potentially awful; Yellow:
questionable; Green: okay. (A sketch follows.)
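A sketch of the traffic-light coding; the 0.3 and 0.7 thresholds
are assumptions for illustration, not values from the lecture:

    % Map a model-fit score p in [0, 1] to an RGB highlight color
    if     p < 0.3, color = [1 0 0];   % red: potentially awful
    elseif p < 0.7, color = [1 1 0];   % yellow: questionable
    else,           color = [0 1 0];   % green: okay
    end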
32
Explanatory Display Coding
  • Make it easy for the user to separate unhandled
    errors from handled errors
  • One idea is to display mini explanation codes
  • Display detailed explanations when a user clicks
    on a cell or rests the mouse over it
  • Keep a record trail of all operations that led
    to each object

33
Object Separation Error Example
[Figure: split error example; the gallery view indicates 3 objects
where cells were incorrectly split]
34
Editing the Output
  • Add Object
  • Split/Merge
  • Dilate/Shrink
35
Edit-Based Validation Protocol
[Diagram: the image is segmented automatically, then inspected and
edited by observer 1, followed by a supervisory inspect & edit
pass; the records of edits are kept, and the result is a verified,
corrected segmentation]
Edits not made are implicitly interpreted as
correct results.
Compute statistics on the recorded edit operations
to obtain measure(s) of automated segmentation
performance.
  • Much less effort compared to multi-observer
    validation
  • Subtly different from multi-observer validation,
    but a good approximation
  • Appropriate for mature algorithms in routine usage

36
Algorithm to add an object
  • Seeded Region Growing (a sketch follows this list)
  • User clicks on a point on the object to be added
  • Initialize a connected component with this point
  • Examine each neighboring pixel
  • If its intensity is within X of the starting
    point, include it in the connected component
  • X is a tolerance set by the user
  • Other criteria can be included
  • Stop when there are no more points to add
  • Flexibility in designing stopping criteria!
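A minimal MATLAB sketch of the algorithm for a 2-D grayscale image;
the function and parameter names are assumptions for illustration:

    % Seeded region growing: grow a mask from the clicked pixel,
    % adding 4-connected neighbors within tolerance tol of the seed.
    function mask = growRegion(I, seedR, seedC, tol)
        [nr, nc] = size(I);
        mask = false(nr, nc);
        ref  = I(seedR, seedC);           % intensity at the starting point
        queue = [seedR seedC];
        mask(seedR, seedC) = true;
        while ~isempty(queue)             % stop when no more points to add
            p = queue(1, :);  queue(1, :) = [];
            for d = [0 1; 0 -1; 1 0; -1 0]'
                r = p(1) + d(1);  c = p(2) + d(2);
                if r >= 1 && r <= nr && c >= 1 && c <= nc && ...
                        ~mask(r, c) && abs(I(r, c) - ref) <= tol
                    mask(r, c) = true;    % include in the connected component
                    queue(end+1, :) = [r c]; %#ok<AGROW>
                end
            end
        end
    end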

37
The need to record edits
  • Often, cell segmentation is performed in a
    pharmaceutical and/or legal setting
  • Much is at stake; we need protection from cheating
    and carelessness!
  • The Food and Drug Administration has regulations
    and guidelines, generally called Good Laboratory
    Practices (GLP)
  • Bottom Line:
  • When an edit is made, save the original data, and
    allow rollback (undo)
  • Record the time stamp, the identity of the person
    making the edit, and an explanatory note for the
    inspector (a sketch of such a record follows)
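A sketch of one edit record; the field names and variables here
are assumptions for illustration, not a prescribed GLP format:

    % One entry in the audit trail: enough to undo and to explain
    rec.timestamp = datestr(now, 'yyyy-mm-dd HH:MM:SS');
    rec.user      = 'jdoe';             % identity of the person editing
    rec.operation = 'merge';            % e.g., add/split/merge/dilate/shrink
    rec.objectIDs = [17 18];            % objects affected
    rec.before    = savedSegmentation;  % original data, enables rollback
    rec.note      = 'Touching nuclei merged per supervisor review.';
    auditTrail(end+1) = rec;            % append to the record trail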

38
Edits are Valuable!
  • The edit rate is a direct indicator of software
    performance
  • Basis for edit-based validation
  • The types of edits indicate the most common types
    of errors being made by the software
  • Basis for software revision
  • They also indicate the kinds of images and
    objects for which errors are occurring
  • Sometimes, a good basis for improving specimen
    preparation and imaging steps

39
Summary
  • The feature selection problem
  • Need to select the few really good ones
  • The curse of dimensionality
  • Multiple-Observer Validation and performance
    assessment
  • Technical dimension
  • Human dimension
  • Legal dimension

40
Instructor Contact Information
  • Badri Roysam
  • Professor of Electrical, Computer, and Systems
    Engineering
  • Office: JEC 7010
  • Rensselaer Polytechnic Institute
  • 110 8th Street, Troy, New York 12180
  • Phone: (518) 276-8067
  • Fax: (518) 276-8715
  • Email: roysam@ecse.rpi.edu
  • Website: http://www.ecse.rpi.edu/roysam
  • Course website: http://www.ecse.rpi.edu/roysam/CTIA
  • Secretary: Laraine Michaelides, JEC 7012, (518)
    276-8525, michal@.rpi.edu
  • Grader: Piyushee Jha (jhap@rpi.edu)

Center for Subsurface Sensing and Imaging Systems