Content-based image retrieval: a medical perspective (presentation transcript)

1
Content-based image retrieval: a medical perspective
  • Henning Müller
  • (Thomas Deselaers)
  • HES SO//Valais
  • Sierre, Switzerland

2
Overview
  • Introduction/motivation
  • Non-medical and medical
  • Content-based image retrieval
  • Features, techniques, etc.
  • Text-based image retrieval, multimodal access
  • Relevance feedback
  • Applications
  • Evaluation
  • Demo, questions, answers

3
Henning Müller
  • Diploma in Medical Informatics
  • University of Heidelberg (1992-1997)
  • Daimler Benz research and technology
  • Portland, OR, USA (1997-1998)
  • PhD in image analysis (content-based retrieval)
  • University of Geneva (1998-2002)
  • Part of the work in Melbourne, Australia (2001)
  • Medical image analysis and information systems
  • University and Hospitals of Geneva (2002-)
  • HES SO Valais, Sierre (2007-)

4
University hospitals of Geneva
  • 2,200 beds, 6 hospitals
  • >80,000 images produced per day
  • All available in digital form
  • 6,000 computers
  • Budget >CHF 1 billion/year
  • Computerized information system since the 70s
  • Medical informatics vs. infrastructure informatics

5
Service of medical informatics
  • 70 employees, part of the radiology department
  • 10 persons in research
  • Research areas
  • Multimedia electronic patient record
  • Decision support systems
  • Telemedicine, especially with African countries
  • Knowledge representation, natural language
    processing, data mining
  • Image processing, PACS, operation planning

6
HES SO
  • University of Applied Sciences Western
    Switzerland, Sierre
  • Business Information Systems
  • eHealth section (retrieval, interoperability, ...)
  • eServices
  • Software development
  • Henning.mueller@hevs.ch

7
MedGIFT project
[Project diagram: Talisman (fracture retrieval), @neurIST, ImageCLEF, KnowARC;
themes: data access and standardization; application and implementation;
evaluation and validation; infrastructures and computing architectures]
8
Introduction / Motivation
9
Some questions!
  • Who works in image processing or image analysis?
  • Who works on information or document retrieval?
  • Who knows what content-based image retrieval
    really is?
  • Who has already worked on image retrieval?
  • Who has already worked in the medical field?

10
General Content-based image retrieval
  • Amount of visual data produced has risen strongly
  • Cheap digital cameras
  • Digital images do not have the per-image costs
    that paper prints had
  • Not everything is well annotated
  • Personal collections, journalist archives
  • Image sharing on the web
  • Flickr, YouTube, ...
  • Goals of annotation and retrieval do not always
    match, some things are hard to express
  • Some data can be obtained automatically (GPS)

11
Images on Flickr
12
(No Transcript)
13
Goals of visual retrieval
  • Retrieval of images by visual (objective) means
  • Features extracted fully automatically
  • Colours
  • Textures
  • Shapes
  • Interest points, ...
  • Query by image example(s), QBE
  • "Find me images visually similar to this one"

14
Simplified overview
[Diagram: each image is represented by a feature vector (Colour 1, Colour 2, ...,
Texture N) and stored in a database; the user queries with an example image whose
features are compared against the database, and relevance feedback refines the result]
15
Problems of image retrieval
  • Sensory gap
  • Taking an image means a loss of data compared to
    reality (limited resolution, no 3D, ...)
  • Semantic gap
  • Features automatically extracted may not
    correspond to human semantic search categories
  • Page zero problem of query formulation
  • How can an image for QBE be found?
  • Result: use text wherever available!

16
How we would like to see image retrieval
17
... and how it is perceived
18
Medical image retrieval
  • Situation is often different from photos as
    images are the first things available for a
    patient
  • No page zero problem, but often no colour
  • Search currently mainly by patient name/ID
  • Legal constraints limit visual navigation
  • Images alone are most often of limited interest
  • Often impossible to interpret without context
  • Content vs. context
  • Very small region of interest, often rather a
    detection task

19
Content vs. context: healthy patients
  • 25 year old man
  • homogeneous tissue
  • 88 year old man
  • lower mean density
  • pre-fibrotic lesions

20
Image classification vs. image retrieval
  • Retrieval
  • No or almost no training data
  • No clearly defined tasks a priori; relevance for
    a user can be subjective
  • Classification
  • Large amount of training data available
  • Limited number of categories are well defined
  • Detection
  • Objects, concepts, presence, place, size, ...

21
Content-based retrieval
22
General system overview
23
A more detailed view
24
Interface
[Screenshot of the retrieval interface: query image, diagnosis link to the
teaching file, link to the full-size image, user relevance feedback, similarity score]
25
QBIC - Query By Image Content
  • IBM, commercial product, 1993
  • Add-on for DB2
  • Simple color, texture, layout features
  • Very simple feedback

26
Viper/GIFT
  • http://viper.unige.ch/
  • http://www.gnu.org/software/gift/
  • Visual Information Processing for Enhanced
    Retrieval, project of the University of Geneva
  • The outcome of the project is GIFT, which is open
    source
  • Continuation at several places

27
Characteristics of GIFT
  • MRML-based communication interface
  • KMRML in Konqueror
  • Plugin for Gimp
  • User interfaces in Java, PHP, CGI/perl
  • Components can be exchanged relatively easily
  • For example the features
  • Uses technologies known from text retrieval
  • Tools to index directory trees, generate inverted
    files, etc.

28
MRML - Multimedia Retrieval Markup Language
  • Standardized access to visual search engines
  • http://www.mrml.net/
  • Communication is in XML, server waits at a port
  • Component-based structure

<mrml session-id="1" transaction-id="44">
  <query-step session-id="1" resultsize="30"
      algorithm-id="algorithm-default">
    <user-relevance-list>
      <user-relevance-element
          image-location="http://viper/1.jpg"
          user-relevance="1"/>
      <user-relevance-element
          image-location="http://viper/2.jpg"
          user-relevance="-1"/>
    </user-relevance-list>
  </query-step>
</mrml>
29
Visual features used
  • Global colour histogram (HSV: 18 hues, 3
    saturations, 3 values, plus 4 grey levels)
  • Colour blocks at different scales and locations
  • Histogram of Gabor filter responses
  • 4 directions, 3 scales, quantized in 91
    strengths
  • Gabor blocks at smallest scale
  • ~85,000 possible features, 1,000-3,000 features
    per image; the distribution is similar to words in text

30
The inverted file
[Diagram of an inverted file: each feature (Feature 1, Feature 2, ..., Feature n-1,
Feature n) points to the list of images that contain it, e.g.
Feature n -> Images 2, 17, 12, 3]
31
Inverted file
  • Access feature by feature instead of image by
    image
  • Extremely fast access for rare features
  • Efficient for sparsely populated feature spaces
    (see the sketch below)
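A minimal sketch (not GIFT's actual implementation) of how an inverted file
answers a query over sparse binary features; the feature IDs and image names
below are made up for illustration:

from collections import defaultdict

# Toy inverted file: each feature ID maps to the list of images containing it.
# It is built image by image, but queried feature by feature.
inverted_file = defaultdict(list)
image_features = {
    "img1": {3, 17, 42},      # sparse binary features per image (made-up IDs)
    "img2": {3, 99},
    "img3": {17, 42, 99},
}
for image, features in image_features.items():
    for f in features:
        inverted_file[f].append(image)

def query(query_features):
    # Score images by counting shared features with the query (no weighting yet).
    scores = defaultdict(int)
    for f in query_features:
        for image in inverted_file.get(f, []):   # only images that actually have f are touched
            scores[image] += 1
    return sorted(scores.items(), key=lambda item: -item[1])

print(query({3, 42}))   # e.g. [('img1', 2), ('img2', 1), ('img3', 1)]; the order of ties may vary

Only the posting lists of the features present in the query are read, which is
why rare features are especially cheap to use.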

32
Feature weighting
  • Classical tf/idf weighting (a sketch follows below)
  • tf - term frequency
  • (number of occurrences of a term in a document)
  • cf - collection frequency / document frequency
  • j - feature number
  • q - query with i = 1..N input images
  • k - possible result image
  • R - relevance of an image for a query
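The formula itself is only shown as a figure in the original slides; as a sketch,
a classical tf/idf-style relevance score using the symbols above could be written
as follows (generic textbook form, not necessarily the exact GIFT weighting):

R(q,k) = \frac{1}{N} \sum_{i=1}^{N} \sum_{j} \mathrm{tf}_{i,j}\,\mathrm{tf}_{k,j}
         \left( \log \frac{1}{\mathrm{cf}_j} \right)^{2}

Here tf_{i,j} is the frequency of feature j in input image i, tf_{k,j} its
frequency in the candidate image k, and the logarithmic idf term boosts features
that are rare in the collection (small cf_j).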

33
General views on the concept of feature
  • Features are numerical values computed from each
    image (continuous values)
  • View connected to image classification
  • Ideas and methods from classification and machine
    learning
  • Inspired by k-nearest neighbor approach
  • Features are image properties that are present or
    absent
  • View connected to textual information retrieval
  • Ideas and methods from text retrieval

34
Visual properties of images
  • Color
  • Texture
  • Shapes
  • Image parts
  • Complete image
  • Meta data
  • Textual labels/captions/annotations
  • Global vs. local

35
Features
  • Global Descriptors
  • Color histograms
  • Texture Features (Gabor filters, Wavelets,
    Co-occurrence Matrices)
  • Shape Features (Moments, ...)
  • Local Descriptors
  • Direct approach, partitioning
  • Patch-histograms / bag-of-visual words
  • SIFT features

36
Color Histograms Example
RGB color space
HSV color space
Visualization done with 3D Color Inspector
http://rsbweb.nih.gov/ij/plugins/color-inspector.html
37
Color Histograms Example
RGB color space
HSV color space
Visualization done with 3D Color Inspector
http://rsbweb.nih.gov/ij/plugins/color-inspector.html
(a histogram sketch follows below)
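A minimal sketch of computing a global HSV colour histogram like the ones
visualized above, assuming Pillow and NumPy; the 8x4x4 bin layout is an
arbitrary choice for illustration, not the quantization of any particular system:

import numpy as np
from PIL import Image

def hsv_histogram(path, bins=(8, 4, 4)):
    # Global HSV colour histogram, L1-normalized.
    img = Image.open(path).convert("HSV")            # Pillow converts RGB to HSV directly
    h, s, v = np.asarray(img).reshape(-1, 3).T       # one row per pixel, then split channels
    hist, _ = np.histogramdd((h, s, v), bins=bins, range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def intersection(h1, h2):
    # Histogram intersection: one simple similarity measure for such histograms.
    return np.minimum(h1, h2).sum()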
38
Texture Features
  • Texture refers to the properties held and
    sensations caused by the external surface of
    objects received through the sense of touch.
  • Only makes sense in homogeneous areas
  • Texture in image processing
  • Various definitions
  • Different representations

39
Tamura Features
  • Proposed by Tamura et al. in 1978
  • Features corresponding to human perception
  • Examined 6 features, 3 of them corresponding to
    human perception
  • Coarseness: coarse vs. fine
  • Contrast: high vs. low
  • Directionality: directional vs. non-directional
  • Linelikeness: line-like vs. non-line-like
  • Regularity: regular vs. irregular
  • Roughness: rough vs. smooth

40
Gabor Features
  • Obtain several values per pixel denoting spatial
    frequencies and directions

41
Gabor Features
  • Windowed Fourier transform with a Gaussian
    as the window function (see the formula below)
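For reference, the standard textbook form of such a 2D Gabor filter (not copied
from the slide) is a complex sinusoid windowed by a Gaussian:

g_{\theta,\lambda}(x,y) = \exp\!\left( -\frac{x'^2 + \gamma^2 y'^2}{2\sigma^2} \right)
                          \exp\!\left( i \left( 2\pi \frac{x'}{\lambda} + \psi \right) \right),
\quad x' = x\cos\theta + y\sin\theta, \quad y' = -x\sin\theta + y\cos\theta

where \theta is the orientation, \lambda the wavelength (scale), \sigma the width
of the Gaussian window, \gamma the aspect ratio and \psi the phase; a filter bank
over several orientations and scales yields the per-pixel responses mentioned above.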

42
Gray-Level Co-Occurrence Matrices
  • Statistical descriptor for texture properties of
    an image, obtained by comparing neighboring pixels
  • Defined by a direction and a distance
  • Extract features from the matrix (see the sketch below)
  • Entropy
  • Contrast
  • Correlation
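A minimal pure-NumPy sketch of a gray-level co-occurrence matrix for one
direction/distance, with contrast and entropy derived from it; the quantization
to 8 gray levels is an arbitrary choice for illustration:

import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    # Co-occurrence probabilities of quantized gray levels for neighbors at offset (dx, dy).
    q = np.clip((image.astype(np.float64) / 256.0 * levels).astype(int), 0, levels - 1)
    m = np.zeros((levels, levels))
    height, width = q.shape
    for y in range(height - dy):
        for x in range(width - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    i, j = np.indices(p.shape)
    return np.sum((i - j) ** 2 * p)

def entropy(p):
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))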

43
Pixel Values as Features
  • The most straightforward approach
  • Scale all images to a common size
  • Compare using e.g. the Euclidean distance (see the
    sketch below)
  • Pixel-wise
  • Multi-scale representations
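A minimal sketch of that comparison, assuming Pillow and NumPy; the 32x32 size
matches the thumbnails in the feature list shown later but is otherwise an
arbitrary choice:

import numpy as np
from PIL import Image

def thumbnail_vector(path, size=(32, 32)):
    # Scale the image to a common size and use the raw gray values as a feature vector.
    img = Image.open(path).convert("L").resize(size)
    return np.asarray(img, dtype=np.float64).ravel()

def euclidean(a, b):
    return np.linalg.norm(a - b)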

44
Image Distortion Model
  • Allow for small local displacements

45
Shape: the GIST descriptor
GIST descriptor: Oliva and Torralba, IJCV 2001
Slide by James Hays and Alexei Efros
46
Local Descriptors
  • Various types
  • Features extracted from local regions
  • Patches, SIFT features, local color histograms, ...
  • Extraction position determined by interest points
  • Known to achieve good results in many tasks
  • Active field of research in object recognition,
    detection, scene classification, image annotation
  • More recently image retrieval

47
Interest Points
48
Local Descriptors Direct Retrieval
49
Histograms of Local Descriptors
50
(No Transcript)
51
Correlation between Features
Legend: 1 color histogram, 2 MPEG-7 color layout, 3 LF SIFT histogram,
4 LF SIFT signature, 5 LF SIFT global search, 6 MPEG-7 edge histogram,
7 Gabor vector, 8 Gabor histogram, 9 gray value histogram, 10 global
texture feature, 11 inv. feat. histogram (color), 12 LF patches global,
13 LF patches histogram, 14 LF patches signature, 15 inv. feat.
histogram (rel.), 16 MPEG-7 scalable color, 17 Tamura, 18 32x32 image,
19 X x 32 image
52
Correlation Between Features
53
Combining Features
  • Manually tuned
  • Have an expert find a proper set of parameters
  • Heuristic to capture different image properties
  • Combination to reflect human perception
  • Combination to obtain optimal performance
  • (given a set of training queries)

54
Combining Features to Capture Different Image Properties
  • Given the result from the correlation analysis,
    first choose a simple feature
  • Then add features which have low correlation

Color Histogram: 50.5% MAP
Global Texture Features: 49.5% MAP
Tamura Texture Histogram: 51.2% MAP
Image Thumbnails: 53.9% MAP
Patch Histograms: 55.7% MAP
55
Combining Features Reflecting Human Perception
56
Available Resources
  • Image Retrieval Systems
  • FIRE - Flexible Image Retrieval Engine
  • http://www-i6.informatik.rwth-aachen.de/deselaers/fire/
  • Research image retrieval system
  • Developed to allow for easy extension
  • Following the continuous (feature-vector) approach
  • OpenCV computer vision library
  • http://sourceforge.net/projects/opencvlibrary/
  • Implements many image processing operations
  • Face detection and recognition, feature extraction

57
Efficient access methods to features
  • Query time should be below 1 s
  • Ideally below 0.1 s
  • Many methods to reduce the search space
  • PCA, ICP, ...
  • Database community has many index and access
    methods
  • Different trees
  • Inverted files for sparse feature spaces

58
Execution times for query per 100 features
59
Final top ten of the retrieval
60
Text-based retrieval
61
Text retrieval (of images)
  • Started in the early 1960s, for images in the 1970s
  • Not the main focus of this talk
  • Text retrieval is old!!
  • Many techniques in image retrieval are taken from
    this domain (reinvented)
  • It becomes clear that the combination of visual
    and textual retrieval has the biggest potential
  • Good open-source text retrieval engines exist

62
Problems with annotation
  • Many things are hard to express
  • Feelings, situations, ... (what is scary?)
  • What is in the image, what is it about, what does
    it invoke?
  • Annotation is never complete
  • Plus it depends on the goal of the annotation
  • Many ways to say the same thing
  • Synonyms, hyponyms, hypernyms,
  • Mistakes
  • Spelling errors, spelling differences (US vs.
    UK), weird abbreviations (particularly medical ones)

63
Principal techniques used
  • Words follow basically a Zipf distribution
  • Tf/idf weightings
  • A feature frequent in a document describes it
    well
  • A feature rare in a collection has a high
    discriminative power
  • Many variations of tf/idf (see also
    Salton/Buckley paper)
  • Use of inverted files for quick query responses
  • Relevance feedback, query expansion, ...

64
Zipf distribution (Wikipedia example)
  • X: rank
  • Y: number of occurrences of the word

65
Techniques used in text retrieval
  • Bag-of-words approach (see the sketch below)
  • Or N-grams can be used
  • Stop words can be removed (based on frequency or
    list)
  • Stemming can improve results
  • Stemmers exist for several languages
  • Named entity recognition
  • Spelling correction (also umlauts, accents, ...)
  • Mapping of text to a controlled
    vocabulary/ontology
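A minimal sketch of the first steps above (tokenization, stop-word removal, and
a crude suffix stripper standing in for a real stemmer such as Porter's); the
stop-word list and suffix rules are illustrative only:

import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "and", "in", "is", "with"}   # illustrative list only

def crude_stem(word):
    # Stand-in for a real stemmer: strip a few common English suffixes.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def bag_of_words(text):
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(crude_stem(t) for t in tokens if t not in STOP_WORDS)

print(bag_of_words("Fractures of the tibia and fractured bones"))
# Counter({'fractur': 2, 'tibia': 1, 'bon': 1}) - the crude stemmer conflates both fracture forms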

66
Medical terminologies
  • MeSH, UMLS are frequently used
  • Mapping of free text to terminologies
  • Quality for the first few is very high
  • Links between items can be used
  • Hyponyms, hypernyms, ...
  • Several axes exist (anatomy, pathology, ...)
  • This can be used for making a query more
    discriminative
  • This can also be used for multilingual retrieval

67
WordNet
  • Hierarchy, links, definitions in the English language
  • Maintained at Princeton
  • Car, auto, automobile, machine, motorcar
  • motor vehicle, automotive vehicle
  • vehicle
  • conveyance, transport
  • instrumentality, instrumentation
  • artifact, artefact
  • object, physical object
  • entity, something

68
Apache Lucene
  • Open source text retrieval system
  • Written in Java
  • Several tools available
  • Easy to use
  • Used in many research projects

69
Multilingual retrieval
  • Many collections are multilingual
  • Web, Flickr, medical teaching files, ...
  • Translation resources exist on the web
  • Translate query into document language
  • Translate documents into query language
  • Map documents and queries onto a common
    terminology of concepts
  • We understand documents in other languages

70
Multilingual tools
  • Many tools accessible on the web
  • Yahoo! Babel fish
  • www.reverso.net
  • Google translate
  • Named entity recognition
  • Word-sense disambiguation

71
Current challenges in text retrieval
  • Many taken from the WWW or linked to it
  • Analysis of link structures to obtain information
    on potential relevance
  • Also in companies, social platforms,
  • Question of diversity in results
  • You do not want to have the same results show up
    ten times on the top
  • Retrieval in context (domain specific)
  • Question answering

72
Diversity
73
Relevance feedback
74
User interaction
  • Contains not only the user interface but also
    several other parts of the system
  • Interactivity -> interaction speed
  • Relevance feedback
  • Positive, negative, excessive use
  • Feedback strategies of users
  • Long-term analysis of user behavior
  • Interaction paradigms (QBE, Browsing)
  • Query starting point (text, example, drawing)

75
Relevance feedback
  • Queries based on single keywords or images tell
    little about the user's information needs
  • Obtain more information through interaction
  • Results of a query are used to refine a query
  • "Show me documents similar to this one or those
    two, but not like that other one"
  • Query expansion (automatic, manual)
  • After a result, keywords or features are added

76
Interpreting feedback
  • AND
  • Find images that contain something present in all
    selected images
  • OR
  • Find images of one sort or the other
  • Mix
  • Something in between
  • Pseudo image in GIFT, for example

77
Ways of calculating feedback
  • Separate queries for every example image
  • Computationally expensive
  • Corresponds rather to an OR
  • Creation of a single pseudo image with all the
    input data
  • Quick
  • Corresponds rather to an AND
  • Well suited to the GIFT model (see the sketch below)
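A minimal sketch of the single-pseudo-image strategy, assuming each image is
represented by a non-negative feature vector; the +1/-1 relevance values follow
the MRML example shown earlier, while the merging rule itself is only
illustrative and not GIFT's exact algorithm:

import numpy as np

def pseudo_query(feature_vectors, relevances):
    # Merge all feedback images into one pseudo image by a relevance-weighted sum,
    # keeping only the features that remain positively supported overall.
    combined = sum(r * v for v, r in zip(feature_vectors, relevances))
    combined = np.maximum(combined, 0.0)
    norm = np.linalg.norm(combined)
    return combined / norm if norm > 0 else combined

# Two positive examples and one negative one (toy 5-dimensional feature vectors):
vectors = [np.array([1.0, 0, 1, 0, 0]), np.array([1.0, 1, 0, 0, 0]), np.array([0.0, 1, 0, 1, 0])]
print(pseudo_query(vectors, [1, 1, -1]))   # features shared with the negative example are suppressed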

78
Data fusion strategies
  • Data from separate queries but also varying
    feature sets
  • Early fusion
  • Features and distances are regarded in the same
    feature space
  • Late fusion
  • Distances are calculated in various feature
    spaces and combined afterwards

79
Relevance feedback strategies
  • Several similar input images improve the query
    quality
  • BUT all images are already similar, just a
    reordering of the top N
  • Negative feedback is extremely important
  • Obtain more discriminative information on
    features
  • Several strategies for negative feedback
  • All, or the most different ones, or one per
    cluster

80
Relevance feedback strategies
  • Experienced users obtain better results
  • Better use of feedback (especially negative)
  • Automation of feedback strategies
  • Positive, few negative, all negative (low weight)
  • Excessive negative feedback can kill results
  • Rocchio feedback (1960s!), see the formula below
  • Separate calculation of positive and negative parts
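For reference, the classical Rocchio update mentioned above, in its standard
textbook form:

\vec{q}_{new} = \alpha \, \vec{q}_{0}
  + \frac{\beta}{|D_r|} \sum_{\vec{d} \in D_r} \vec{d}
  - \frac{\gamma}{|D_{nr}|} \sum_{\vec{d} \in D_{nr}} \vec{d}

where D_r and D_nr are the sets of images (or documents) marked relevant and
non-relevant, and \alpha, \beta, \gamma are tuning weights; \gamma is usually
kept small, matching the low weight recommended for negative feedback above.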

81
Long-term learning
  • Analyze user behaviour over a longer period
  • Log files with the interactions are stored, for
    example in MRML
  • Analyze images that are marked together as
    relevant or non-relevant in the same query step
    (concentrate on pairs)
  • This can lead to image correlations
  • We want to learn on a feature basis to be more
    general
  • Learning on an image basis can help if much
    feedback is available

82
Combinations of images
83
Comparison with market basket analysis
  • Market basket analysis (MBA)
  • Items bought together in a supermarket
  • Large data sets exist in supermarkets
  • Impossible to evaluate all possible sets
  • Data reduction is necessary
  • Association rules are the goal
  • Which set of items implies another set of items
  • Best are frequent buys and high probability

84
Factor for feature relevance
  • Based on probabilities, as for association rules
    (standard definitions are given below)
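The factor itself is only shown as a figure in the original slides; the standard
probabilistic quantities behind association rules (support and confidence of a
rule A => B) are:

\mathrm{supp}(A \Rightarrow B) = P(A \cap B), \qquad
\mathrm{conf}(A \Rightarrow B) = \frac{P(A \cap B)}{P(A)} = P(B \mid A)

where the probabilities are estimated as fractions of the transactions
(here: query steps) in which the item sets appear.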

85
A hierarchy for learning
86
Other techniques
  • Image browsing (target search)
  • PicHunter system
  • Maximize information gain in each step to find a
    known image in a collection
  • Changes of feature sets during feedback
  • First results seem to be good
  • Increases discriminative power

87
Another simple interface
[Screenshot: images can be marked as irrelevant, unjudged, or relevant]
88
History of feedback
89
Interfaces
  • 3D Browsing and Searching Interface in MARS

90
Interfaces
  • Video Search and Retrieval Interface

Nguyen et al. ACM Trans. MM 2008
91
Collection guide
92
(No Transcript)
93
(No Transcript)
94
Comparison of several feedback techniques (Wang
database)
95
Medical applications
96
(No Transcript)
97
(No Transcript)
98
Talisman
  • Texture analysis in lung CT images to aid
    diagnosis of interstitial lung diseases
  • Set of 150 diseases with unspecific symptoms
  • Often hard for the non-specialist
  • Database of lung CTs and clinical data is
    acquired
  • 100 characteristics, based on expert systems
  • Combine visual and clinical parameters for
    retrieval

99
(No Transcript)
100
Dataset (increasing)
  • Extracted from a multimedia database of ILDs
    containing 96 patients and 1104 ROIs (1/2008)
  • 100 clinical parameters
  • Clinical attributes have 35% of the values
    undefined
  • 736 ROIs in HRCT scans from 56 patients
    representing 5 classes of lung tissue were
    selected

Healthy: 63 ROIs from 5 patients
Emphysema: 58 ROIs from 4 patients
Ground glass: 148 ROIs from 14 patients
Fibrosis: 312 ROIs from 28 patients
Micronodules: 155 ROIs from 5 patients
101
Some results
102
(No Transcript)
103
Casimage: a radiological case database
  • Case database, especially for teaching
  • >100,000 images, 9,000 externally accessible and
    anonymized
  • Case descriptions (textual) available in XML
  • Highly varying quality
  • Mix of French and English
  • Interface is compatible with the MIRC (Medical
    Image Resource Centre) standard of the RSNA

104
Fracture retrieval
  • Database with >20,000 images
  • Before and after interventions, sometimes long
    term
  • Assistance for treatment planning
  • Goal is to find similar cases
  • Based on several images (frontal, lateral), place
    of fracture, complexity of fracture, but also
    patient data: age, weight, ...
  • What is the best method?
  • Screw, plate, ...
  • Local features required
  • Salient points

105
@neurIST
  • EU project (IP) with 32 partners on aneurysm
    treatment
  • Multimodal data analysis and fusion of data from
    heterogeneous sources
  • Genes, proteins, cells, tissue/organs,
    individual, population
  • Data collection is important
  • Rare disease

106
@neurIST: our role
  • Create an architecture for data acquisition and
    communication
  • Constraints
  • Security, legal issues
  • Political issues
  • Work internationally
  • Normalization
  • Goal is a reusable architecture to access data in
    the patient record also for other research
    projects

107
KnowARC
  • NorduGrid ARC (Advanced Resource Connector) is a
    middleware of several Nordic countries
  • High energy physics community
  • Linked with CERN, LHC
  • For analyzing medical images we could use much
    computing power
  • Goal: use the 6,000 computers of the hospitals
    with an easy-to-use middleware
  • Problems: security, politics

108
Desktop grid via virtualization
  • Hospital PCs are centrally managed Windows
    machines (6000)
  • Barely used, both at night and during the day
  • Use case similar at several institutions
  • State of Geneva, University
  • Acquire knowledge on grids and think about new
    solutions
  • Independent of computing restrictions
  • Image retrieval is easy to parallelize

109
A small hospital grid
  • Virtual machine distributed automatically
    including a Linux image
  • Active directory based solution in the hospitals
  • PCs in a seminar room, plus own PCs
  • Users can stop the virtual machine when they need
    their computer fully
  • New PCs are barely slowed down
  • Smaller time overhead internally than externally
  • Internal and external computation with the same
    interface

110
Visual problems
[Example image with annotated problems: logo, text, large parts without information]
111
(No Transcript)
112
Document image search
  • http://153.109.124.56:8080/TB_IVAN/
  • Collaboration with WHO

[Diagram: .doc, .xls, .pdf and .ppt documents are split into text, indexed with
Lucene, and images, indexed with GIFT]
113
(No Transcript)
114
ASSERT (1997-2000)
115
IRMA Image annotation
116
myPACS
117
MedTing
118
Goldminer
119
Google image search (faces)
120
Evaluation
121
Information retrieval evaluation
  • Started very early (1960s, in part as a
    theoretical discipline)
  • Cranfield tests, SMART
  • TREC became a role model for benchmarks with many
    spin-offs (TRECVID, CLEF, ...)
  • Yearly cycle of events
  • Relevance-based evaluations, ...
  • Mainly system-oriented evaluation
  • Still much can be criticized
  • Measures, interactive retrieval, ...

122
A yearly cycle
123
Visual retrieval evaluation
  • Little systematic evaluation in first years of
    research (1990-2000)
  • Some papers on methodologies
  • Benchathlon to foster discussions
  • Since then, evaluation has come a long way
  • TRECVID, ImageCLEF, INEX MM, ImageEval, ...
  • Improvement in performance can be shown
  • Techniques can be compared
  • Methodologies and user models can be criticized
  • Not all research can be benchmarked
  • Innovation instead of pure performance

124
Axes for evaluation
  • Databases
  • Tasks
  • Including experts for relevance judgements
  • Participants
  • Techniques to compare
  • Ground truth, gold standard
  • Performance measures

125
Problems of retrieval benchmarks
  • Funding
  • Access to datasets
  • Motivate participation
  • Partners from industry
  • Realistic tasks and user models
  • Ground truthing (costly, ambiguous)
  • Organisational issues
  • Proving advances and benefits

126
CLEF - ImageCLEF
  • Cross Language Evaluation Forum
  • Started as a track in TREC (Text Retrieval
    Conference, 1997)
  • Independent workshop since 2000
  • Multilingual information retrieval
  • Collections are multilingual
  • Queries are in a language different from the
    collection
  • Good framework, registration, legal issues,
    proceedings in Springer LNCS, ...

127
History ImageCLEF
  • 2003: first image retrieval task, 4 participants
  • 2004: 17 participants for three tasks (200 runs)
  • Medical task for visual image retrieval added
  • 2005: 24 participants for four tasks (300 runs)
  • Two medical tasks
  • 2006: 30 participants for four tasks (300 runs)
  • LTU database of objects for object classification
  • 2007: 35 participants (>1,000 runs)
  • Hierarchical classification
  • 2008: 45 participants submitted results (>2,000
    runs)
  • 63 registrations, wiki task

128
ImageCLEF 2008
  • ImageCLEF/Quaero workshop on image retrieval
    evaluation
  • To motivate visual retrieval community
  • Ad-hoc retrieval with query in different language
  • Photo collection, vacation pictures of an agency
  • Concept detection task
  • Medical Retrieval task
  • Collection of 70,000 images with annotations
  • Medical classification task
  • Hierarchical classification
  • Wikipedia retrieval task
  • Interactive retrieval (using a FlickR API)

129
Tasks and topic definitions
  • Realistic!!
  • Based on independent expert opinions
  • Based on surveys (Portland, Geneva)
  • Based on log files (Health On the Net media
    search, Medline)
  • Retrieval with varying degree of visualness
  • A little subjective
  • Afterwards analysis of results per task
  • Analyze ambiguity for judges (double judgments)
  • Kappa analysis

130
Task examples
1.4 Show me x-ray images of a tibia with a fracture.
Zeige mir Röntgenbilder einer gebrochenen Tibia.
Montre-moi des radiographies du tibia avec fracture.
131
Task examples
3.6 Show me x-ray images of bone cysts.
Zeige mir Röntgenbilder von Knochenzysten.
Montre-moi des radiographies de kystes d'os.
132
Ground truthing
  • Retrieval
  • Expensive task with real users!
  • Funding from NSF, help from participants
  • Pooling is used, with the pool size varying
    depending on the submissions
  • Judgment scheme: relevant / partially relevant /
    non-relevant
  • Describe all categories exactly!!
  • Double judgments to analyze ambiguity
  • Good systems stay good with any judge
  • Interactive
  • Participants evaluate themselves (time, Nrel)

133
Evaluation
  • Categories for the media used
  • Visual, textual, mixed
  • Categories for the interaction used
  • Automatic, feedback, manual modification
  • Mean Average Precision (MAP) is still the lead
    measure (a computation sketch follows below)
  • It correlates very well with other measures
  • Bpref and P(10)-P(50) are used for comparison
  • Many ideas on how to find better measures
  • No resources to pursue this
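A minimal sketch of how average precision and MAP are computed from ranked
result lists with binary relevance judgments; the example data are made up:

def average_precision(ranking, relevant):
    # AP: mean of precision@k taken at the rank of each relevant document,
    # divided by the total number of relevant documents for the topic.
    hits, precisions = 0, []
    for k, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(topics):
    # MAP: the mean of the per-topic average precisions.
    return sum(average_precision(r, rel) for r, rel in topics) / len(topics)

print(mean_average_precision([
    (["a", "b", "c", "d"], {"a", "c"}),   # AP = (1/1 + 2/3) / 2 = 0.833
    (["x", "y", "z"], {"y"}),             # AP = 1/2 = 0.5
]))                                       # MAP is approximately 0.667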

134
MAP and other measures
135
Workshop
  • Event for discussions among participants
  • Mix visual and text retrieval communities
  • Learn from results of others
  • Oral presentations are selected based on novelty
    of techniques not on performance
  • Every participant can present a poster
  • Presentation of the main findings
  • Feedback is very positive and participants do not
    regret their participation

136
Example from the database 2008
137
ImageCLEFmed 2008
  • Images and full-text articles of
    Radiology/Radiographics (thanks to the RSNA!)
  • Captions of the figures with detailed information
    on the figures, subfigures
  • The kind of data that clinicians search
  • Detailed search tasks may not be the most common
    for diagnosis, rather teaching
  • More adapted for text retrieval, image analysis
    has to be done with care

138
Some results
  • Visual retrieval has often good early precision
    but poor recall
  • Visual features can be useful for specific
    queries
  • This can be detected more or less automatically
  • Multimodal retrieval has most potential
  • Visual classification has improved significantly
  • Relevance feedback and interactive retrieval are
    rarely used (lack of manpower)

139
ImageCLEFmed 2009
  • Search for similar cases in the literature
  • Several sorts of images (x-ray, CT, MRI)
  • Use incomplete data (no textual information on
    modality, pathology)
  • Much more realistic scenario! Clinician in the
    process of solving a difficult case
  • Hard task: text processing might not work
  • Fusion of very varied data is an important topic

140
Demo, questions, answers
141
Demo of GIFT
  • http://medgift.unige.ch

142
Conclusions
  • Image retrieval has an important potential
  • Still, there are many challenges
  • It will not replace but complement text retrieval
  • The medical domain faces many challenges of the
    non-medical retrieval field
  • But there are some advantages
  • ... and a strong motivation to learn and
    communicate with clinicians

143
Future work
  • Include 3D and 4D datasets in the analysis
  • Massive data reduction is required
  • Detection of abnormalities
  • Dissimilarity retrieval?
  • Region of interest is most often very small
  • Case-based retrieval instead of image-based
    retrieval
  • Multimodal data analysis, incomplete data

144
Abbreviations
  • CBIR - Content-based image retrieval
  • CT - Computed Tomography
  • GIFT - GNU Image Finding Tool
  • GNU - GNU is Not Unix
  • GUI - Graphical User Interface
  • HES-SO - Haute École Spécialisée de Suisse
    occidentale
  • HRCT - High Resolution Computed Tomography
  • HSV - Hue, Saturation, Value
  • ID - Identification
  • ILD - Interstitial Lung Disease
  • MBA - Market Basket Analysis
  • MeSH - Medical Subject Headings
  • MRML - Multimedia Retrieval Markup Language
  • PACS - Picture Archival and Communication System
  • QBE - Query by Example(s)
  • ROI - Region of Interest
  • UK - United Kingdom
  • UMLS - Unified Medical Language System
  • US - United States