Project 2: Ontology alignment

Transcript and Presenter's Notes
1
Project 2: Ontology alignment
3
Ontology Alignment
  • Defining the relations between the terms in
    different ontologies

4
An Alignment Framework
5
Matcher Strategies
  • Strategies based on linguistic matching
  • Structure-based strategies
  • Constraint-based approaches
  • Instance-based strategies
  • Use of auxiliary information
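As a concrete example of a string-based (linguistic) matching strategy, edit-distance similarity between term names can be sketched as below. The normalization to [0, 1] by the longer string's length is one common choice, shown here for illustration:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance via dynamic programming (two-row variant)."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def string_similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1]; 1.0 means identical (ignoring case)."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a.lower(), b.lower()) / max(len(a), len(b))
```

In a matcher, such a similarity is computed for every pair of terms from the two ontologies and later filtered with a threshold.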

6
Aim
  • Gain understanding about the ontology alignment
    process
  • Gain understanding about the advantages and
    disadvantages of different ontology alignment
    strategies (matching strategies and
    single-threshold filtering)
  • Gain understanding about the evaluation of
    strategies using precision, recall, and f-measure
  • Learn about the Ontology Alignment Evaluation
    Initiative (OAEI)
  • Learn to use the tools and data sets of the OAEI
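The evaluation measures above compare a computed alignment against a reference alignment. A minimal sketch, assuming correspondences are represented as term pairs (an illustrative choice, not the OAEI tool format):

```python
def evaluate(found: set, reference: set) -> tuple:
    """Precision, recall and f-measure of a computed alignment
    against a reference alignment (both given as sets of pairs)."""
    tp = len(found & reference)                    # correct correspondences
    precision = tp / len(found) if found else 0.0
    recall = tp / len(reference) if reference else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f
```

For example, an alignment that finds one of two reference pairs plus one wrong pair gets precision 0.5, recall 0.5, and f-measure 0.5.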

7
Tasks
  1. Tutorial - learn how to use the tools provided by
    the OAEI (matching, single threshold filtering,
    evaluation)
  2. Run existing algorithms on the benchmark test and
    discuss results
  3. Implement your own matcher, then evaluate and
    discuss the results
  4. (Study other test cases and discuss what kinds of
    matchers would be appropriate)

8
Task 2 using Task 1 data

  Method        Precision  Recall  F-score  Threshold
  SMOA          0.69       0.96    0.80     0.5
  SMOA          0.73       0.96    0.83     0.6
  SMOA          0.79       0.96    0.87     0.7
  SMOA          0.80       0.94    0.87     0.8
  SMOA          0.88       0.73    0.80     0.9
  Levenshtein   0.80       0.94    0.87     0.5
  Levenshtein   0.86       0.77    0.81     0.6
  Levenshtein   0.92       0.46    0.61     0.7
  Levenshtein   0.94       0.35    0.52     0.8
  Levenshtein   0.99       0.35    0.52     0.9

9
Task 3
  • Implementation of your own matchers
  • Definition of the similarity computation
  • Testing using thresholds
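One such matcher can be sketched as character-trigram overlap combined with single-threshold filtering. The Jaccard-style normalization below is an illustrative assumption, not necessarily the exact variant behind the TrigramOverlap results shown later:

```python
def char_ngrams(s: str, n: int = 3) -> set:
    """Set of character n-grams of s (lower-cased)."""
    s = s.lower()
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def trigram_overlap(a: str, b: str) -> float:
    """Jaccard overlap of character trigrams, in [0, 1]."""
    ga, gb = char_ngrams(a), char_ngrams(b)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

def align(terms1, terms2, sim, threshold):
    """Single-threshold filtering: keep every term pair whose
    similarity is at or above the threshold."""
    return {(t1, t2) for t1 in terms1 for t2 in terms2
            if sim(t1, t2) >= threshold}
```

Varying the threshold trades precision against recall, which is exactly what the result tables in this project explore.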

10
Task 3
11
Task 3
12
Task 3
13
Task 3 performance on Task 1 data

  Method          Precision  Recall  F-score  Threshold
  equal           1.00       0.23    0.37     1
  SMOA            0.69       0.96    0.80     0.5
  Levenshtein     0.68       0.98    0.80     0.33
  Levenshtein     0.53       1.00    0.69     0
  BOWOverlap      0.87       0.85    0.86     0.3
  WordOverlap     1.00       0.60    0.75     0.3
  BOTOverlap      0.76       0.85    0.80     0.3
  TrigramOverlap  0.80       0.92    0.85     0.1
  FinalScore      0.87       0.94    0.90     0.2
  NEW             0.48       0.92    0.63     0
  NEW             0.74       0.88    0.80     0.88
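The FinalScore row suggests combining the individual matchers into one similarity. A simple combination scheme is a weighted average of the per-matcher scores; the weights below are purely illustrative assumptions, not the ones used for FinalScore:

```python
def combined_score(scores: dict, weights: dict) -> float:
    """Weighted average of individual matcher similarities.
    scores: matcher name -> similarity in [0, 1]
    weights: matcher name -> non-negative weight (assumed scheme)."""
    total = sum(weights[m] for m in scores)
    return sum(weights[m] * scores[m] for m in scores) / total
```

With equal weights, a Levenshtein score of 0.8 and a trigram score of 0.6 combine to 0.7, which is then filtered with a single threshold like any other similarity.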

14
Task 2 - benchmark
  • Benchmark
  • 1xx (4 tests): same ontology, no overlap, language
    generalization, language restriction
  • 2xx (ca. 40 tests): base ontology paired with a
    modified version of itself (e.g. changed names,
    removed relations, spelling mistakes, use of
    synonyms, change of natural language)
  • 3xx (4 tests): real cases

15
Task 2 benchmark data
  • Best system, OAEI 2007: p = 0.95, r = 0.90
  • Precision range: 0.98 (with r = 0.64) to 0.76
    (with r = 0.7)
  • Recall range: 0.90 (with p = 0.95) to 0.21
    (with p = 0.92)
  • According to category:
  • 1xx: several systems reach p = 1, r = 1
  • 2xx: p = 0.95, r = 0.90 and p = 0.97, r = 0.89
  • 3xx: p = 0.94, r = 0.68

16
Task 2 - benchmark
  • Approximate string matching
  • (WordNet)
  • Multilingual WordNet
  • Structure
  • Instances

17
Task 2/3