1
Machine Translation Overview
  • Alon Lavie
  • Language Technologies Institute
  • Carnegie Mellon University
  • Open House
  • March 18, 2005

2
Machine Translation History
  • MT started in the 1940s, as one of the first
    conceived applications of computers
  • Promising toy demonstrations in the 1950s failed
    miserably to scale up to real systems
  • ALPAC Report: MT recognized as an extremely
    difficult, AI-complete problem in the mid-1960s
  • MT revival started in earnest in the 1980s (US,
    Japan)
  • Field dominated by rule-based approaches,
    requiring hundreds of person-years of manual
    development
  • Economic incentive for developing MT systems for
    a small number of language pairs (mostly European
    languages)

3
Machine Translation: Where are we today?
  • Age of Internet and Globalization: great demand
    for MT
  • Multiple official languages of UN, EU, Canada,
    etc.
  • Documentation dissemination for large
    manufacturers (Microsoft, IBM, Caterpillar)
  • Economic incentive is still primarily within a
    small number of language pairs
  • Some fairly good commercial products in the
    market for these language pairs
  • Primarily a product of rule-based systems after
    many years of development
  • Pervasive MT between most language pairs still
    non-existent and not on the immediate horizon

4
Best Current General-purpose MT
  • PAHO's Spanam system
  • Mediante petición recibida por la Comisión
    Interamericana de Derechos Humanos (en adelante
    ) el 6 de octubre de 1997, el señor Lino César
    Oviedo (en adelante ) denunció que la República
    del Paraguay (en adelante ) violó en su
    perjuicio los derechos a las garantías
    judiciales en su contra.
  • Through petition received by the Inter-American
    Commission on Human Rights (hereinafter ) on 6
    October 1997, Mr. Linen César Oviedo (hereinafter
    the petitioner) denounced that the Republic of
    Paraguay (hereinafter ) violated to his
    detriment the rights to the judicial guarantees,
    to the political participation, to // equal
    protection and to the honor and dignity
    consecrated in articles 8, 23, 24 and 11,
    respectively, of the American Convention on
    Human Rights (hereinafter ), as a consequence
    of judgments initiated against it.

5
Core Challenges of MT
  • Ambiguity
  • Human languages are highly ambiguous, and
    differently so in different languages
  • Ambiguity at all levels: lexical, syntactic,
    semantic, language-specific constructions and
    idioms
  • Amount of required knowledge
  • At least several 100k words, about as many
    phrases, plus syntactic knowledge (i.e.
    translation rules). How do you acquire and
    construct a knowledge base that big that is (even
    mostly) correct and consistent?

6
How to Tackle the Core Challenges
  • Manual Labor: 1000s of person-years of human
    experts developing large word and phrase
    translation lexicons and translation rules.
    Example: Systran's RBMT systems.
  • Lots of Parallel Data: data-driven approaches for
    finding word and phrase correspondences
    automatically from large amounts of
    sentence-aligned parallel texts. Example:
    Statistical MT systems.
  • Learning Approaches: learn translation rules
    automatically from small amounts of
    human-translated and word-aligned data. Example:
    AVENUE's XFER approach.
  • Simplify the Problem: build systems that are
    limited-domain or constrained in other ways.
    Examples: CATALYST, NESPOLE!

7
State-of-the-Art in MT
  • What users want:
  • General purpose (any text)
  • High quality (human level)
  • Fully automatic (no user intervention)
  • We can meet any 2 of these 3 goals today, but not
    all three at once:
  • FA + HQ: Knowledge-Based MT (KBMT)
  • FA + GP: Corpus-Based (Example-Based) MT
  • GP + HQ: Human-in-the-loop (efficiency tool)

8
Types of MT Applications
  • Assimilation: multiple source languages,
    uncontrolled style/topic. General-purpose MT, no
    semantic analysis. (GP + FA or GP + HQ)
  • Dissemination: one source language, controlled
    style, single topic/domain. Special-purpose MT,
    full semantic analysis. (FA + HQ)
  • Communication: lower quality may be okay, but
    input is degraded and real-time operation is
    required.

9
Approaches to MT: Vauquois MT Triangle
[Diagram: the Vauquois triangle, illustrated with "Mi
chiamo Alon Lavie" -> "My name is Alon Lavie". Direct
translation maps words at the base; transfer maps
syntactic structures midway, e.g.
[s vp accusative_pronoun chiamare proper_name] ->
[s np possessive_pronoun name vp be proper_name]; at
the apex, analysis maps the source into an interlingua,
e.g. give-information+personal-data (name=alon_lavie),
from which generation produces the target.]
10
Knowledge-based Interlingual MT
  • The obvious deep Artificial Intelligence
    approach
  • Analyze the source language into a detailed
    symbolic representation of its meaning
  • Generate this meaning in the target language
  • Interlingua: one single meaning representation
    for all languages
  • Nice in theory, but extremely difficult in
    practice

11
The Interlingua KBMT approach
  • With interlingua, need only N parsers/generators
    instead of N² transfer systems

[Diagram: six languages L1-L6 arranged around a central
interlingua; each language connects to the hub with one
analyzer and one generator, rather than a transfer
system for every language pair.]
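  • For example, with the six languages in the
    diagram, an interlingua design needs 6 analyzers
    and 6 generators (12 components), whereas pairwise
    transfer needs 6 x 5 = 30 directed systems; in
    general, 2N components instead of N(N-1).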
12
Statistical MT (SMT)
  • Proposed by IBM in the early 1990s: a direct,
    purely statistical model for MT
  • Statistical translation models are trained on a
    sentence-aligned translation corpus (a training
    sketch follows below)
  • Attractive: completely automatic, no manual
    rules, much reduced manual labor
  • Main drawbacks:
  • Effective only with huge volumes (several
    mega-words) of parallel text
  • Very domain-sensitive
  • Still viable only for a small number of language
    pairs!
  • Impressive progress in last 3-4 years due to
    large DARPA funding program (TIDES)
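To make "trained on a sentence-aligned translation
corpus" concrete, here is a minimal Python sketch of the
word-translation model at the heart of IBM's original
proposal (Model 1, trained with EM), on a toy corpus; an
illustration only, not the code of any system mentioned
here.

    from collections import defaultdict

    def train_model1(corpus, iterations=10):
        """EM training of IBM Model 1 word-translation
        probabilities. corpus: (source_words, target_words) pairs."""
        t = defaultdict(lambda: 1.0)    # t[(f, e)] ~ P(e|f), uniform start
        for _ in range(iterations):
            count = defaultdict(float)  # expected counts for (f, e)
            total = defaultdict(float)  # normalizer per source word f
            for src, tgt in corpus:
                for e in tgt:
                    z = sum(t[(f, e)] for f in src)  # alignment mass for e
                    for f in src:
                        c = t[(f, e)] / z
                        count[(f, e)] += c
                        total[f] += c
            for (f, e), c in count.items():
                t[(f, e)] = c / total[f]
        return t

    pairs = [("das haus".split(), "the house".split()),
             ("das buch".split(), "the book".split()),
             ("ein buch".split(), "a book".split())]
    t = train_model1(pairs)
    print(round(t[("haus", "house")], 3))  # should approach 1.0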

13
EBMT Paradigm
New Sentence (Source): Yesterday, 200 delegates met
with President Clinton.

Matches to Source Found:
  "Yesterday, 200 delegates met behind closed doors..."
  "Gestern trafen sich 200 Abgeordnete hinter
   verschlossenen..."
  "Difficulties with President Clinton over..."
  "Schwierigkeiten mit Praesident Clinton..."

Alignment (Sub-sentential):
  "Yesterday, 200 delegates met" <-> "Gestern trafen
   sich 200 Abgeordnete"
  "with President Clinton" <-> "mit Praesident Clinton"

Translated Sentence (Target): Gestern trafen sich 200
Abgeordnete mit Praesident Clinton.
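A minimal Python sketch of the paradigm above: look up
source fragments in an example base and project the
matched spans onto the target side via each example's
word alignment. The (source, target, alignment) encoding
and the greedy longest-match control are illustrative
assumptions, not the actual GEBMT implementation.

    def translate_fragment(frag, examples):
        """Find a source fragment in an example and project the
        matched span onto the target via the word alignment.
        examples: (src_words, tgt_words, align) triples, where
        align maps source index -> target index."""
        for src, tgt, align in examples:
            for k in range(len(src) - len(frag) + 1):
                if src[k:k + len(frag)] == frag:
                    idx = sorted(align[p]
                                 for p in range(k, k + len(frag))
                                 if p in align)
                    if idx:
                        return tgt[idx[0]:idx[-1] + 1]
        return None

    def ebmt_translate(sentence, examples):
        """Greedy longest-match decomposition of the input into
        fragments that have translations in the example base."""
        words, out, i = sentence.split(), [], 0
        while i < len(words):
            for j in range(len(words), i, -1):  # longest fragment first
                frag_tl = translate_fragment(words[i:j], examples)
                if frag_tl is not None:
                    out.extend(frag_tl)
                    i = j
                    break
            else:
                out.append(words[i])  # pass unknown word through
                i += 1
        return " ".join(out)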
14
GEBMT vs. Statistical MT
  • Generalized-EBMT (GEBMT) uses examples at run
    time, rather than training a parameterized model.
    Thus:
  • GEBMT can work with a smaller parallel corpus
    than Stat MT
  • Large target language corpus still useful for
    generating target language model
  • Much faster to train (index examples) than Stat
    MT; until recently was much faster at run time as
    well
  • Generalizes in a different way than Stat MT
    (whether this is better or worse depends on match
    between Statistical model and reality)
  • Stat MT can fail on a training sentence, while
    GEBMT never will
  • GEBMT generalizations based on linguistic
    knowledge, rather than statistical model design

15
Multi-Engine MT
  • Apply several MT engines to each input; use a
    statistical language model to select the best
    combination of outputs.
  • Goal is to combine strengths and avoid
    weaknesses.
  • Along all dimensions: domain limits, quality,
    development time/cost, run-time speed, etc.
  • Used in various projects

16
Speech-to-Speech MT
  • Speech just makes MT (much) more difficult
  • Spoken language is messier
  • False starts, filled pauses, repetitions,
    out-of-vocabulary words
  • Lack of punctuation and explicit sentence
    boundaries
  • Current Speech technology is far from perfect
  • Need for speech recognition and synthesis in
    foreign languages
  • Robustness: MT quality degradation should be
    proportional to SR quality
  • Tight Integration: rather than separate
    sequential tasks, can SR + MT be integrated in
    ways that improve end-to-end performance?

17
MT at the LTI
  • LTI originated as the Center for Machine
    Translation (CMT) in 1985
  • MT continues to be a prominent sub-discipline of
    research within the LTI
  • More MT faculty than any of the other areas
  • More MT faculty than anywhere else
  • Active research on all main approaches to MT:
    Interlingua, Transfer, EBMT, SMT
  • Leader in the area of speech-to-speech MT

18
KBMT: KANT, KANTOO, CATALYST
  • Deep knowledge-based framework, with symbolic
    interlingua as intermediate representation
  • Syntactic and semantic analysis into an
    unambiguous, detailed symbolic representation of
    meaning using unification grammars and
    transformation mappers
  • Generation into the target language using
    unification grammars and transformation mappers
  • First large-scale multi-lingual interlingua-based
    MT system deployed commercially
  • CATALYST at Caterpillar: high-quality translation
    of documentation manuals for heavy equipment
  • Limited domains and controlled English input
  • Minor amounts of post-editing
  • Active follow-on projects
  • Contact Faculty: Eric Nyberg and Teruko Mitamura

19
EBMT
  • Developed originally for the PANGLOSS system in
    the early 1990s
  • Translation between English and Spanish
  • Generalized EBMT under development for the past
    several years
  • Currently one of the two MT approaches developed
    at CMU for the DARPA/TIDES program
  • Chinese-to-English, large and very large amounts
    of sentence-aligned parallel data
  • Active research work on improving alignment and
    indexing, decoding from a lattice
  • Contact Faculty: Ralf Brown and Jaime Carbonell

20
Statistical MT
  • Word-to-word and phrase-to-phrase translation
    pairs are acquired automatically from data and
    assigned probabilities based on a statistical
    model
  • Extracted and trained from very large amounts of
    sentence-aligned parallel text
  • Word alignment algorithms
  • Phrase detection algorithms
  • Translation model probability estimation
  • Main approach pursued in CMU systems in the
    DARPA/TIDES program
  • Chinese-to-English and Arabic-to-English
  • Most active work is on phrase detection and on
    advanced lattice decoding
  • Contact Faculty: Stephan Vogel and Alex Waibel

21
Speech-to-Speech MT
  • Evolution from JANUS/C-STAR systems to NESPOLE!,
    LingWear, BABYLON
  • Early 1990s: first prototype systems that fully
    performed speech-to-speech translation (very
    limited domains)
  • Interlingua-based, but with shallow task-oriented
    representations:
  • "we have single and double rooms available"
  • give-information+availability
  • (room-type=(single, double))
  • Semantic Grammars for analysis and generation
  • Multiple languages: English, German, French,
    Italian, Japanese, Korean, and others
  • Most active work on portable speech translation
    on small devices: Arabic/English and Thai/English
  • Contact Faculty: Alan Black, Tanja Schultz and
    Alex Waibel (also Alon Lavie and Lori Levin)

22
AVENUE: Transfer-based MT
  • Develop new approaches for automatically
    acquiring syntactic MT transfer rules from small
    amounts of elicited translated and word-aligned
    data
  • Specifically designed to bootstrap MT for
    languages for which only limited amounts of
    electronic resources are available (particularly
    indigenous minority languages)
  • Use machine learning techniques to generalize
    transfer rules from specific translated examples
  • Combine with decoding techniques from SMT for
    producing the best translation of new input from
    a lattice of translation segments
  • Languages: Hebrew, Hindi, Mapudungun, Quechua
  • Most active work on designing a typologically
    comprehensive elicitation corpus, advanced
    techniques for automatic rule learning, improved
    decoding, and rule refinement via user
    interaction
  • Contact Faculty: Alon Lavie, Lori Levin and
    Jaime Carbonell

23
Transfer with Strong Decoding
24
MT for Minority and Indigenous Languages: Challenges
  • Minimal amount of parallel text
  • Possibly competing standards for
    orthography/spelling
  • Often relatively few trained linguists
  • Access to native informants possible
  • Need to minimize development time and cost

25
Learning Transfer-Rules for Languages with
Limited Resources
  • Rationale:
  • Large bilingual corpora not available
  • Bilingual native informant(s) can translate and
    align a small pre-designed elicitation corpus,
    using elicitation tool
  • Elicitation corpus designed to be typologically
    comprehensive and compositional
  • Transfer-rule engine and new learning approach
    support acquisition of generalized transfer-rules
    from the data

26
English-Hindi Example
27
Questions
28
MEMT chart example
[Chart: translation lattice over the input "lideres
politicos rusos firman pacto de paz civil". Each engine
contributes scored edges over spans of the input, e.g.
"Russian leaders signed" KBMT (0.8); "political leaders"
EBMT (0.9); "compact of peace" EBMT (0.65); "of peace"
EBMT (1.0); "civil peace" EBMT (0.9); "pact" GLOSS (1.0);
and single-word DICT/GLOSS edges such as "political",
"Russians", "subscribe", "tactful", "bargain", "quiet",
"civilian", "leaders", "Russian", "sign", "compact",
"of", "peace", "civil" (all 1.0). A search selects the
best-scoring path through the chart (see the sketch
below).]
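A minimal Python sketch of one way to search such a
chart: dynamic programming over source spans, combining
each edge's engine confidence with a language-model
preference. The edge tuples and the lm_bonus scorer are
hypothetical stand-ins, not the actual MEMT scoring.

    def best_cover(n_words, edges, lm_bonus):
        """Select the best-scoring sequence of chart edges that
        covers source positions 0..n_words.
        edges: (start, end, translation, engine, confidence)."""
        best = {0: (0.0, [])}  # best[i]: best cover of words 0..i
        for i in range(1, n_words + 1):
            for start, end, text, engine, conf in edges:
                if end == i and start in best:
                    prev_score, prev_words = best[start]
                    score = (prev_score + conf
                             + lm_bonus(prev_words + [text]))
                    if i not in best or score > best[i][0]:
                        best[i] = (score, prev_words + [text])
        return best.get(n_words, (float("-inf"), []))[1]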
29
Why Machine Translation for Minority and
Indigenous Languages?
  • Commercial MT economically feasible for only a
    handful of major languages with large resources
    (corpora, human developers)
  • Is there hope for MT for languages with limited
    resources?
  • Benefits include:
  • Better government access to indigenous
    communities (epidemics, crop failures, etc.)
  • Better participation of indigenous communities in
    information-rich activities (health care,
    education, government) without giving up their
    languages
  • Language preservation
  • Civilian and military applications (disaster
    relief)

30
English-Chinese Example
31
Spanish-Mapudungun Example
32
English-Arabic Example
33
The Elicitation Corpus
  • Translated, aligned by bilingual informant
  • Corpus consists of linguistically diverse
    constructions
  • Based on elicitation and documentation work of
    field linguists (e.g. Comrie 1977, Bouquiaux
    1992)
  • Organized compositionally: elicit simple
    structures first, then use them as building
    blocks
  • Goal: minimize size, maximize linguistic coverage

34
Transfer Rule Formalism
SL: the old man, TL: ha-ish ha-zaqen

NP::NP [DET ADJ N] -> [DET N DET ADJ]
(
  (X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2)
  ((X1 AGR) = 3-SING)
  ((X1 DEF) = DEF)
  ((X3 AGR) = 3-SING)
  ((X3 COUNT) = +)
  ((Y1 DEF) = DEF)
  ((Y3 DEF) = DEF)
  ((Y2 AGR) = 3-SING)
  ((Y2 GENDER) = (Y4 GENDER))
)
  • Type information
  • Part-of-speech/constituent information
  • Alignments
  • x-side constraints
  • y-side constraints
  • xy-constraints,
    e.g. ((Y1 AGR) = (X1 AGR))
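A minimal Python sketch of checking value and agreement
constraints like those above against a feature
structure. The dict encoding is a simplification: real
unification treats unspecified features as compatible
rather than failing on them.

    def satisfies(fs, constraints):
        """fs: nested dict, e.g. {'y2': {'agr': '3-sing'}}.
        A constraint is (path, atom) for a value constraint or
        (path, path) for an agreement constraint."""
        def lookup(path):
            node = fs
            for key in path:
                node = node.get(key, {}) if isinstance(node, dict) else {}
            return node

        for path, rhs in constraints:
            if isinstance(rhs, tuple):  # agreement: values must match
                if lookup(path) != lookup(rhs):
                    return False
            elif lookup(path) != rhs:   # value constraint
                return False
        return True

    # The rule's Y-side constraints in this encoding:
    cons = [(('y1', 'def'), 'def'),
            (('y2', 'agr'), '3-sing'),
            (('y2', 'gender'), ('y4', 'gender'))]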

35
Transfer Rule Formalism (II)
SL: the old man, TL: ha-ish ha-zaqen

NP::NP [DET ADJ N] -> [DET N DET ADJ]
(
  (X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2)
  ((X1 AGR) = 3-SING)
  ((X1 DEF) = DEF)
  ((X3 AGR) = 3-SING)
  ((X3 COUNT) = +)
  ((Y1 DEF) = DEF)
  ((Y3 DEF) = DEF)
  ((Y2 AGR) = 3-SING)
  ((Y2 GENDER) = (Y4 GENDER))
)
  • Value constraints
  • Agreement constraints

36
The Transfer Engine
Analysis: Source text is parsed into its grammatical
structure, which determines transfer application
ordering. Example: a source sentence glossed "(he read
book)" is parsed as S -> NP VP, with the VP containing
a V and an object NP.

Transfer: A target-language tree is created by
reordering, insertion, and deletion. The article "a" is
inserted into the object NP, and source words are
translated with the transfer lexicon.

Generation: Target-language constraints are checked and
the final translation is produced, e.g. "reads" is
chosen over "read" to agree with "he". Final
translation: "He reads a book". (A toy sketch of the
transfer step follows.)
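A minimal Python sketch of the transfer step on this
example, using a toy tree and rule encoding
(hypothetical; the engine's real rules use the formalism
shown earlier). Generation-time agreement checking is
omitted.

    def transfer(node, rules, lexicon):
        """node: (category, children) or (pos, source_word).
        rules maps (category, child-category sequence) to a target
        spec: child indices to reorder plus ('insert', word) items."""
        cat, children = node
        if isinstance(children, str):      # leaf: translate the word
            return (cat, lexicon.get(children, children))
        seq = tuple(child[0] for child in children)
        spec = rules.get((cat, seq), list(range(len(children))))
        out = []
        for step in spec:
            if isinstance(step, int):      # copy/reorder a child
                out.append(transfer(children[step], rules, lexicon))
            else:                          # ('insert', word)
                out.append(('LEX', step[1]))
        return (cat, out)

    # Placeholder source tokens stand in for the slide's source
    # words (lost in extraction):
    tree = ('S', [('NP', [('PRON', 's1')]),
                  ('VP', [('V', 's2'), ('NP', [('N', 's3')])])])
    lexicon = {'s1': 'he', 's2': 'read', 's3': 'book'}
    rules = {('NP', ('N',)): [('insert', 'a'), 0]}  # insert article "a"
    print(transfer(tree, rules, lexicon))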
37
Rule Learning - Overview
  • Goal: acquire syntactic transfer rules
  • Use available knowledge from the source side
    (grammatical structure)
  • Three steps:
  • Flat Seed Generation: first guesses at transfer
    rules; flat syntactic structure
  • Compositionality: use previously learned rules to
    add hierarchical structure
  • Seeded Version Space Learning: refine rules by
    learning appropriate feature constraints

38
Flat Seed Rule Generation
Learning Example: NP
  Eng: the big apple
  Heb: ha-tapuax ha-gadol

Generated Seed Rule:
  NP::NP [ART ADJ N] -> [ART N ART ADJ]
  ((X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2))
39
Flat Seed Generation
  • Create a transfer rule that is specific to the
    sentence pair, but abstracted to the POS level.
    No syntactic structure.

Element             | Source
SL POS sequence     | f-structure
TL POS sequence     | TL dictionary, aligned SL words
Type information    | corpus (same on SL and TL)
Alignments          | informant
x-side constraints  | f-structure
y-side constraints  | TL dictionary, aligned SL words
                      (list of projecting features)
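A minimal Python sketch that assembles a flat seed rule,
in the slides' notation, from the ingredients in the
table (POS sequences plus informant alignments). The
helper and its encoding are hypothetical, not the actual
learner.

    def flat_seed_rule(cat, sl_pos, tl_pos, alignments):
        """Emit a flat, POS-level seed rule with no syntactic
        structure; alignments are 1-based (x, y) index pairs."""
        align = " ".join(f"(X{x}::Y{y})" for x, y in alignments)
        return (f"{cat}::{cat} [{' '.join(sl_pos)}] -> "
                f"[{' '.join(tl_pos)}] ({align})")

    # The English-Hebrew NP example from the previous slide:
    print(flat_seed_rule("NP",
                         ["ART", "ADJ", "N"],         # the big apple
                         ["ART", "N", "ART", "ADJ"],  # ha-tapuax ha-gadol
                         [(1, 1), (1, 3), (2, 4), (3, 2)]))
    # NP::NP [ART ADJ N] -> [ART N ART ADJ]
    #   ((X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2))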
40
Compositionality
Initial Flat Rules:
  S::S [ART ADJ N V ART N] -> [ART N ART ADJ V P ART N]
  ((X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2) (X4::Y5)
   (X5::Y7) (X6::Y8))
  NP::NP [ART ADJ N] -> [ART N ART ADJ]
  ((X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2))
  NP::NP [ART N] -> [ART N]
  ((X1::Y1) (X2::Y2))

Generated Compositional Rule:
  S::S [NP V NP] -> [NP V P NP]
  ((X1::Y1) (X2::Y2) (X3::Y4))
41
Compositionality - Overview
  • Traverse the c-structure of the English sentence,
    add compositional structure for translatable
    chunks
  • Adjust constituent sequences, alignments
  • Remove unnecessary constraints, i.e. those that
    are contained in the lower-level rule

42
Seeded Version Space Learning
Input: Rules and their Example Sets
  S::S [NP V NP] -> [NP V P NP]           {ex1, ex12, ex17, ex26}
  ((X1::Y1) (X2::Y2) (X3::Y4))
  NP::NP [ART ADJ N] -> [ART N ART ADJ]   {ex2, ex3, ex13}
  ((X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2))
  NP::NP [ART N] -> [ART N]               {ex4, ex5, ex6, ex8, ex10, ex11}
  ((X1::Y1) (X2::Y2))

Output: Rules with Feature Constraints
  S::S [NP V NP] -> [NP V P NP]
  ((X1::Y1) (X2::Y2) (X3::Y4)
   ((X1 NUM) = (X2 NUM))
   ((Y1 NUM) = (Y2 NUM))
   ((X1 NUM) = (Y1 NUM)))
43
Seeded Version Space Learning: Overview
  • Goal: add appropriate feature constraints to the
    acquired rules
  • Methodology:
  • Preserve general structural transfer
  • Learn specific feature constraints from example
    set
  • Seed rules are grouped into clusters of similar
    transfer structure (type, constituent sequences,
    alignments)
  • Each cluster forms a version space: a partially
    ordered hypothesis space with a specific and a
    general boundary
  • The seed rules in a group form the specific
    boundary of a version space
  • The general boundary is the (implicit) transfer
    rule with the same type, constituent sequences,
    and alignments, but no feature constraints

44
Seeded Version Space Learning: Generalization
  • The partial order of the version space:
  • Definition: a transfer rule tr1 is strictly more
    general than another transfer rule tr2 if all
    f-structures that are satisfied by tr2 are also
    satisfied by tr1.
  • Generalize rules by merging them (a sketch of the
    merge operators follows):
  • Deletion of a constraint
  • Raising two value constraints to an agreement
    constraint, e.g.
  • ((x1 num) = pl), ((x3 num) = pl) =>
  • ((x1 num) = (x3 num))
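A minimal Python sketch of the two generalization
operators, over constraints encoded as (path, value)
pairs for value constraints and (path, path) pairs for
agreement constraints (an illustrative encoding).

    def generalizations(constraints):
        """Yield one-step generalizations of a constraint list:
        (1) delete one constraint; (2) raise two value constraints
        with the same value to a single agreement constraint."""
        for i in range(len(constraints)):
            yield constraints[:i] + constraints[i + 1:]
        for i, (p1, v1) in enumerate(constraints):
            for j, (p2, v2) in enumerate(constraints):
                if i < j and v1 == v2 and not isinstance(v1, tuple):
                    rest = [c for k, c in enumerate(constraints)
                            if k not in (i, j)]
                    # ((x1 num)=pl), ((x3 num)=pl) => ((x1 num)=(x3 num))
                    yield rest + [(p1, p2)]

As the search slide below describes, a candidate
generalization is kept only if the merged rule still
correctly translates the union of the merged rules'
example sets.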

45
Seeded Version Space Learning
  • Group seed rules into version spaces as above.
  • Make use of the partial order of rules in a
    version space, defined via the f-structures
    satisfying the constraints.
  • Generalize in the space by repeated merging of
    rules:
  • Deletion of a constraint
  • Moving value constraints to agreement
    constraints, e.g.
  • ((x1 num) = pl), ((x3 num) = pl) =>
  • ((x1 num) = (x3 num))
  • Check translation power of generalized rules
    against sentence pairs

46
Seeded Version Space Learning: The Search
  • The Seeded Version Space algorithm itself is the
    repeated generalization of rules by merging
  • A merge is successful if the set of sentences
    that can correctly be translated with the merged
    rule is a superset of the union of sets that can
    be translated with the unmerged rules, i.e. check
    power of rule
  • Merge until no more successful merges

47
Seeded VSL: Some Open Issues
  • Three types of constraints:
  • X-side: constrain applicability of rule
  • Y-side: assist in generation
  • X-Y: transfer features from SL to TL
  • Which of the three types improves translation
    performance?
  • Use rules without features to populate lattice,
    decoder will select the best translation
  • Learn only X-Y constraints, based on list of
    universal projecting features
  • Other notions of version-spaces of feature
    constraints
  • Current feature learning is specific to rules
    that have identical transfer components
  • An important issue during transfer is to
    disambiguate among rules that have the same SL
    side but different TL sides: can we learn
    effective constraints for this?

48
Examples of Learned Rules (Hindi-to-English)
{NP,14244}  Score: 0.0429
  NP::NP [N] -> [DET N]
  ( (X1::Y2) )

{NP,14434}  Score: 0.0040
  NP::NP [ADJ CONJ ADJ N] -> [ADJ CONJ ADJ N]
  ( (X1::Y1) (X2::Y2) (X3::Y3) (X4::Y4) )

{PP,4894}  Score: 0.0470
  PP::PP [NP POSTP] -> [PREP NP]
  ( (X2::Y1) (X1::Y2) )
49
Manual Transfer Rules: Hindi Example
; PASSIVE OF SIMPLE PAST (NO AUX) WITH LIGHT VERB
; passive of 43 (7b)
{VP,28}
VP::VP [V V V] -> [Aux V]
(
  (X1::Y2)
  ((x1 form) = root)
  ((x2 type) =c light)
  ((x2 form) = part)
  ((x2 aspect) = perf)
  ((x3 lexwx) = 'jAnA')
  ((x3 form) = part)
  ((x3 aspect) = perf)
  (x0 = x1)
  ((y1 lex) = be)
  ((y1 tense) = past)
  ((y1 agr num) = (x3 agr num))
  ((y1 agr pers) = (x3 agr pers))
  ((y2 form) = part)
)
50
Manual Transfer Rules: Example
[Diagram: source and target NP parse trees for the
example below, illustrating the reordering
NP1 ke NP2 -> NP2 of NP1.]

Ex: jIvana ke eka aXyAya
    "life of (one) chapter" => "a chapter of life"

{NP,12}
NP::NP [PP NP1] -> [NP1 PP]
( (X1::Y2) (X2::Y1)
  ((x2 lexwx) = 'kA') )

{NP,13}
NP::NP [NP1] -> [NP1]
( (X1::Y1) )

{PP,12}
PP::PP [NP Postp] -> [Prep NP]
( (X1::Y2) (X2::Y1) )
51
A Limited Data Scenario for Hindi-to-English
  • Conducted during a DARPA Surprise Language
    Exercise (SLE) in June 2003
  • Put together a scenario with miserly data
    resources:
  • Elicited Data corpus: 17589 phrases
  • Cleaned portion (top 12%) of LDC dictionary:
    2725 Hindi words (23612 translation pairs)
  • Manually acquired resources during the SLE:
  • 500 manual bigram translations
  • 72 manually written phrase transfer rules
  • 105 manually written postposition rules
  • 48 manually written time expression rules
  • No additional parallel text!!

52
Manual Grammar Development
  • Covers mostly NPs, PPs and VPs (verb complexes)
  • 70 grammar rules, covering basic and recursive
    NPs and PPs, verb complexes of main tenses in
    Hindi (developed in two weeks)

53
Adding a Strong Decoder
  • XFER system produces a full lattice of
    translation fragments, ranging from single words
    to long phrases or sentences
  • Edges are scored using word-to-word translation
    probabilities, trained from the limited bilingual
    data
  • Decoder uses an English LM (70m words)
  • Decoder can also reorder words or phrases (up to
    4 positions ahead)
  • For XFER (strong), ONLY edges from the basic XFER
    system are used!

54
Testing Conditions
  • Tested on a section of JHU-provided data: 258
    sentences with four reference translations
  • SMT system (stand-alone)
  • EBMT system (stand-alone)
  • XFER system (naïve decoding)
  • XFER system with strong decoder
  • No grammar rules (baseline)
  • Manually developed grammar rules
  • Automatically learned grammar rules
  • XFER+SMT with strong decoder (MEMT)

55
Automatic MT Evaluation Metrics
  • Intended to replace or complement human
    assessment of the quality of MT-produced
    translations
  • Principal idea: compare how similar the
    MT-produced translation is to human
    translation(s) of the same input
  • Main metric in use today: IBM's BLEU
  • Count n-gram (unigram, bigram, trigram, etc.)
    overlap between the MT output and several
    reference translations
  • Calculate a combined n-gram precision score
    (a sketch follows)
  • NIST variant of BLEU used for official DARPA
    evaluations
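A minimal Python sketch of the core BLEU computation
(clipped n-gram precision combined with a brevity
penalty) for one candidate against multiple references;
real BLEU is corpus-level, and this toy version uses no
smoothing.

    import math
    from collections import Counter

    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n])
                       for i in range(len(tokens) - n + 1))

    def bleu(candidate, references, max_n=4):
        precisions = []
        for n in range(1, max_n + 1):
            cand = ngrams(candidate, n)
            max_ref = Counter()  # clip by max count in any reference
            for ref in references:
                for g, c in ngrams(ref, n).items():
                    max_ref[g] = max(max_ref[g], c)
            clipped = sum(min(c, max_ref[g]) for g, c in cand.items())
            precisions.append(clipped / max(1, sum(cand.values())))
        if min(precisions) == 0:
            return 0.0
        # Brevity penalty: candidates shorter than the closest
        # reference length are penalized
        ref_len = min((abs(len(r) - len(candidate)), len(r))
                      for r in references)[1]
        bp = (1.0 if len(candidate) >= ref_len
              else math.exp(1 - ref_len / len(candidate)))
        return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)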

56
Results on JHU Test Set
System                         | BLEU  | M-BLEU | NIST
EBMT                           | 0.058 | 0.165  | 4.22
SMT                            | 0.093 | 0.191  | 4.64
XFER (naïve), manual grammar   | 0.055 | 0.177  | 4.46
XFER (strong), no grammar      | 0.109 | 0.224  | 5.29
XFER (strong), learned grammar | 0.116 | 0.231  | 5.37
XFER (strong), manual grammar  | 0.135 | 0.243  | 5.59
XFER+SMT                       | 0.136 | 0.243  | 5.65
57
Effect of Reordering in the Decoder

58
Observations and Lessons (I)
  • XFER with strong decoder outperformed SMT, even
    without any grammar rules, in the miserly data
    scenario
  • SMT trained on elicited phrases that are very
    short
  • SMT has insufficient data to train more
    discriminative translation probabilities
  • XFER takes advantage of morphology
  • Token coverage without morphology: 0.6989
  • Token coverage with morphology: 0.7892
  • Manual grammar currently somewhat better than
    automatically learned grammar
  • Learned rules did not yet use version-space
    learning
  • Large room for improvement on learning rules
  • Importance of effective, well-founded scoring of
    learned rules

59
Observations and Lessons (II)
  • MEMT (XFER and SMT) based on strong decoder
    produced best results in the miserly scenario.
  • Reordering within the decoder provided very
    significant score improvements
  • Much room for more sophisticated grammar rules
  • Strong decoder can carry some of the reordering
    burden

60
XFER MT for Hebrew-to-English
  • Two month intensive effort to apply our XFER
    approach to the development of a
    Hebrew-to-English MT system
  • Challenges:
  • No large parallel corpus
  • Only a limited-coverage translation lexicon
  • Morphology: incomplete analyzer available
  • Plan:
  • Collect available resources, establish
    methodology for processing Hebrew input
  • Translate and align Elicitation Corpus
  • Learn XFER rules
  • Develop (small) manual XFER grammar as a point of
    comparison
  • Evaluate performance on unseen test data using
    automatic evaluation metrics

61
Hebrew-to-English XFER System
  • First end-to-end integration of system completed
    yesterday (March 2nd)
  • No transfer rules yet, just word-to-word
    Hebrew-to-English translation
  • No strong decoding yet
  • Amusing Example:

office brains the government crack HBW in
committee the elections the central et the
possibility conduct poll crowd about TWKNIT the
NSIGH from goat
62
Conclusions
  • Transfer rules (both manual and learned) offer
    significant contributions that can complement
    existing data-driven approaches
  • Also in medium and large data settings?
  • Initial steps toward the development of a
    statistically grounded transfer-based MT system with:
  • Rules that are scored based on a well-founded
    probability model
  • Strong and effective decoding that incorporates
    the most advanced techniques used in SMT decoding
  • Working from the opposite end of research on
    incorporating models of syntax into standard
    SMT systems (Knight et al.)
  • Our direction makes sense in the limited data
    scenario

63
Future Directions
  • Continued work on automatic rule learning
    (especially Seeded Version Space Learning)
  • Improved leveraging from manual grammar
    resources, interaction with bilingual speakers
  • Developing a well-founded model for assigning
    scores (probabilities) to transfer rules
  • Improving the strong decoder to better fit the
    specific characteristics of the XFER model
  • MEMT with improved:
  • Combination of output from different translation
    engines with different scorings
  • Strong decoding capabilities

64
Language Modeling for MT
  • Technique stolen from Speech Recognition
  • Try to match the statistics of English
  • Trigram example: George W.
  • Combine the quality score with the trigram score,
    to factor in English-like-ness
  • Problem: this gives billions of possible overall
    translations
  • Solution: beam search. At each step, throw out
    all but the best possibilities (see the sketch
    below)
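A minimal Python sketch of the beam search described
above: each position offers scored translation options,
a trigram language model (assumed given) scores
English-like-ness, and all but the best partial
hypotheses are pruned at every step.

    import heapq

    def beam_search(steps, lm_logprob, beam=5):
        """steps: per-position lists of (word, quality) options.
        lm_logprob(w2, w1, w): trigram log-probability (assumed
        given; a real LM backs off to bigrams/unigrams)."""
        hyps = [(0.0, ("<s>", "<s>"), [])]  # (cost, history, words)
        for options in steps:
            expanded = []
            for cost, (w2, w1), words in hyps:
                for word, quality in options:
                    # combine quality score with trigram score
                    new_cost = cost - quality - lm_logprob(w2, w1, word)
                    expanded.append((new_cost, (w1, word),
                                     words + [word]))
            # beam pruning: keep only the best few hypotheses
            hyps = heapq.nsmallest(beam, expanded, key=lambda h: h[0])
        return min(hyps, key=lambda h: h[0])[2]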

65
NESPOLE!
  • Speech-to-speech translation for eCommerce
  • CMU, Karlsruhe, IRST, CLIPS, 2 commercial
    partners
  • Improved limited-domain speech translation
  • Experiment with multimodality and with MEMT
  • EU-side has strict scheduling and deliverables
  • First test domain Italian travel agency
  • Second showcase international Help desk
  • Tied in to CSTAR-III