MT that Deceives
{{Infobox
|title = Lecture 2: MT that Deceives
|image = [[File:knitting.png|200px]]
|label1 = Lecture video:
|data1 = [http://example.com web '''TODO'''] <br/> [https://www.youtube.com/watch?v=MR9FyEi_hrE&list=PLpiLOsNLsfmbeH-b865BwfH15W0sat02V&index=2 Youtube]
}}


{{#ev:youtube|https://www.youtube.com/watch?v=MR9FyEi_hrE&list=PLpiLOsNLsfmbeH-b865BwfH15W0sat02V&index=2|800|center}}


MT systems make various types of errors. In this lecture, we look at some deceitful examples as well as systematic errors caused by inadequacies in current translation models.
 
For example, many popular MT systems, such as [http://translate.google.com Google Translate] or [http://www.bing.com/translator/ Bing Translator] (for certain languages), are based purely on statistical models. Such models observe word and phrase co-occurrences in parallel texts and try to learn translation equivalents, which can lead to unexpected errors.
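As a very rough illustration (a toy sketch, not the pipeline of any particular system), translation equivalents can be approximated by counting how often words co-occur in sentence-aligned parallel data; real systems use proper word-alignment models rather than raw counts.

<syntaxhighlight lang="python">
from collections import Counter
from itertools import product

# Tiny sentence-aligned Czech-English corpus; purely illustrative.
parallel = [
    ("nemám žádnou kočku", "i have no cat"),
    ("mám kočku", "i have a cat"),
    ("mám psa", "i have a dog"),
]

# Count co-occurrences of (Czech word, English word) within aligned
# sentence pairs, and the total number of such pairings per Czech word.
pair_counts = Counter()
src_totals = Counter()
for cs, en in parallel:
    for c, e in product(cs.split(), en.split()):
        pair_counts[(c, e)] += 1
        src_totals[c] += 1

def score(c, e):
    """Crude co-occurrence score resembling P(e | c)."""
    return pair_counts[(c, e)] / src_totals[c]

print(score("kočku", "cat"))   # 0.25 -- a plausible equivalent
print(score("nemám", "have"))  # 0.25 -- the seed of the negation error below
</syntaxhighlight>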
 
== Negation in English-Czech Translation ==
 
[[File:nemam_kocku.png|thumb|300px|'''Example of an error during phrase extraction.''' The system learns a translation pair ''"nemám" = "I have"'' which has the opposite meaning.]]
 
In some cases, the statistical approach leads to '''systematic errors'''. The picture illustrates a common issue with negation -- in many languages (such as Czech), negation is expressed by a prefix ("''ne''" in this case). Moreover, Czech uses double negatives -- consider the sentence:
 
: ''Nemám žádnou kočku.''
 
Its English translation is:
 
: ''I have no cat.''
 
Word by word, however, the Czech sentence actually says:
 
: ''I_do_not_have no cat.''
 
Most statistical MT systems are based on word alignment, i.e. finding which words in the two sentences correspond to each other. From this sentence pair, the automatic procedure can learn an incorrect translation rule:
 
: ''I have'' = ''nemám''
 
Whenever this rule is applied, the meaning of the translation is completely reversed.
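Phrase-based systems typically extract all phrase pairs that are ''consistent'' with the word alignment. The following minimal Python sketch (the alignment is hypothetical and chosen for illustration) shows how the pair ''nemám'' = ''I have'' falls out of that procedure when the English negation is carried only by the word ''no'':

<syntaxhighlight lang="python">
# A minimal sketch of consistency-based phrase-pair extraction.
# The alignment is illustrative: "nemám" is linked to "I have", while
# the negation survives only in the link "žádnou" - "no".
cs = "Nemám žádnou kočku .".split()
en = "I have no cat .".split()
alignment = {(0, 0), (0, 1), (1, 2), (2, 3), (3, 4)}  # (Czech idx, English idx)

def extract_phrase_pairs(src, tgt, links, max_len=2):
    """Extract phrase pairs consistent with the word alignment."""
    pairs = []
    for i1 in range(len(src)):
        for i2 in range(i1, min(i1 + max_len, len(src))):
            # Target positions linked to the source span [i1, i2]
            tgt_pos = [j for (i, j) in links if i1 <= i <= i2]
            if not tgt_pos:
                continue
            j1, j2 = min(tgt_pos), max(tgt_pos)
            # Consistency: nothing inside the target span may be linked
            # to a source word outside the source span.
            if all(i1 <= i <= i2 for (i, j) in links if j1 <= j <= j2):
                pairs.append((" ".join(src[i1:i2 + 1]),
                              " ".join(tgt[j1:j2 + 1])))
    return pairs

print(extract_phrase_pairs(cs, en, alignment))
# The output includes ('Nemám', 'I have') -- the reversed-meaning rule.
</syntaxhighlight>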
 
== Named Entities ==
 
Other notorious errors involve named entities, such as:
 
: ''Jan Novák potkal Karla Poláka. -> John Smith met Charles Pole.''
 
The name ''Novák'' is sometimes translated as ''Smith'', as both are very common surnames in their respective languages.
 
== Inadequate Modeling of Semantic Roles ==
 
[[File:pes-kocka-mys.png|thumb|500px|'''Example of a system's failure to translate semantic roles.''' Screenshot of Google Translate producing identical translations of radically different sentences.]]
 
There is also a disconnect when translating between a morphologically poor and a morphologically rich language. While the former tends to express argument roles using '''word order''' (think English), the latter often uses '''inflectional affixes'''. A statistical system that simply learns correspondences between words and short phrases then fails to capture the difference in meaning:
 
: ''Pes dává kočce myš.''    (the dog gives the cat a mouse)
 
: ''Psovi dává myš kočku.''  (to the dog, the mouse gives a cat)
 
: ''Psovi dává kočka myš.''  (to the dog, the cat gives a mouse)
 
All of these examples are translated identically by [https://translate.google.com Google Translate] at the moment, even though their meanings are clearly radically different.
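To see why discarding the case endings is fatal, consider a toy word-by-word lookup (the lexicon below is purely illustrative, not how Google Translate works internally): once every inflected form maps to the same English words, sentences with different semantic roles come out identical.

<syntaxhighlight lang="python">
# A toy word-by-word lookup that ignores Czech case endings.
# The lexicon is illustrative: every inflected form of a noun maps to
# the same English words, so the case information marking who gives
# what to whom is discarded.
lexicon = {
    "pes": "the dog", "psovi": "the dog",
    "kočka": "the cat", "kočce": "the cat", "kočku": "the cat",
    "myš": "a mouse",
    "dává": "gives",
}

def naive_translate(sentence):
    words = sentence.lower().rstrip(".").split()
    return " ".join(lexicon[w] for w in words) + "."

print(naive_translate("Pes dává kočce myš."))    # the dog gives the cat a mouse.
print(naive_translate("Psovi dává kočka myš."))  # the dog gives the cat a mouse.
# Identical output, although in the second sentence it is the cat who
# does the giving and the dog who receives the mouse.
</syntaxhighlight>

Real phrase-based systems are of course more sophisticated than a dictionary lookup, but the failure in the screenshot is of the same kind: the case endings that carry the semantic roles are not reflected in the output.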
 
== Numerals ==
 
Translation dictionaries of statistical MT systems are full of potential errors in numbers. Consider the possible translations of the number ''1.96'' according to our English-Czech translation system:
 
 1.96 ||| , 96 1 ,
 1.96 ||| , 96 1
 1.96 ||| , 96
 1.96 ||| 1,96
 1.96 ||| 1.96
 1.96 ||| 96 1 ,
 1.96 ||| 96 1
 1.96 ||| 96
 
While the wrong translations may be improbable according to the model, they can still appear in the final translation in some situations.
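The ''|||''-separated lines above follow the usual phrase-table notation, with the source phrase on the left and a candidate translation on the right. Purely as an illustration (not part of any actual system), one could flag suspicious entries by comparing the digit sequences on the two sides:

<syntaxhighlight lang="python">
import re

# Sanity-check candidate translations of a numeral, using the
# "source ||| target" entries listed above.
entries = [
    "1.96 ||| , 96 1 ,",
    "1.96 ||| , 96 1",
    "1.96 ||| , 96",
    "1.96 ||| 1,96",
    "1.96 ||| 1.96",
    "1.96 ||| 96 1 ,",
    "1.96 ||| 96 1",
    "1.96 ||| 96",
]

def digits(text):
    """Return the digit sequence, ignoring punctuation and spacing."""
    return "".join(re.findall(r"\d", text))

for line in entries:
    src, tgt = (part.strip() for part in line.split("|||"))
    ok = digits(src) == digits(tgt)
    print(f"{src:6} -> {tgt:10} {'ok' if ok else 'SUSPICIOUS'}")
# Only "1,96" (Czech decimal comma) and "1.96" itself pass the check.
</syntaxhighlight>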
 
Moreover, MT systems will often translate the actual number correctly but confuse the units, e.g.:
 
: ''40 miles -> 40 km''
 
Conversely, when such unit conversions appear in the parallel data, the system can learn peculiar translations of the numbers themselves:
 
 40   ||| 24.8548
 (km)     (miles)
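The peculiar entry above is simply a unit conversion performed by a human translator in the parallel data: 40 kilometres is roughly 24.85 miles, so the system memorised ''24.8548'' as a translation of ''40''. A quick check of the arithmetic (plain Python, illustration only):

<syntaxhighlight lang="python">
# 40 km expressed in miles (1 mile = 1.609344 km by definition).
KM_PER_MILE = 1.609344
print(round(40 / KM_PER_MILE, 4))   # 24.8548 -- the number in the entry above
</syntaxhighlight>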
