MT that Deceives

Lecture 2: MT that Deceives
Lecture video: web TODO
YouTube:

{{#ev:youtube|https://www.youtube.com/watch?v=MR9FyEi_hrE&list=PLpiLOsNLsfmbeH-b865BwfH15W0sat02V&index=2|800|center}}

MT systems make various types of errors. In this lecture, we look at some deceptive examples as well as systematic errors caused by inadequacies in current translation models.

For example, many popular MT systems, such as Google Translate or Bing Translator (for certain languages), are based purely on statistical models. Such models observe word and phrase co-occurrences in parallel texts and try to learn translation equivalents, which can lead to unexpected errors.
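
As a minimal illustration of the underlying idea, the following Python sketch (a toy, not any real system) counts how often source and target words occur in the same sentence pair of a tiny invented corpus; frequently co-occurring pairs are candidate translation equivalents. Real systems estimate proper alignment and translation probabilities from millions of sentence pairs.

from collections import Counter
from itertools import product

# Tiny invented English-Czech parallel corpus; real systems use millions of pairs.
parallel = [
    ("i have a dog", "mám psa"),
    ("i have no cat", "nemám žádnou kočku"),
]

# Count how often each English word co-occurs with each Czech word in the
# same sentence pair; high counts suggest translation equivalents.
cooccurrence = Counter()
for en, cs in parallel:
    for e, c in product(en.split(), cs.split()):
        cooccurrence[(e, c)] += 1

for (e, c), count in cooccurrence.most_common(5):
    print(f"{e} ~ {c}: {count}")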

Negation in English-Czech Translation

Example of an error during phrase extraction: the system learns the translation pair "nemám" = "I have", which has the opposite meaning.

In some cases, the statistical approach leads to systematic errors. The picture illustrates a common issue with negation -- in many languages (such as Czech), negation is expressed by a prefix ("ne" in this case). Moreover, Czech uses double negatives -- consider the sentence:

Nemám žádnou kočku.

Its English translation is:

I have no cat.

Word by word, however, the Czech sentence actually says:

I_do_not_have no cat.

Most statistical MT systems are based on word alignment, i.e. finding which words correspond to each other. From this sentence pair, the automatic procedure learns a wrong translation rule:

I have = nemám

Whenever this rule is applied, the meaning of the translation is completely reversed.
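
The following Python sketch shows how such a pair gets extracted. It implements the standard consistency criterion of phrase-based systems over a single word-aligned sentence pair; the alignment links are a plausible but hypothetical aligner output, chosen to mirror the figure above.

# One word-aligned Czech-English sentence pair.
cs = ["Nemám", "žádnou", "kočku", "."]
en = ["I", "have", "no", "cat", "."]

# Hypothetical word alignment as (Czech index, English index) links:
# "Nemám" links to both "I" and "have"; the prefix "ne-" is not a
# separate token, so the negation carries no link of its own.
alignment = {(0, 0), (0, 1), (1, 2), (2, 3), (3, 4)}

def consistent(cs_span, en_span):
    """A phrase pair is consistent if no alignment link crosses its boundary."""
    inside = False
    for i, j in alignment:
        in_cs = cs_span[0] <= i <= cs_span[1]
        in_en = en_span[0] <= j <= en_span[1]
        if in_cs != in_en:
            return False
        inside = inside or (in_cs and in_en)
    return inside  # require at least one link inside the phrase pair

# Enumerate all consistent phrase pairs up to two words on each side;
# the output includes the polarity-reversing pair "Nemám ||| I have".
for i1 in range(len(cs)):
    for i2 in range(i1, min(i1 + 2, len(cs))):
        for j1 in range(len(en)):
            for j2 in range(j1, min(j1 + 2, len(en))):
                if consistent((i1, i2), (j1, j2)):
                    print(" ".join(cs[i1:i2 + 1]), "|||", " ".join(en[j1:j2 + 1]))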

Named Entities

Other notorious errors involve named entities, for example:

Jan Novák potkal Karla Poláka. -> John Smith met Charles Pole.

The name Novák is sometimes translated as Smith, as both are very common surnames in their respective languages.

Inadequate Modeling of Semantic Roles

Example of a system's failure to translate semantic roles correctly. Screenshot of Google Translate producing identical translations of radically different sentences.

There is also a disconnect when translating between a morphologically poor and a morphologically rich language. While the former tends to express argument roles through word order (think English), the latter often uses inflectional affixes. A statistical system which simply learns correspondences between words and short phrases then fails to capture the difference in meaning:

Pes dává kočce myš. (the dog gives the cat a mouse)
Psovi dává myš kočku. (to the dog, the mouse gives a cat)
Psovi dává kočka myš. (to the dog, the cat gives a mouse)

All of these examples are translated identically by Google Translate at the moment, even though their meanings are clearly radically different.
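
A toy Python sketch (with an invented lexicon, not Google's actual model) shows why: if different case forms of a noun all map to the same English word and the case itself is not modelled, the three sentences become identical bags of target words, and a target-side language model will then realize all of them with the same, most fluent word order.

# Hypothetical word-level lexicon: case distinctions on the Czech side
# are lost, since every form maps onto the same English word.
lexicon = {
    "pes": "dog", "psovi": "dog",
    "kočka": "cat", "kočce": "cat", "kočku": "cat",
    "myš": "mouse",
    "dává": "gives",
}

def target_bag(sentence):
    """The multiset of English words a word-by-word system would produce."""
    return sorted(lexicon[w.lower().strip(".")] for w in sentence.split())

sentences = [
    "Pes dává kočce myš.",
    "Psovi dává myš kočku.",
    "Psovi dává kočka myš.",
]

for s in sentences:
    print(s, "->", target_bag(s))

# All three sentences yield ['cat', 'dog', 'gives', 'mouse'], so any fixed,
# fluent realization such as "The dog gives the cat a mouse." is equally
# plausible for each of them once the case information has been discarded.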

Numerals

Translation dictionaries of statistical MT systems are full of potential errors in numbers. Consider the possible translations of the number 1.96 according to our English-Czech translation system:

1.96 ||| , 96 1 ,
1.96 ||| , 96 1
1.96 ||| , 96
1.96 ||| 1,96
1.96 ||| 1.96
1.96 ||| 96 1 ,
1.96 ||| 96 1
1.96 ||| 96

While the wrong translations may be improbable according to the model, they can still appear in the final translation in some situations.
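
A simple sanity check can flag such entries: compare the digit sequences on both sides of each phrase-table line. The Python sketch below is a hypothetical filter over the entries listed above, not a component of the quoted system.

import re

# The phrase-table entries for "1.96" shown above, in source ||| target form.
entries = [
    "1.96 ||| , 96 1 ,",
    "1.96 ||| , 96 1",
    "1.96 ||| , 96",
    "1.96 ||| 1,96",
    "1.96 ||| 1.96",
    "1.96 ||| 96 1 ,",
    "1.96 ||| 96 1",
    "1.96 ||| 96",
]

def digit_sequence(text):
    """All digits in order, ignoring punctuation, spacing and decimal style."""
    return "".join(re.findall(r"\d", text))

# Keep entries whose digits survive translation intact; flag the rest.
for entry in entries:
    src, tgt = (side.strip() for side in entry.split("|||"))
    verdict = "ok" if digit_sequence(src) == digit_sequence(tgt) else "SUSPICIOUS"
    print(f"{src} -> {tgt}: {verdict}")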

Moreover, MT systems will often translate the actual number correctly but confuse the units, e.g.:

40 miles -> 40 km

On the other hand, unit conversions made by human translators lead to peculiar pairs of numbers being observed in parallel data:

40 (km) ||| 24.8548 (miles)
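
The pair above is not noise in the data: a quick arithmetic check confirms that the human translator simply converted the units, since 40 km is almost exactly 24.8548 miles. A system that learns such pairs without any notion of units then risks "converting" numbers that should have been copied verbatim.

# Verify that the observed number pair comes from a kilometre-to-mile
# conversion (1 mile = 1.609344 km).
KM_PER_MILE = 1.609344

km = 40
miles = km / KM_PER_MILE
print(f"{km} km = {miles:.4f} miles")  # prints: 40 km = 24.8548 miles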