{{Infobox
|title = Lecture 5: Automatic MT Evaluation
|image = [[File:camel.png|200px]]
|label1 = Lecture video:
|data1 = web (TODO) <br/> [https://www.youtube.com/watch?v=Bj_Hxi91GUM&index=5&list=PLpiLOsNLsfmbeH-b865BwfH15W0sat02V Youtube]
|label2 = Supplementary materials:
|data2 = [[File:bleu.pdf]]
|label3 = Exercises:
|data3 = [https://codex3.ms.mff.cuni.cz/codex-trans/?groupId=3&taskId=8&module=groups%2Ftasks&page=specification BLEU] <br/> [https://codex3.ms.mff.cuni.cz/codex-trans/?groupId=3&taskId=9&module=groups%2Ftasks&page=specification PER]
}}


{{#ev:youtube|https://www.youtube.com/watch?v=Bj_Hxi91GUM&index=5&list=PLpiLOsNLsfmbeH-b865BwfH15W0sat02V|800|center}}

== Reference Translations ==

The following picture<ref name="scratching">Ondřej Bojar, Matouš Macháček, Aleš Tamchyna, Daniel Zeman. ''Scratching the Surface of Possible Translations''</ref> illustrates the issue of reference translations:

Out of all possible sequences of words in the given language, only some are grammatically correct sentences. An overlapping set is formed by the understandable translations of the source sentence (note that these are not necessarily grammatical). Possible reference translations can then be viewed as a subset of the understandable translations. Only some of these can be reached by the MT system. Typically, we only have several reference translations at our disposal; often we have just a single reference.

== PER ==


Position-independent error rate<ref name="per">C. Tillmann, S. Vogel, H. Ney, A. Zubiaga, H. Sawaf. ''[https://www-i6.informatik.rwth-aachen.de/publications/download/203/TillmannC.VogelS.NeyH.SawafH.ZubiagaA.--AcceleratedDP-basedSearchforStatisticalTranslation--1997.pdf Accelerated DP Based Search for Statistical Translation]''</ref> (PER) is a simple measure which counts the number of correct words in the MT output, regardless of their position. It is calculated using the following formula:
 
<math>\text{PER} = 1 - \frac{\text{correct} - \max(0, c - r)}{r}</math>
 
Where <math>r</math> and <math>c</math> are the numbers of tokens in the reference translation and the candidate translation, respectively, and <math>\text{correct}</math> is the number of correct words described above.
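
The following is a minimal Python sketch of this computation (whitespace tokenization and lowercasing are simplifying assumptions here, and the function name is ours, not part of any standard tool):

<syntaxhighlight lang="python">
from collections import Counter

def per(candidate, reference):
    """Position-independent error rate, as defined above."""
    # Deliberately naive tokenization: lowercased whitespace splitting.
    cand = candidate.lower().split()
    ref = reference.lower().split()
    c, r = len(cand), len(ref)
    # A candidate word counts as correct if it also occurs in the reference;
    # each reference occurrence can be matched at most once (multiset intersection).
    correct = sum((Counter(cand) & Counter(ref)).values())
    return 1 - (correct - max(0, c - r)) / r

print(per("The happiness of dreaming camels",
          "On the happiness of dreaming camels"))  # 1 - 5/6 = 0.1666...
</syntaxhighlight>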


== BLEU ==

BLEU<ref name="bleu">Kishore Papineni, Salim Roukos, Todd Ward, Wei-Jing Zhu. ''BLEU: a Method for Automatic Evaluation of Machine Translation''</ref> (Bilingual evaluation understudy) remains the most popular metric for automatic evaluation of MT output quality.

While PER only looks at individual words, BLEU also considers sequences of words. Informally, we can describe BLEU as the amount of overlap of <math>n</math>-grams between the candidate translation and the reference (more specifically unigrams, bigrams, trigrams and 4-grams, in the standard formulation).

The formal definition is as follows:

<math>\text{BLEU} = \text{BP} \cdot \exp \left( \sum_{i=1}^{n} \lambda_i \log p_i \right)</math>

Where (almost always) <math>\lambda_i = 1/n</math> and <math>n = 4</math>. <math>p_i</math> stands for <math>i</math>-gram precision and is computed from the number of <math>i</math>-grams in the candidate translation which are confirmed by the reference.


Each reference <math>n</math>-gram can be used to confirm the candidate <math>n</math>-gram only once (''clipping''), making it impossible to game BLEU by producing many occurrences of a single common word (such as ''"the"'').
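
As a quick illustration in Python (the sentences are made up), a candidate consisting only of repeated ''the'' gets credit for at most as many occurrences as the reference contains:

<syntaxhighlight lang="python">
from collections import Counter

candidate = "the the the the the".split()
reference = "the cat is on the mat".split()

# Multiset intersection clips each word at its count in the reference:
# "the" occurs five times in the candidate but only twice in the reference.
confirmed = sum((Counter(candidate) & Counter(reference)).values())
print(confirmed)  # 2, not 5
</syntaxhighlight>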


BP stands for ''brevity penalty''. Since BLEU is a kind of precision, short outputs (which only contain words that the system is sure about) would score highly without BP. This penalty is defined simply as:

<math>
\text{BP} = \begin{cases} 1, & \mbox{if }  c > r \\ \exp(1 - r/c),  & \mbox{if } c \leq r. \end{cases}
</math>
Where <math>r</math> and <math>c</math> are again the numbers of tokens in the reference translation and the candidate translation, respectively.
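
Putting the pieces together, here is a compact Python sketch of the whole computation for a document given as parallel lists of candidate and reference sentences. Whitespace tokenization, lowercasing, <math>n = 4</math> and uniform weights are simplifying assumptions, and the function names are ours:

<syntaxhighlight lang="python">
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidates, references, max_n=4):
    """BLEU with clipping and brevity penalty, accumulated over all sentences."""
    confirmed = [0] * max_n   # clipped n-gram matches, per n-gram order
    total = [0] * max_n       # candidate n-grams, per n-gram order
    c_len = r_len = 0
    for cand_sent, ref_sent in zip(candidates, references):
        cand, ref = cand_sent.lower().split(), ref_sent.lower().split()
        c_len += len(cand)
        r_len += len(ref)
        for n in range(1, max_n + 1):
            cand_counts = Counter(ngrams(cand, n))
            ref_counts = Counter(ngrams(ref, n))
            # Clipping: each reference n-gram confirms a candidate n-gram only once.
            confirmed[n - 1] += sum((cand_counts & ref_counts).values())
            total[n - 1] += sum(cand_counts.values())
    if 0 in confirmed:
        return 0.0  # the geometric mean is zero as soon as one precision is zero
    precisions = [conf / tot for conf, tot in zip(confirmed, total)]
    bp = 1.0 if c_len > r_len else math.exp(1 - r_len / c_len)
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
</syntaxhighlight>

This follows the description above; production implementations additionally fix tokenization and handle edge cases such as empty segments.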


=== Example ===

Consider the following situation:

{| class="wikitable"
! !! !! Confirmed <math>n</math>-grams (<math>n = 1, 2, 3, 4</math>)
|-
! Source
| Vom Glück der traumenden Kamele ||
|-
! Reference
| On the happiness of dreaming camels ||
|-
! MT Output
| The happiness of dreaming camels || 5, 4, 3, 2
|}


The number of confirmed MT n-grams is 5, 4, 3 and 2 for unigrams, bigrams, trigrams and 4-grams, respectively. The MT output is one word shorter than the reference, therefore:


<math>\text{BP} = \exp(1 - 6/5) \doteq 0.82</math>

The geometric mean of precisions is:


<math>\exp(\frac{1}{4} \log(\frac{5}{6}) + \frac{1}{4} \log (\frac{4}{5}) + \frac{1}{4} \log(\frac{3}{4}) + \frac{1}{4} \log(\frac{2}{3})) \doteq 0.76</math>
 
Note that you can equivalently take the fourth root of the product of the precisions, i.e. <math>\sqrt[4]{\frac{5}{6} \cdot \frac{4}{5} \cdot \frac{3}{4} \cdot \frac{2}{3}}</math>
 
The final BLEU score is then <math>0.82 \cdot 0.76 \doteq 0.62</math>.
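
The arithmetic can be checked with a few lines of Python (the fractions and sentence lengths are taken directly from the example above):

<syntaxhighlight lang="python">
import math

fractions = [5/6, 4/5, 3/4, 2/3]       # the n-gram fractions used above
bp = math.exp(1 - 6/5)                  # c = 5, r = 6
geo_mean = math.exp(sum(math.log(f) for f in fractions) / 4)
print(round(bp, 2), round(geo_mean, 2), round(bp * geo_mean, 2))  # 0.82 0.76 0.62
</syntaxhighlight>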
 
BLEU is often multiplied by 100 for readability.
 
BLEU is a document-level metric. This means that counts of confirmed n-grams are collected for all sentences in the translated document and then the geometric mean of n-gram precisions is computed from the accumulated counts. For a single sentence, BLEU is often zero (since there is frequently no matching 4-gram or even trigram).


=== Multiple Reference Translations ===
BLEU supports multiple references. In that case, an n-gram in the MT output is counted as correct if it is confirmed by ''any'' of the reference translations. If an n-gram occurs multiple times in the MT output, it has to occur that many times in at least one of the references as well.
The original paper is not clear about BP in this case. The usual practice is to take the reference translation which is closest in length to the MT output and calculate BP from that. (Note that even this specification is ambiguous, since two references can be equally close in length to the given hypothesis: a longer and a shorter one.)
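
A Python sketch of the two conventions just described (the function names are ours; the tie between two equally close references is broken toward the shorter one here, which is an arbitrary choice):

<syntaxhighlight lang="python">
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def confirmed_multi_ref(cand_tokens, ref_tokens_list, n):
    """Clipped n-gram matches against several references."""
    cand_counts = Counter(ngrams(cand_tokens, n))
    ref_counts = [Counter(ngrams(ref, n)) for ref in ref_tokens_list]
    # A candidate n-gram counts at most as many times as it occurs in the
    # single reference where it occurs most often.
    return sum(min(count, max(rc[gram] for rc in ref_counts))
               for gram, count in cand_counts.items())

def reference_length_for_bp(cand_tokens, ref_tokens_list):
    """Length of the reference closest in length to the candidate."""
    c = len(cand_tokens)
    return min((abs(len(ref) - c), len(ref)) for ref in ref_tokens_list)[1]
</syntaxhighlight>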


== Other Metrics ==


* WMT Metrics Shared Task<ref name="wmtmetrics">Matouš Macháček and Ondřej Bojar. ''[http://www.statmt.org/wmt14/pdf/W14-3336.pdf Results of the WMT14 Metrics Shared Task]''</ref> -- an annual shared task in automatic evaluation of MT, see the [http://www.statmt.org/wmt15/metrics-task/ task web page].
 
* Translation Error Rate<ref name="ter">Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, John Makhoul. ''[https://www.cs.umd.edu/~snover/pub/amta06/ter_amta.pdf A Study of Translation Edit Rate with Targeted Human Annotation]''</ref> (TER) -- an edit-distance based metric on the level of phrases
 
* METEOR<ref name="meteor">Alon Lavie, Michael Denkowski. ''[http://www.cs.cmu.edu/afs/cs.cmu.edu/project/mteval-1/Papers/MT-Journal-2009/meteor-mtj-2009.pdf The METEOR Metric for Automatic Evaluation of Machine Translation]''</ref> -- a robust metric with support for paraphrasing
 
== Exercises ==


* [https://codex3.ms.mff.cuni.cz/codex-trans/?groupId=3&taskId=8&module=groups%2Ftasks&page=specification Implement BLEU]
* [https://codex3.ms.mff.cuni.cz/codex-trans/?groupId=3&taskId=9&module=groups%2Ftasks&page=specification Implement PER]


== References ==


<references />