Scoring and Optimization

Lecture 13: Scoring and Optimization
Lecture video: web TODO
Youtube

{{#ev:youtube|https://www.youtube.com/watch?v=oxhc0Nv_ySw&index=11&list=PLpiLOsNLsfmbeH-b865BwfH15W0sat02V|800|center}}

Features of MT Models

Phrase Translation Probabilities

Phrase translation probabilities are calculated from occurrences of phrase pairs extracted from the parallel training data. Usually, MT systems work with the following two conditional probabilities:

<math>\phi(\bar{f} \mid \bar{e}) = \frac{\mathrm{count}(\bar{e}, \bar{f})}{\mathrm{count}(\bar{e})} \qquad \phi(\bar{e} \mid \bar{f}) = \frac{\mathrm{count}(\bar{e}, \bar{f})}{\mathrm{count}(\bar{f})}</math>

These probabilities are estimated by simply counting how many times (for the first formula) we saw the Czech phrase <math>\bar{f}</math> aligned to the English phrase <math>\bar{e}</math> and how many times we saw <math>\bar{e}</math> in total. For example, based on the following excerpt from (sorted) extracted phrase pairs, we estimate that <math>\phi(\text{naznačena v programu} \mid \text{estimated in the programme}) = \frac{3}{9} \approx 0.33</math>.

estimated in the programme ||| naznačena v programu
estimated in the programme ||| naznačena v programu
estimated in the programme ||| naznačena v programu
estimated in the programme ||| odhadován v programu
estimated in the programme ||| odhadovány v programu
estimated in the programme ||| odhadovány v programu 
estimated in the programme ||| předpokládal program
estimated in the programme ||| v programu uvedeným
estimated in the programme ||| v programu uvedeným
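
The counting itself is straightforward. Here is a minimal sketch in Python, using the phrase pairs above (the function name is ours, not taken from any MT toolkit):

<syntaxhighlight lang="python">
from collections import Counter

def phrase_translation_probs(phrase_pairs):
    """Maximum likelihood estimate of P(target | source) from extracted phrase pairs."""
    pair_counts = Counter(phrase_pairs)                   # count(e, f)
    source_counts = Counter(e for e, _ in phrase_pairs)   # count(e)
    return {(e, f): c / source_counts[e] for (e, f), c in pair_counts.items()}

pairs = [
    ("estimated in the programme", "naznačena v programu"),
    ("estimated in the programme", "naznačena v programu"),
    ("estimated in the programme", "naznačena v programu"),
    ("estimated in the programme", "odhadován v programu"),
    ("estimated in the programme", "odhadovány v programu"),
    ("estimated in the programme", "odhadovány v programu"),
    ("estimated in the programme", "předpokládal program"),
    ("estimated in the programme", "v programu uvedeným"),
    ("estimated in the programme", "v programu uvedeným"),
]

probs = phrase_translation_probs(pairs)
print(probs[("estimated in the programme", "naznačena v programu")])  # 3/9 ≈ 0.333
</syntaxhighlight>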

Lexical Weights

Lexical weights are a method for smoothing the phrase table. Infrequent phrases have unreliable probability estimates; for instance, many long phrase pairs occur only once in the corpus, resulting in estimates such as <math>\phi(\bar{f} \mid \bar{e}) = 1</math>. Several methods exist for computing lexical weights. The most common one is based on the word alignment inside the phrase pair. The probability of each foreign word is estimated as the average of the lexical translation probabilities over the English words aligned to it. Thus for the phrase pair <math>(\bar{f}, \bar{e})</math> with the set of alignment points <math>a</math>, the lexical weight is:

<math>\mathrm{lex}(\bar{f} \mid \bar{e}, a) = \prod_{i=1}^{|\bar{f}|} \frac{1}{|\{j \mid (i, j) \in a\}|} \sum_{(i, j) \in a} w(f_i \mid e_j)</math>
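
A small illustration in Python of the averaging and multiplying described above, assuming we already have a table of lexical translation probabilities w(f|e) and the alignment points of the phrase pair; the table values below are invented for the example:

<syntaxhighlight lang="python">
def lexical_weight(f_phrase, e_phrase, alignment, w):
    """Lexical weight lex(f | e, a): for each foreign word, average w(f_i | e_j)
    over the English words it is aligned to, then multiply over all foreign words."""
    weight = 1.0
    for i, f_word in enumerate(f_phrase):
        aligned = [j for (fi, j) in alignment if fi == i]
        if not aligned:
            probs = [w.get((f_word, None), 1e-9)]  # unaligned words pair with NULL
        else:
            probs = [w.get((f_word, e_phrase[j]), 1e-9) for j in aligned]
        weight *= sum(probs) / len(probs)
    return weight

# Toy example: Czech "v programu" aligned to English "in the programme"
w = {("v", "in"): 0.7, ("programu", "programme"): 0.6, ("programu", "the"): 0.1}
f = ["v", "programu"]
e = ["in", "the", "programme"]
a = [(0, 0), (1, 1), (1, 2)]  # (foreign index, English index)
print(lexical_weight(f, e, a, w))  # 0.7 * (0.1 + 0.6) / 2 = 0.245
</syntaxhighlight>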

Language Model

The task of language modeling in machine translation is to estimate how likely a sequence of words is in the target language.

When translating, the decoder generates translation hypotheses which are probable according to the translation model (i.e. the phrase table). The language model then scores these hypotheses according to how probable (common, fluent) they are in the target language. The final translation is then something like a compromise -- the sentence that is both fluent and a good translation of the input.

Similarly to the translation model, sequence probabilities are learned from data using maximum likelihood estimation. For language modeling, only monolingual data are needed (a resource available in much larger amounts than parallel texts).

Naturally, the prediction of the whole sequence has to be decomposed so that it can be reliably estimated. The most common approach is the n-gram language model, which builds upon the Markov assumption: a word depends only on a limited, fixed number of preceding words. The decomposition is done as follows:

<math>P(w_1, \ldots, w_m) = \prod_{i=1}^{m} P(w_i \mid w_1, \ldots, w_{i-1}) = \prod_{i=1}^{m} P(w_i \mid w_{i-n}, \ldots, w_{i-1})</math>

The first equality follows from the chain rule and the second from the n-th order Markov assumption. Each word is then modeled by at most n preceding words and the probability of the whole sequence is the product of the probabilities of the individual words. Smoothing is further used to supply probability estimates for unseen n-grams.
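
A minimal sketch of this decomposition for a bigram model (each word conditioned on one preceding word), with maximum likelihood counts and no smoothing; the tiny corpus is invented for illustration:

<syntaxhighlight lang="python">
from collections import Counter

def train_bigram_lm(sentences):
    """MLE bigram counts: P(w_i | w_{i-1}) = count(w_{i-1}, w_i) / count(w_{i-1})."""
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent.split() + ["</s>"]
        unigrams.update(tokens[:-1])
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def sentence_prob(sentence, unigrams, bigrams):
    """Probability of the whole sentence as a product of bigram probabilities."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    prob = 1.0
    for prev, word in zip(tokens, tokens[1:]):
        if unigrams[prev] == 0 or bigrams[(prev, word)] == 0:
            return 0.0  # unseen n-gram; real LMs use smoothing here
        prob *= bigrams[(prev, word)] / unigrams[prev]
    return prob

unigrams, bigrams = train_bigram_lm(["the cat sat", "the cat slept"])
# P(the|<s>) * P(cat|the) * P(sat|cat) * P(</s>|sat) = 1 * 1 * 0.5 * 1 = 0.5
print(sentence_prob("the cat sat", unigrams, bigrams))
</syntaxhighlight>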

A great introduction to language modeling is the video lecture by Jason Eisner. LMs are covered in more depth in the Stanford NLP lectures on Coursera; videos from the Coursera course can be found on YouTube.

Word and Phrase Penalty

For each word and for each phrase produced, the decoder pays a constant cost. Tweaking the word penalty can lead to either very short or very long output sentences (the "penalty" can also be negative, i.e. a reward). Changes to the phrase penalty can lead to outputs built from many short, word-by-word phrases (a small or negative phrase penalty makes it cheap to use as many phrases as possible) or, on the other hand, to outputs built from a few very long phrases (which is usually desirable).
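
To make the effect concrete, here is a toy sketch of how the two penalties enter the hypothesis score; the weights and the phrase segmentations are invented for the example, and sign conventions differ between toolkits:

<syntaxhighlight lang="python">
def penalty_score(phrases, word_penalty_weight, phrase_penalty_weight):
    """Contribution of the word and phrase penalties to the hypothesis score.
    Each produced word and each used phrase adds a constant cost."""
    num_words = sum(len(p.split()) for p in phrases)
    num_phrases = len(phrases)
    return word_penalty_weight * num_words + phrase_penalty_weight * num_phrases

# The same output segmented into many short phrases vs. a few long phrases:
wordy = ["the", "cat", "sat", "on", "the", "mat"]
chunky = ["the cat", "sat on the mat"]

# Both outputs contain the same 6 words, so the word penalty is identical;
# with a negative per-phrase weight, the segmentation with fewer, longer
# phrases gets the higher (less negative) score.
print(penalty_score(wordy, word_penalty_weight=-0.1, phrase_penalty_weight=-0.5))   # -3.6
print(penalty_score(chunky, word_penalty_weight=-0.1, phrase_penalty_weight=-0.5))  # -1.6
</syntaxhighlight>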

Distortion Penalty

The distortion penalty is the cost which the MT system pays for shuffling words (or phrases) around. Many definitions are possible; the following one is commonly used: for each phrase, the penalty is the distance (measured in words) between its beginning and the end of the preceding phrase. This distance-based reordering can be replaced by more sophisticated models, such as lexicalized reordering.
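
A short sketch of this distance-based cost, following the usual convention that a phrase starting right where the previous one ended costs nothing; the helper function and its input format are ours:

<syntaxhighlight lang="python">
def distortion_penalty(phrase_spans):
    """Sum of distances (in words) between the start of each phrase and the
    end of the previously translated phrase on the source side.
    phrase_spans: list of (start, end) source positions in translation order."""
    total = 0
    prev_end = -1  # position just before the sentence start
    for start, end in phrase_spans:
        total += abs(start - (prev_end + 1))  # 0 when phrases are translated in order
        prev_end = end
    return total

print(distortion_penalty([(0, 1), (2, 4), (5, 5)]))  # monotone order: 0
print(distortion_penalty([(2, 4), (0, 1), (5, 5)]))  # reordered: 2 + 5 + 3 = 10
</syntaxhighlight>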

Decoding

Phrase-Based Search

We have already described the decoding algorithm for phrase-based MT. Here we discuss how feature values are calculated in the search.

Some of the feature functions that we have described are local, i.e. their value only depends on the current phrase pair. For example, lexical weights, phrase translation probabilities or word penalty are local (word penalty is simply the count of words in the target phrase). As we build the translation, we simply add the scores of these local feature functions to the current translation score.

The most prominent example of a non-local feature is the language model. Its score depends on words generated by the preceding phrases, so each partial hypothesis has to carry the last n-1 output words as context for scoring the next phrase.
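
A simplified sketch of hypothesis expansion illustrating the difference; this is not an actual decoder API, and the lm callable and the hypothesis structure are assumptions made for the example:

<syntaxhighlight lang="python">
import math

def extend_hypothesis(hyp, phrase_pair, weights, lm, lm_order=3):
    """Extend a partial translation with one phrase pair and update its score."""
    source, target, local_features = phrase_pair  # e.g. log phi, lexical weights, ...
    score = hyp["score"]

    # Local features: depend only on the phrase pair itself.
    score += sum(weights[name] * value for name, value in local_features.items())
    score += weights["word_penalty"] * len(target)

    # Non-local feature: the language model crosses phrase boundaries,
    # so it needs the last (n-1) words already generated by the hypothesis.
    context = hyp["output"][-(lm_order - 1):]
    for word in target:
        score += weights["lm"] * math.log(lm(word, tuple(context)))
        context = (context + [word])[-(lm_order - 1):]

    return {"output": hyp["output"] + list(target), "score": score}
</syntaxhighlight>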

Decoding in SCFG

Optimization of Feature Weights

Note that there have even been shared tasks in model optimization: one, by invitation only, in 2011, and one in 2015, the WMT15 Tuning Task.