Scoring and Optimization
Lecture video: https://www.youtube.com/watch?v=rDkZOINdPhw&index=11&list=PLpiLOsNLsfmbeH-b865BwfH15W0sat02V
Features of MT Models
Phrase Translation Probabilities
Phrase translation probabilities are calculated from occurrences of phrase pairs extracted from the parallel training data. Usually, MT systems work with the following two conditional probabilities:
* <math>P(\mathbf{e}|\mathbf{f})</math>
* <math>P(\mathbf{f}|\mathbf{e})</math>

These probabilities are estimated by simply counting how many times (for the first formula) we saw <math>\mathbf{e}</math> aligned to <math>\mathbf{f}</math> and how many times we saw <math>\mathbf{f}</math> in total. For example, based on the following excerpt from (sorted) extracted phrase pairs, we estimate that <math>P(\text{naznačena v programu} \mid \text{estimated in the programme}) = 3/9</math>.
estimated in the programme ||| naznačena v programu
estimated in the programme ||| naznačena v programu
estimated in the programme ||| naznačena v programu
estimated in the programme ||| odhadován v programu
estimated in the programme ||| odhadovány v programu
estimated in the programme ||| odhadovány v programu
estimated in the programme ||| předpokládal program
estimated in the programme ||| v programu uvedeným
estimated in the programme ||| v programu uvedeným
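To make the counting concrete, here is a minimal Python sketch of the relative-frequency estimation, assuming extracted phrase pairs in the "english ||| foreign" format shown above; the file name and function name are illustrative placeholders, not part of any particular toolkit.

<pre>
from collections import Counter

def estimate_phrase_probs(path):
    """Estimate P(f|e) and P(e|f) by relative frequency over extracted phrase pairs."""
    pair_counts = Counter()  # counts of (e, f) phrase pairs
    e_counts = Counter()     # counts of English phrases e
    f_counts = Counter()     # counts of foreign phrases f

    with open(path, encoding="utf-8") as lines:
        for line in lines:
            fields = [side.strip() for side in line.split("|||")]
            if len(fields) < 2:
                continue  # skip blank or malformed lines
            e, f = fields[0], fields[1]
            pair_counts[(e, f)] += 1
            e_counts[e] += 1
            f_counts[f] += 1

    # P(f|e) = count(e, f) / count(e);  P(e|f) = count(e, f) / count(f)
    p_f_given_e = {(e, f): c / e_counts[e] for (e, f), c in pair_counts.items()}
    p_e_given_f = {(e, f): c / f_counts[f] for (e, f), c in pair_counts.items()}
    return p_f_given_e, p_e_given_f
</pre>

Run on the nine pairs listed above, this yields P(naznačena v programu | estimated in the programme) = 3/9, matching the hand count.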
Lexical Weights
Lexical weights are a method for smoothing the phrase table. Infrequent phrases have unreliable probability estimates; for instance, many long phrases occur together only once in the corpus, resulting in <math>P(\mathbf{e}|\mathbf{f}) = P(\mathbf{f}|\mathbf{e}) = 1</math>. Several methods exist for computing lexical weights. The most common one is based on the word alignment inside the phrase pair. The probability of each foreign word is estimated as the average of the lexical translation probabilities over the English words aligned to it. Thus for the phrase pair <math>(\mathbf{e}, \mathbf{f})</math> with the set of alignment points <math>a</math>, the lexical weight is:

<math>\mathrm{lex}(\mathbf{f}|\mathbf{e}, a) = \prod_{i=1}^{|\mathbf{f}|} \frac{1}{|\{j : (i,j) \in a\}|} \sum_{(i,j) \in a} w(f_i|e_j)</math>

where <math>w(f_i|e_j)</math> is the word-level (lexical) translation probability.
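As a companion to the formula, the following Python sketch computes the lexical weight of a single phrase pair, assuming a word translation table <math>w</math> given as a dictionary mapping (foreign word, English word) pairs to probabilities; the function name, table format, and example alignment points are illustrative assumptions, not a specific toolkit's implementation.

<pre>
def lexical_weight(f_words, e_words, alignment, w):
    """lex(f|e, a): for each foreign word, average w(f_i|e_j) over the
    English words aligned to it, then multiply the averages."""
    weight = 1.0
    for i, f_word in enumerate(f_words):
        aligned = [j for (k, j) in alignment if k == i]
        if not aligned:
            # unaligned foreign words are commonly scored with w(f_i|NULL);
            # skipped here to keep the sketch short
            continue
        avg = sum(w.get((f_word, e_words[j]), 0.0) for j in aligned) / len(aligned)
        weight *= avg
    return weight

# Illustrative call for "naznačena v programu" / "estimated in the programme",
# with alignment points given as (foreign index, English index):
# score = lexical_weight(
#     ["naznačena", "v", "programu"],
#     ["estimated", "in", "the", "programme"],
#     {(0, 0), (1, 1), (2, 3)},
#     w_table,  # assumed dict of word translation probabilities
# )
</pre>

Because the word-level probabilities are estimated from the word-aligned training data as a whole, this score remains informative even when the phrase pair itself was extracted only once.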