Sentence Alignment

From MT Talks
Revision as of 14:05, 10 March 2015

Lecture 7: Sentence Alignment
Lecture video: web TODO
Youtube
Exercises: Gale & Church algorithm

{{#ev:youtube|https://www.youtube.com/watch?v=_4lnyoC3mtQ|800|center}}

Sentence alignment is an essential step in building a translation system. Often, we have some parallel data (texts in the source and target language which are translations of each other) but we don't know exactly which sentences correspond to each other. The task here is to find this correspondence (alignment).

Once sentence alignment is available, we can proceed further by finding word or phrase correspondences within the aligned sentences, but that's a topic for another lecture.

Gale & Church algorithm[1] is an algorithm for sentence alignment. It assumes that documents are already aligned on the level of paragraphs. For each paragraph, it finds which sentences correspond to each other.

It is formulated as a dynamic programming algorithm, quite analogous to Levenshtein distance.

Possible Operations

Similarly to string edit distance, a sentence can be:

  • deleted -- a source-side sentence with no corresponding target-side sentence
  • inserted -- a target-side sentence with no corresponding source-side sentence
  • substituted -- a pair of source- and target-side sentences which correspond to each other 1-1 (ideally, the most frequent scenario)

However, Gale & Church define a few more operations:

  • contraction -- two source-side sentences correspond to one target sentence
  • expansion -- one source-side sentence corresponds to two target sentences
  • merge -- two source-side sentences correspond to two target sentences (but there is no 1-1 correspondence)

Distance Function

A distance measure (or a cost function) is required so that we can look for a minimal solution. Gale & Church observe that length differences (measured in characters) between matching sentences tend to be normally distributed. Let <math>c</math> be the average ratio between sentence lengths (for zero mean, <math>c</math> would be 1), <math>s^2</math> be the observed variance, and <math>l_1, l_2</math> the lengths of the source and target sentence, respectively. Then we define:

<math>\delta = \frac{l_2 - l_1 c}{\sqrt{l_1 s^2}}</math>

<math>\delta</math> is a zero-mean, unit-variance, normally distributed random variable. We can use it to define our distance measure as the inverse of the conditional probability of a match given a difference <math>\delta</math>. Following Bayes' rule and dropping the (constant) denominator, we obtain:

<math>P(\text{match} \mid \delta) \propto P(\delta \mid \text{match}) \cdot P(\text{match})</math>

We use <math>-\log P(\text{match} \mid \delta)</math> as the cost, so that lower cost is better and costs of individual operations can simply be summed (summing log-probabilities corresponds to multiplying the probabilities).
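To make this concrete, here is a small illustrative sketch in Python (my own rendering, not the authors' original implementation). It uses the parameter values reported in the paper (<math>c = 1</math>, <math>s^2 = 6.8</math>) and, as an example, the paper's prior of 0.89 for a 1-1 substitution; the function names are mine:

```python
import math

# Parameter values reported by Gale & Church:
# C  -- expected ratio of target length to source length
# S2 -- variance of the normalized length difference
C = 1.0
S2 = 6.8

def delta(l1, l2):
    # Normalized length difference between a source span of l1 characters
    # and a target span of l2 characters (l1 must be positive here).
    return (l2 - l1 * C) / math.sqrt(l1 * S2)

def match_cost(l1, l2, prior):
    # -log P(match | delta), up to the constant dropped by Bayes' rule.
    d = abs(delta(l1, l2))
    # Two-tailed probability of observing at least this length difference
    # under the zero-mean, unit-variance normal distribution.
    p_delta = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(d / math.sqrt(2.0))))
    p_delta = max(p_delta, 1e-12)  # guard against log(0) for huge deltas
    return -math.log(p_delta) - math.log(prior)

# Similar sentence lengths give a low cost; very different lengths a high one.
print(match_cost(100, 105, 0.89))
print(match_cost(100, 200, 0.89))
```

Note that the costs are additive: the cost of a whole alignment is the sum of the costs of its individual operations, which the dynamic program below minimizes.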

Let us define some notation (identical to the original paper):

<math>d(x_1, y_1, 0, 0)</math> -- the cost of substituting <math>x_1</math> with <math>y_1</math>
<math>d(x_1, 0, 0, 0)</math> -- the cost of deleting <math>x_1</math>
<math>d(0, y_1, 0, 0)</math> -- the cost of inserting <math>y_1</math>
<math>d(x_1, y_1, x_2, 0)</math> -- the cost of contracting <math>x_1</math> and <math>x_2</math> to <math>y_1</math>
<math>d(x_1, y_1, 0, y_2)</math> -- the cost of expanding <math>x_1</math> to <math>y_1</math> and <math>y_2</math>
<math>d(x_1, y_1, x_2, y_2)</math> -- the cost of merging <math>x_1</math> and <math>x_2</math> with <math>y_1</math> and <math>y_2</math>

Then, the algorithm can be defined very simply using the following recursive formula. Let source-side sentences (within a paragraph) be <math>x_1, \ldots, x_n</math> and let target-side sentences be <math>y_1, \ldots, y_m</math>:

<math>D(i, j) = \min \begin{cases} D(i-1, j-1) + d(x_i, y_j, 0, 0) \\ D(i-1, j) + d(x_i, 0, 0, 0) \\ D(i, j-1) + d(0, y_j, 0, 0) \\ D(i-2, j-1) + d(x_{i-1}, y_j, x_i, 0) \\ D(i-1, j-2) + d(x_i, y_{j-1}, 0, y_j) \\ D(i-2, j-2) + d(x_{i-1}, y_{j-1}, x_i, y_j) \end{cases}</math>

with <math>D(0, 0) = 0</math>.
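The recursion can be sketched in Python as follows (a minimal sketch of the dynamic program, not the paper's code; it works directly on sentence lengths and accepts any cost function with the four-argument signature of <math>d</math> above, where 0 marks an absent sentence):

```python
def align(src_lens, tgt_lens, d):
    """Gale & Church dynamic program over sentence lengths.

    src_lens, tgt_lens -- character lengths of the source/target sentences
    d(x1, y1, x2, y2)  -- cost of one operation (0 marks an absent sentence)
    Returns the DP table; D[n][m] is the cost of the optimal alignment."""
    n, m = len(src_lens), len(tgt_lens)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue
            cands = []
            if i >= 1 and j >= 1:  # substitution (1-1)
                cands.append(D[i-1][j-1] + d(src_lens[i-1], tgt_lens[j-1], 0, 0))
            if i >= 1:             # deletion (1-0)
                cands.append(D[i-1][j] + d(src_lens[i-1], 0, 0, 0))
            if j >= 1:             # insertion (0-1)
                cands.append(D[i][j-1] + d(0, tgt_lens[j-1], 0, 0))
            if i >= 2 and j >= 1:  # contraction (2-1)
                cands.append(D[i-2][j-1] + d(src_lens[i-2], tgt_lens[j-1], src_lens[i-1], 0))
            if i >= 1 and j >= 2:  # expansion (1-2)
                cands.append(D[i-1][j-2] + d(src_lens[i-1], tgt_lens[j-2], 0, tgt_lens[j-1]))
            if i >= 2 and j >= 2:  # merge (2-2)
                cands.append(D[i-2][j-2] + d(src_lens[i-2], tgt_lens[j-2], src_lens[i-1], tgt_lens[j-1]))
            D[i][j] = min(cands)
    return D
```

Recovering the actual alignment is then a matter of remembering which candidate was chosen at each cell and backtracking from <math>D(n, m)</math>, exactly as in Levenshtein distance.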

Other Algorithms & Tools

Exercises

References

1. Gale, W. A. and Church, K. W. (1993). A Program for Aligning Sentences in Bilingual Corpora. Computational Linguistics, 19(1), 75–102.