Sentence Alignment

From MT Talks
Lecture 7: Sentence Alignment
Lecture video: web TODO
Youtube
Exercises: Gale & Church algorithm

{{#ev:youtube|https://www.youtube.com/watch?v=_4lnyoC3mtQ|800|center}}

Sentence alignment is an essential step in building a translation system. Often, we have some parallel data (texts in the source and target language which are translations of each other) but we don't know exactly which sentences correspond to each other. The task here is to find this correspondence (alignment).

Once sentence alignment is available, we can proceed further by finding word or phrase correspondences within the aligned sentences, but that's a topic for another lecture.

The Gale & Church algorithm[1] is a classic algorithm for sentence alignment. It assumes that documents are already aligned at the level of paragraphs. Within each paragraph, it finds which sentences correspond to each other.

It is formulated as a dynamic programming algorithm, quite analogous to Levenshtein distance.

Possible Operations

Similarly to string edit distance, a sentence can be:

  • deleted -- a source-side sentence with no corresponding target-side sentence
  • inserted -- a target-side sentence with no corresponding source-side sentence
  • substituted -- a pair of source- and target-side sentences which correspond to each other 1-1 (ideally, the most frequent scenario)

However, Gale & Church define a few more operations:

  • contraction -- two source-side sentences correspond to one target sentence
  • expansion -- one source-side sentence corresponds to two target sentences
  • merge -- two source-side sentences correspond to two target sentences (but there is no 1-1 correspondence)

Distance Function

A distance measure (or a cost function) is required so that we can look for a minimal solution. Gale & Church observe that length differences (measured in characters) between matching sentences tend to be normally distributed. Let $c$ be the average ratio between target and source sentence lengths (if translations preserved length on average, $c$ would be 1, giving the difference zero mean), let $s^2$ be the observed variance, and let $l_1$ and $l_2$ be the lengths of the source and target sentence, respectively. Then we define:

$$\delta = \frac{l_2 - l_1 c}{\sqrt{l_1 s^2}}$$
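This statistic is easy to compute directly. A minimal sketch, assuming parameter values close to those reported by Gale & Church ($c \approx 1$, $s^2 \approx 6.8$); the function name is mine:

```python
import math

def delta(len_source, len_target, c=1.0, s2=6.8):
    """Length-difference statistic; approximately N(0, 1) for matching pairs.

    c is the expected target/source length ratio, s2 the variance
    per source character (illustrative values from the paper).
    """
    return (len_target - len_source * c) / math.sqrt(len_source * s2)
```

For a 100-character source sentence paired with a 110-character target, delta comes out around 0.38, well within the range expected for a true match.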

$\delta$ is then (approximately) a zero-mean, unit-variance, normally distributed random variable. We can use it to define our distance measure as the negative logarithm of the conditional probability of a match given a difference $\delta$. Following Bayes' rule and dropping the (constant) denominator, we obtain:

$$d = -\log P(\text{match} \mid \delta) \propto -\log \big( P(\delta \mid \text{match}) \, P(\text{match}) \big)$$

We use the negative logarithm so that lower cost is better and so that we can sum the values in the algorithm instead of multiplying probabilities.

Gale & Church estimate the prior $P(\text{match})$ empirically from the data; see Table 5 in the paper.

The likelihood can be formulated as:

$$P(\delta \mid \text{match}) = 2 \, \big( 1 - \Phi(|\delta|) \big)$$

where $\Phi$ is the cumulative distribution function of the zero-mean, unit-variance normal distribution.

Algorithm Formulation

Let us define some notation (identical to the original paper):

$d(s_i, t_j; 0, 0)$ -- the cost of substituting $s_i$ with $t_j$
$d(s_i, 0; 0, 0)$ -- the cost of deleting $s_i$
$d(0, t_j; 0, 0)$ -- the cost of inserting $t_j$
$d(s_{i-1}, t_j; s_i, 0)$ -- the cost of contracting $s_{i-1}$ and $s_i$ to $t_j$
$d(s_i, t_{j-1}; 0, t_j)$ -- the cost of expanding $s_i$ to $t_{j-1}$ and $t_j$
$d(s_{i-1}, t_{j-1}; s_i, t_j)$ -- the cost of merging $s_{i-1}, s_i$ with $t_{j-1}, t_j$

Then, the algorithm can be defined very simply using the following recursive formula. Let the source-side sentences (within a paragraph) be $s_1, \dots, s_m$ and the target-side sentences $t_1, \dots, t_n$:

$$D(i, j) = \min \begin{cases}
D(i, j-1) + d(0, t_j; 0, 0) \\
D(i-1, j) + d(s_i, 0; 0, 0) \\
D(i-1, j-1) + d(s_i, t_j; 0, 0) \\
D(i-1, j-2) + d(s_i, t_{j-1}; 0, t_j) \\
D(i-2, j-1) + d(s_{i-1}, t_j; s_i, 0) \\
D(i-2, j-2) + d(s_{i-1}, t_{j-1}; s_i, t_j)
\end{cases}$$

with $D(0, 0) = 0$ (cases with negative indices are skipped).

Again, similarly to string edit distance, the minimum total distance can be read off the table cell $D(m, n)$, and backtracking can be used to recover the actual alignment.
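The whole recursion fits in a short dynamic program. A self-contained sketch, assuming character lengths as in the paper, priors from Table 5, and illustrative parameters $c = 1$, $s^2 = 6.8$; all function names are mine:

```python
import math

# Priors for each alignment category (Table 5 of the paper); the keys
# (di, dj) are the numbers of source / target sentences consumed.
PRIOR = {(1, 1): 0.89, (1, 0): 0.0099, (0, 1): 0.0099,
         (2, 1): 0.089, (1, 2): 0.089, (2, 2): 0.011}

def norm_cdf(z):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bead_cost(len_src, len_tgt, cat, c=1.0, s2=6.8):
    """-log likelihood - log prior for one candidate operation."""
    d = (len_tgt - len_src * c) / math.sqrt(max(len_src, 1) * s2)
    likelihood = max(2.0 * (1.0 - norm_cdf(abs(d))), 1e-300)
    return -math.log(likelihood) - math.log(PRIOR[cat])

def align(src, tgt):
    """Align two lists of sentences; returns (src_indices, tgt_indices) pairs."""
    m, n = len(src), len(tgt)
    ls, lt = [len(s) for s in src], [len(t) for t in tgt]
    INF = float("inf")
    D = [[INF] * (n + 1) for _ in range(m + 1)]      # D[i][j]: best cost so far
    back = [[None] * (n + 1) for _ in range(m + 1)]  # operation used to reach (i, j)
    D[0][0] = 0.0
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0 and j == 0:
                continue
            for di, dj in PRIOR:                     # try every operation
                if di > i or dj > j or D[i - di][j - dj] == INF:
                    continue
                cand = D[i - di][j - dj] + bead_cost(
                    sum(ls[i - di:i]), sum(lt[j - dj:j]), (di, dj))
                if cand < D[i][j]:
                    D[i][j], back[i][j] = cand, (di, dj)
    beads = []                                       # backtrack from D[m][n]
    i, j = m, n
    while (i, j) != (0, 0):
        di, dj = back[i][j]
        beads.append((list(range(i - di, i)), list(range(j - dj, j))))
        i, j = i - di, j - dj
    return beads[::-1]
```

On a toy paragraph, `align(["Hello there.", "How are you today?"], ["Bonjour.", "Comment allez-vous aujourd'hui ?"])` yields two 1-1 beads, while a target side with the two sentences joined into one produces a single 2-1 contraction.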

Other Algorithms & Tools

Exercises

References

[1] William A. Gale and Kenneth W. Church. 1993. A Program for Aligning Sentences in Bilingual Corpora. Computational Linguistics, 19(1):75-102.