MT Talks
Revision as of 08:38, 11 February 2015
MT Talks is a series of mini-lectures on machine translation.
Our goal is to hit just the right level of detail and technicality: the talks should be interesting and accessible to people who are not yet familiar with the field, while mixing in enough new observations and insights that even old hands have a reason to watch.
MT Talks and the expanded notes on this wiki will never be the ultimate resource for MT, but we would be very happy to serve as a well-commented directory of good pointers.
By the way, this is indeed a Wiki, so your contributions are very welcome! Please register and feel free to add comments, corrections or links to useful resources.
Our Talks
01 Intro: Why MT is difficult; approaches to MT.
02 MT that Deceives: Serious translation errors even for short and simple inputs.
03 Pre-processing: Normalization and other technical tricks bound to help your MT system.
04 MT Evaluation in General: Techniques of judging MT quality, dimensions of translation quality, number of possible translations.
05 Automatic MT Evaluation: Two common automatic evaluation methods, PER and BLEU.
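To give a flavour of the two metrics covered in talk 05, here is a minimal sketch of PER (position-independent error rate) and sentence-level BLEU. This is an illustration, not a reference implementation: the BLEU here is simplified (single reference, no smoothing), and real toolkits differ in details.

```python
import math
from collections import Counter

def per(hypothesis, reference):
    """Position-independent error rate: compare hypothesis and reference
    as bags of words, ignoring word order. Assumes a non-empty reference."""
    hyp, ref = hypothesis.split(), reference.split()
    # Words matched regardless of position (multiset intersection).
    matched = sum((Counter(hyp) & Counter(ref)).values())
    return 1 - (matched - max(0, len(hyp) - len(ref))) / len(ref)

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU against a single reference:
    geometric mean of clipped n-gram precisions times a brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = ngrams(hyp, n)
        # Clip each n-gram's count by its count in the reference.
        clipped = sum((hyp_ngrams & ngrams(ref, n)).values())
        total = sum(hyp_ngrams.values())
        precisions.append(clipped / total if total else 0.0)
    if min(precisions) == 0:
        return 0.0  # unsmoothed BLEU: any zero precision collapses the score
    # Brevity penalty punishes hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

Note how PER is blind to word order (a scrambled hypothesis scores perfectly), which is exactly the weakness BLEU's higher-order n-grams address.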
CodEx
- Log in to CodEx and solve programming exercises that complement our talks.
- Brief description of CodEx: how to get an account and submit a solution.
Other Videolectures on MT
- Approaches to Machine Translation: Rule-Based, Statistical, Hybrid (an online course on MT by UPC Barcelona)
- Natural Language Processing at Coursera by Michael Collins, includes lectures on word-based and phrase-based models.
- Further notes
Acknowledgement
The work on this project has been supported by the grant FP7-ICT-2011-7-288487 (MosesCore).