CITI stopped operations in 2014 to co-launch NOVA LINCS. This site has not been updated since 2013.
Evaluation of Machine Translation Systems and of Parallel Text Alignment
{ Tue, 22 Nov 2005, 14h00 }

By: Víctor Bilbao

In Machine Translation (MT), correct evaluation of systems is very important, since it makes it possible to compare the performance of different algorithms and to assess changes made to a system, in order to correct errors and obtain better translations. Human evaluation is impractical in many cases, since it takes weeks or months (when results should be available within days). It is also costly, due to the need for personnel fluent in the languages being translated. Because of these problems, interest in automatic evaluation has grown in recent years.

In this talk, several automatic evaluation methods currently in use will be reviewed. Most of them are relatively simple metrics that are language independent and give results resembling those obtained by human evaluators. Special attention will be devoted to several well-known evaluation metrics, such as BLEU (Bilingual Evaluation Understudy), NIST (from the National Institute of Standards and Technology), RED (Ranker based on Edit Distances) and the ROUGE measures. I will then compare these methods and comment on their intrinsic problems and possible future developments.
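To make the flavour of these metrics concrete, the core of BLEU is a clipped n-gram precision combined with a brevity penalty. The sketch below is an illustrative single-reference, sentence-level implementation (not the talk's own code, and it omits the smoothing used in practice):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU against a single reference (no smoothing)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clipped counts: each candidate n-gram is credited at most as
        # many times as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty: candidates shorter than the reference are penalised,
    # since precision alone would favour very short outputs.
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect match scores 1.0, and any candidate sharing no n-grams of some order with the reference scores 0.0; real systems fall in between, which is what makes the score usable for ranking.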

The last part of the talk will be devoted to the evaluation of alignment systems, and to how the techniques used there are closely related to MT evaluation methods.

