TC-STAR 2006 Evaluation Package - SLT Chinese-to-English

Full Official Name: TC-STAR 2006 Evaluation Package - SLT Chinese-to-English
Submission date: Jan. 24, 2014, 4:31 p.m.

TC-STAR is a European integrated project focusing on Speech-to-Speech Translation (SST). To encourage significant breakthroughs in all SST technologies, annual open competitive evaluations are organized. Automatic Speech Recognition (ASR), Spoken Language Translation (SLT) and Text-To-Speech (TTS) are evaluated both independently and within an end-to-end system. The second TC-STAR evaluation campaign took place in March 2006. Three core technologies were evaluated during the campaign:
• Automatic Speech Recognition (ASR),
• Spoken Language Translation (SLT),
• Text to Speech (TTS).

Each evaluation package includes the resources, protocols, scoring tools, results of the official campaign, etc., that were used or produced during the second evaluation campaign. The aim of these evaluation packages is to enable external players to evaluate their own systems and compare their results with those obtained during the campaign itself. The speech databases produced within the TC-STAR project were validated by SPEX, in the Netherlands, to assess their compliance with the TC-STAR format and content specifications.

This package includes the material used for the TC-STAR 2006 Spoken Language Translation (SLT) second evaluation campaign for Chinese-to-English translation. Similar packages are available for ASR in English (ELRA-E0011), Spanish (ELRA-E0012) and Mandarin Chinese (ELRA-E0013), and for SLT in the two other directions, English-to-Spanish (ELRA-E0014) and Spanish-to-English (ELRA-E0015).

To allow the components to be chained, the ASR, SLT and TTS evaluation tasks were designed to use common sets of raw data and conditions. Three evaluation tasks, common to ASR, SLT and TTS, were selected: the EPPS (European Parliament Plenary Sessions) task, the CORTES (Spanish Parliament Sessions) task and the VOA (Voice of America) task. The CORTES data were used in addition to the EPPS data to evaluate ASR in Spanish and SLT from Spanish into English. This package was used within the VOA task and consists of 2 data sets:
- Development data set: built upon the ASR development data set, in order to enable end-to-end evaluation. Subsets of 25,000 words were selected from the VOA verbatim transcriptions. The source texts were then translated into English by two independent translation agencies. All source text sets and reference translations were formatted using the same SGML DTD that has been used for the NIST Machine Translation evaluations (an illustrative parsing sketch follows this description).
- Test data set: the same procedure was followed as for the development set, i.e. subsets of 25,000 words were selected from the manual transcriptions of the test data, and the source data were then translated into English by two independent agencies.
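For illustration, the minimal Python sketch below shows how segments could be read from a source or reference file laid out in the NIST MT evaluation SGML style mentioned above. The element and attribute names assumed here (srcset/refset containing doc elements with a docid attribute and seg elements with an id attribute) follow the typical NIST layout and are not taken from this package's actual DTD; the file name in the usage note is likewise hypothetical.

    # Minimal sketch for reading a NIST-MT-style SGML source/reference file.
    # Assumptions (not verified against the package's DTD): segments appear as
    # <srcset>/<refset> -> <doc docid="..."> -> <seg id="...">text</seg>.
    from html.parser import HTMLParser


    class SegCollector(HTMLParser):
        """Collect (docid, segid, text) triples from <seg> elements."""

        def __init__(self):
            super().__init__()
            self.segments = []   # list of (docid, segid, text)
            self._docid = None
            self._segid = None
            self._buffer = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "doc":
                self._docid = attrs.get("docid")
            elif tag == "seg":
                self._segid = attrs.get("id")
                self._buffer = []

        def handle_data(self, data):
            if self._segid is not None:
                self._buffer.append(data)

        def handle_endtag(self, tag):
            if tag == "seg" and self._segid is not None:
                text = "".join(self._buffer).strip()
                self.segments.append((self._docid, self._segid, text))
                self._segid = None


    def read_sgml(path):
        parser = SegCollector()
        with open(path, encoding="utf-8") as handle:
            parser.feed(handle.read())
        return parser.segments


    # Example use (hypothetical file name):
    # for docid, segid, text in read_sgml("slt_zh2en_dev_src.sgm"):
    #     print(docid, segid, text)

A lenient HTML/SGML parser is used rather than a strict XML parser because NIST-style SGML evaluation files are not always well-formed XML.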
