Resource: RATS Speaker Identification
|Reference||RATS Speaker Identification|
|Date of Submission||Sept. 15, 2021, 8:05 p.m.|
|Resource Type||Primary Text|
|Media Type||Text, Audio|
|Language||Arabic, Dari, Persian, Pushto, Urdu|
|Format/MIME Type||audio/x-flac, text/plain|
|Access Medium||Web Download|
RATS Speaker Identification was developed by the Linguistic Data Consortium (LDC) and comprises approximately 1,900 hours of Levantine Arabic, Farsi, Dari, Pashto and Urdu conversational telephone speech with annotations of speech segments. The audio was retransmitted over eight channels, yielding approximately 17,000 hours of audio in total. The corpus was created to provide training and development sets for the Speaker Identification (SID) task in the DARPA RATS (Robust Automatic Transcription of Speech) program.
The goal of the RATS program was to develop human language technology systems capable of performing speech detection, language identification, speaker identification and keyword spotting on the severely degraded audio signals that are typical of various radio communication channels, especially those employing various types of handheld portable transceiver systems. To support that goal, LDC assembled a system for the transmission, reception and digital capture of audio data that allowed a single source audio signal to be distributed and recorded over eight distinct transceiver configurations simultaneously. Those configurations included three frequency bands (high, very high and ultra high) variously combined with amplitude modulation, frequency hopping spread spectrum, narrow-band frequency modulation, single-side-band or wide-band frequency modulation. Annotations on the clear source audio signal, e.g., time boundaries for the duration of speech activity, were projected onto the corresponding eight channels recorded from the radio receivers.
The source audio consists of conversational telephone speech recordings collected by LDC specifically for the RATS program from Levantine Arabic, Pashto, Urdu, Farsi and Dari native speakers. Annotations on the audio files include start time, end time, speech activity detection (SAD) label, SAD provenance, speaker ID, speaker ID provenance, language ID, and language ID provenance.
The data is divided into training and development sets, each containing its own audio and annotation subdirectories.
All audio files are presented as single-channel, 16-bit PCM, 16000 samples per second; lossless FLAC compression is used on all files. When uncompressed, the files have typical "MS-WAV" (RIFF) file headers.
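Once a FLAC file has been decompressed to WAV (for example, with the `flac` command-line tool), the stated format can be verified with Python's standard `wave` module. The sketch below demonstrates the check on a synthetic in-memory file of silence rather than an actual corpus file:

```python
import io
import wave

def is_rats_format(wav_file):
    """Check that a WAV stream matches the corpus audio format:
    single-channel, 16-bit PCM, 16,000 samples per second."""
    with wave.open(wav_file, "rb") as w:
        return (w.getnchannels() == 1
                and w.getsampwidth() == 2      # 16-bit samples = 2 bytes
                and w.getframerate() == 16000)

# Demonstrate on a synthetic one-second file of silence;
# a real file would come from decompressing the distributed FLAC audio.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(16000)
    w.writeframes(b"\x00\x00" * 16000)
buf.seek(0)
print(is_rats_format(buf))  # True
```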
Annotation files are presented as tab-delimited, UTF-8 encoded, plain text.
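A minimal sketch of reading one such tab-delimited annotation line in Python. The column order is assumed from the field list above, and the sample line is invented for illustration; consult the corpus documentation for the exact file layout:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    start: float              # segment start time in seconds
    end: float                # segment end time in seconds
    sad_label: str            # speech activity detection label
    sad_provenance: str
    speaker_id: str
    speaker_provenance: str
    language_id: str
    language_provenance: str

def parse_line(line: str) -> Annotation:
    # Fields are tab-delimited; order assumed from the field list above.
    f = line.rstrip("\n").split("\t")
    return Annotation(float(f[0]), float(f[1]), f[2], f[3],
                      f[4], f[5], f[6], f[7])

# Invented example line, for illustration only:
sample = "12.34\t17.89\tS\tmanual\tspk_1001\tmanual\turd\tmanual"
ann = parse_line(sample)
print(ann.speaker_id, round(ann.end - ann.start, 2))
```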
This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. D10PC20016. The content does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
|Creator||Xiaoyi Ma, David Graff, Stephanie Strassel, Kevin Walker, Karen Jones|
|Distributor||Linguistic Data Consortium|
|Rights Holder||Portions © 2014, 2015, 2017, 2018, 2021 Trustees of the University of Pennsylvania|