First DIHARD Challenge Evaluation - Nine Sources

Full Official Name: First DIHARD Challenge Evaluation - Nine Sources
Submission date: July 18, 2019, 7:16 p.m.

*Introduction*

First DIHARD Challenge Evaluation - Nine Sources was developed by the Linguistic Data Consortium (LDC) and contains approximately 18 hours of English and Chinese speech data along with corresponding annotations used in support of the First DIHARD Challenge. The First DIHARD Challenge was an attempt to reinvigorate work on diarization through a shared task focused on "hard" diarization: speech diarization for challenging corpora where existing state-of-the-art systems were expected to fare poorly. As such, it included speech from a wide sampling of domains representing diversity in number of speakers, speaker demographics, interaction style, recording quality, and environmental conditions, including, but not limited to: clinical interviews, extended child language acquisition recordings, YouTube recordings, and conversations collected in restaurants.

*Data*

This release, when combined with First DIHARD Challenge Evaluation - SEEDLingS (LDC2019S13), contains the evaluation set audio data and annotation as well as the official scoring tool. The development data for the First DIHARD Challenge is also available from LDC as First DIHARD Challenge Development - Eight Sources (LDC2019S09) and First DIHARD Challenge Development - SEEDLingS (LDC2019S10).

The source data was drawn from the following (all sources are in English unless otherwise indicated):

* Autism Diagnostic Observation Schedule (ADOS) interviews
* Conversations in Restaurants
* DCIEM/HCRC map task (LDC96S38)
* Audiobook recordings from LibriVox
* Meeting speech collected by LDC in 2001 for the ROAR project (see, e.g., ISL Meeting Speech Part 1 (LDC2004S05))
* 2001 U.S. Supreme Court oral arguments
* Mixer 6 Speech (LDC2013S02)
* Chinese video collected by LDC as part of the Video Annotation for Speech Technologies (VAST) project
* YouthPoint radio interviews

All audio is provided as 16 kHz, mono-channel FLAC files. The diarization for each recording is stored as a NIST Rich Transcription Time Marked (RTTM) file; RTTM files are space-separated text files containing one speaker turn per line. Segmentation files are stored as HTK label files, each containing one speech segment per line. Both annotation file types are encoded as UTF-8. More information about the file formats is in the included documentation; a minimal RTTM parsing sketch follows below.
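To make the RTTM annotation format concrete, the following is a minimal sketch of reading speaker turns from one of these files. It assumes the standard ten-field NIST RTTM SPEAKER line (type, file id, channel, onset, duration, orthography, subtype, speaker id, confidence, lookahead); the function name and the file name in the usage comment are illustrative, not part of the release.

```python
from collections import namedtuple

Turn = namedtuple("Turn", ["file_id", "onset", "duration", "speaker"])

def load_rttm(path):
    """Return a list of speaker turns from an RTTM file."""
    turns = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            fields = line.split()
            if not fields or fields[0] != "SPEAKER":
                continue  # keep only speaker-turn records
            # Assumed field order: type, file-id, channel, onset, duration,
            # orthography, subtype, speaker-id, confidence, lookahead
            turns.append(Turn(file_id=fields[1],
                              onset=float(fields[3]),
                              duration=float(fields[4]),
                              speaker=fields[7]))
    return turns

# Hypothetical usage:
# turns = load_rttm("DH_EVAL_0001.rttm")
# print(len(turns), "turns,", len({t.speaker for t in turns}), "speakers")
```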
