Making accurate verbatim transcriptions is very time-consuming, and for the extemporaneous speech of native and non-native speakers the task is extremely difficult. While previous research has focused on evaluating phonemic transcriptions, the goal of the authors' research is the automatic detection of transcription errors at the orthographic level, since such errors degrade the quality of every subsequent annotation level. Because a bad transcription is hard to characterize statistically, they use a Novelty Detection approach: they model accurate transcriptions only and reject any input that does not fit these models of good transcriptions. A hand-segmented corpus of spontaneous speech is used to build the models of correct transcriptions.
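The core idea, modeling only the "good" class and rejecting anything that falls outside it, can be sketched with an off-the-shelf one-class classifier. The feature representation (character n-gram TF-IDF), the toy transcriptions, and all parameter values below are illustrative assumptions, not the authors' actual setup:

```python
# Minimal novelty-detection sketch: fit a model on accurate transcriptions
# only, then flag inputs that do not fit that model.
# Features, data, and hyperparameters are assumptions for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import OneClassSVM

# Toy "accurate" transcriptions standing in for the hand-segmented corpus.
good_transcriptions = [
    "well i think we should uh meet on tuesday",
    "yeah that sounds fine to me",
    "so um the main point is the schedule",
    "okay let us talk about the budget next",
]

# Character n-grams are one plausible choice for catching spelling-level
# (orthographic) errors; this is an assumed representation.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X_good = vectorizer.fit_transform(good_transcriptions)

# One-class SVM learns the support of the good data only; inputs outside
# that support are predicted as -1 (novelty).
detector = OneClassSVM(kernel="rbf", gamma="auto", nu=0.1)
detector.fit(X_good)

def is_suspicious(transcription: str) -> bool:
    """Return True if the transcription does not fit the good-data model."""
    x = vectorizer.transform([transcription])
    return bool(detector.predict(x)[0] == -1)
```

The one-class formulation matches the paper's motivation: there is no need to collect or statistically characterize bad transcriptions, only to model the distribution of correct ones.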