Exploring Self-supervised Pre-trained ASR Models For Dysarthric and Elderly Speech Recognition. (arXiv:2302.14564v2 [cs.SD] UPDATED)


By: Shujie Hu, Xurong Xie, Zengrui Jin, Mengzhe Geng, Yi Wang, Mingyu Cui, Jiajun Deng, Xunying Liu, Helen Meng. Posted: June 23, 2023

Automatic recognition of disordered and elderly speech remains highly challenging to date, due to the difficulty of collecting such data in large quantities. This paper explores a series of approaches for integrating domain-adapted, self-supervised learning (SSL) pre-trained models into TDNN and Conformer ASR systems for dysarthric and elderly speech recognition: a) input feature fusion between standard acoustic frontends and domain-adapted wav2vec2.0 speech representations; b) frame-level joint decoding of TDNN systems trained separately on standard acoustic features alone and with additional wav2vec2.0 features; and c) multi-pass decoding in which TDNN/Conformer system outputs are rescored using domain-adapted wav2vec2.0 models. In addition, domain-adapted wav2vec2.0 representations are used in acoustic-to-articulatory (A2A) inversion to construct multi-modal dysarthric and elderly speech recognition systems. Experiments conducted on the UASpeech dysarthric and DementiaBank Pitt elderly speech corpora suggest that TDNN and Conformer ASR systems integrating domain-adapted wav2vec2.0 models consistently outperform standalone wav2vec2.0 models, with statistically significant WER reductions of 8.22% and 3.43% absolute (26.71% and 15.88% relative) on the two tasks, respectively. The lowest published WERs of 22.56% (52.53% on very low intelligibility, 39.09% on unseen words) and 18.17% are obtained on the UASpeech test set of 16 dysarthric speakers and the DementiaBank Pitt test set, respectively.
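
To make approach a) concrete, below is a minimal sketch of input feature fusion, assuming a public HuggingFace wav2vec2.0 checkpoint, an 80-dim log-mel filterbank frontend, and simple 2x upsampling to align the 20 ms wav2vec2.0 frame rate with the 10 ms frontend; the checkpoint name, file path, and alignment choice are illustrative assumptions, not the authors' exact configuration (the paper adapts the SSL model to dysarthric/elderly data first).

    import torch
    import torchaudio
    from transformers import Wav2Vec2Model

    # Hypothetical 16 kHz mono utterance.
    wav, sr = torchaudio.load("utt.wav")

    # Standard acoustic frontend: 80-dim log-mel filterbanks, 10 ms shift.
    fbank = torchaudio.compliance.kaldi.fbank(
        wav, num_mel_bins=80, frame_shift=10.0)          # (T_fb, 80)

    # Placeholder wav2vec2.0 encoder; stands in for a domain-adapted model.
    w2v = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h").eval()
    with torch.no_grad():
        ssl = w2v(wav).last_hidden_state.squeeze(0)      # (T_ssl, 768), 20 ms stride

    # Upsample 2x so SSL frames match the 10 ms fbank rate, then concatenate.
    ssl_up = ssl.repeat_interleave(2, dim=0)
    T = min(fbank.size(0), ssl_up.size(0))
    fused = torch.cat([fbank[:T], ssl_up[:T]], dim=-1)   # (T, 80 + 768)

    # `fused` then feeds the TDNN or Conformer acoustic model.
    print(fused.shape)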
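Approaches b) and c) both combine scores from multiple systems. A hedged sketch of the per-frame log-linear combination behind frame-level joint decoding follows; the score shapes and the interpolation weight are assumptions, and approach c) applies an analogous combination to first-pass hypothesis scores during rescoring.

    import torch

    # Dummy per-frame log-probabilities over 42 output units from two
    # separately trained TDNN systems: A uses fbank features alone,
    # B additionally uses wav2vec2.0 features. Shapes are illustrative.
    log_probs_a = torch.randn(100, 42).log_softmax(dim=-1)
    log_probs_b = torch.randn(100, 42).log_softmax(dim=-1)

    lam = 0.5  # interpolation weight; tuned on held-out data in practice
    joint_scores = lam * log_probs_a + (1 - lam) * log_probs_b

    # `joint_scores` replaces the single-system scores in Viterbi/lattice
    # decoding; in multi-pass decoding (approach c), first-pass outputs are
    # instead rescored with a domain-adapted wav2vec2.0 model.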


DoctorMorDi
Moderator and Editor