Abstract
Audio-visual speech recognition (AVSR) research has recently achieved great success by improving the noise robustness of audio-only automatic speech recognition (ASR) with noise-invariant visual information. However, most existing AVSR approaches simply fuse the audio and visual features by concatenation, without explicit interactions to capture the deep correlations between them, which results in sub-optimal multimodal representations for the downstream speech recognition task. In this paper, we propose a cross-modal global interaction and local alignment (GILA) approach for AVSR, which captures deep audio-visual (A-V) correlations from both global and local perspectives. Specifically, we design a global interaction model to capture the A-V complementary relationship at the modality level, as well as a local alignment approach to model A-V temporal consistency at the frame level. Such a holistic view of cross-modal correlations enables better multimodal representations for AVSR. Experiments on the public benchmarks LRS3 and LRS2 show that our GILA outperforms the supervised-learning state of the art. Code is at https://github.com/YUCHEN005/GILA.
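To make the two ideas in the abstract concrete, below is a minimal, hypothetical sketch of (a) modality-level fusion via cross-attention rather than concatenation and (b) a frame-level temporal-consistency loss. All class and function names, tensor shapes, and hyperparameters are illustrative assumptions, not the authors' implementation; see the linked repository for the actual GILA code.

```python
# Illustrative sketch only: module/loss names and shapes are hypothetical,
# not taken from the GILA repository.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalGlobalInteraction(nn.Module):
    """Modality-level fusion: each stream attends to the other instead of
    being concatenated directly."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.a2v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.v2a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, audio: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        # audio, video: (batch, frames, dim), assumed temporally synchronised
        a_ctx, _ = self.a2v(audio, video, video)   # audio queries visual context
        v_ctx, _ = self.v2a(video, audio, audio)   # video queries acoustic context
        return self.fuse(torch.cat([a_ctx, v_ctx], dim=-1))

def local_alignment_loss(audio: torch.Tensor, video: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    """Frame-level temporal consistency (contrastive formulation): the audio
    feature at time t should match the visual feature at the same t more than
    at any other time step."""
    a = F.normalize(audio, dim=-1)                              # (B, T, D)
    v = F.normalize(video, dim=-1)                              # (B, T, D)
    logits = torch.matmul(a, v.transpose(1, 2)) / temperature   # (B, T, T)
    targets = torch.arange(a.size(1), device=a.device).expand(a.size(0), -1)
    return F.cross_entropy(logits.reshape(-1, a.size(1)), targets.reshape(-1))

if __name__ == "__main__":
    audio = torch.randn(2, 50, 256)   # toy batch of paired frame features
    video = torch.randn(2, 50, 256)
    fused = CrossModalGlobalInteraction(256)(audio, video)
    loss = local_alignment_loss(audio, video)
    print(fused.shape, loss.item())
```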
Original language | English |
---|---|
Title of host publication | Proceedings of the 32nd International Joint Conference on Artificial Intelligence, IJCAI 2023 |
Editors | Edith Elkind |
Publisher | International Joint Conferences on Artificial Intelligence Organization |
Pages | 5076-5084 |
Number of pages | 9 |
ISBN (Electronic) | 9781956792034 |
DOIs | |
Publication status | Published - 25 Aug 2023 |
Event | 32nd International Joint Conference on Artificial Intelligence, IJCAI 2023 - Macao, China, 19 Aug 2023 → 25 Aug 2023 |
Conference
Conference | 32nd International Joint Conference on Artificial Intelligence, IJCAI 2023 |
---|---|
Country/Territory | China |
City | Macao |
Period | 19/08/23 → 25/08/23 |
Bibliographical note
Funding Information: This research is supported by ST Engineering Mission Software & Services Pte. Ltd under a collaboration programme (Research Collaboration No.: REQ0149132). The computational work for this article was partially performed on resources of the National Supercomputing Centre, Singapore (https://www.nscc.sg).