👨‍🎓 Biography

I am Dr. Zhe Li (李哲), currently a Postdoctoral Fellow at The University of Hong Kong (HKU).
My research focuses on speech large language models (LLMs) 🧠 and robust speaker representation learning 🔊, with a broader interest in multimodal AI for healthcare 🩺.


💼 Research Experience


🔬 Research Interests

  • 🔊 Speaker Representation Learning – disentanglement, cross-lingual robustness, and PEFT strategies
  • 🩺 Multimodal AI for Healthcare – speech, text, and imaging fusion for disease prediction
  • 🌍 Low-Resource & Multilingual NLP – Uyghur, morphologically rich languages, and cross-lingual transfer

You are more than what you have become!

📰 News

๐Ÿ† 2025

  • 29 Sep 2025 – 🎉 Our paper “WhisMultiNet: Advancing End-to-End Speech Topic Classification with Whisper and MultiGateGNN” has been accepted by IEEE Transactions on Audio, Speech, and Language Processing (T-ASLP)! Thanks to Xiaozhe Qi!
  • 04 Sep 2025 – 🎉 Our paper “Disentangling Speech Representations Learning with Latent Diffusion for Speaker Verification” accepted by IEEE Transactions on Audio, Speech, and Language Processing (T-ASLP)! Thanks to Prof. Mak!
  • 20 Aug 2025 – 🎉 One paper accepted to EMNLP 2025 – see you in Suzhou, China!
  • 18 Jun 2025 – 🎉 One paper accepted to MICCAI 2025 – see you in Daejeon, South Korea!
  • 14 Jun 2025 – 🎉 Our paper “Mutual Information-Enhanced Contrastive Learning with Margin for Maximal Speaker Separability” accepted by IEEE/ACM T-ASLP. Thanks to Prof. Mak!
  • 19 May 2025 – 🎉 Two papers accepted to Interspeech 2025 – see you in Rotterdam, the Netherlands!
  • 04 Mar 2025 – 🧑🏻‍🏫 Paper Sharing Session: I gave a talk on “Spectral-Aware Low-Rank Adaptation for Speaker Verification” (ICASSP 2025).
  • 11 Feb 2025 – 🧑🏻‍💻 Joined Microsoft Research Asia (MSRA) as a Research Intern, focusing on multimodal large models for healthcare.

๐Ÿ† 2024

  • 21 Dec 2024 – 🎉 Four papers accepted to ICASSP 2025 – see you in Hyderabad, India!
  • 04 Dec 2024 – 🏅 “Enhancing Multimodal Rumor Detection with Statistical Image Features and Modal Alignment via Contrastive Learning” received the Best Student Paper Runner-Up Award 🥈 at PRICAI 2024.
  • 17 Jun 2024 – 🧑🏻‍🏫 Paper Sharing Session: “Parameter-efficient Fine-tuning of Speaker-Aware Dynamic Prompts for Speaker Verification” (Interspeech 2024).
  • 03 Apr 2024 – 🧑🏻‍🏫 Paper Sharing Session: “Dual Parameter-Efficient Fine-Tuning for Speaker Representation via Speaker Prompt Tuning and Adapters” (ICASSP 2024).

🎤 2023

  • 08 Dec 2023 – Presented “Maximal Speaker Separability via Robust Speaker Representation Learning” at NCMMSC 2023, Suzhou, China.
  • 03 Dec 2023 – Presented “Maximal Speaker Separability via Contrastive Learning with Angular Margin and Class-Aware Attention for Hard Samples” at the International Doctoral Forum 2023, Hong Kong SAR.

📚 2023–2020

  • 15 May 2023 – Paper Sharing Session: “Discriminative Speaker Representation via Contrastive Learning with Class-Aware Attention in Angular Space” (ICASSP 2023).
  • 01 Jul 2022 – Participant Talk: Spoke on speaker verification at the Odyssey-CNSRC Workshop 2022.
  • 29 May 2021 – 🎓 Completed my Master’s oral examination.
  • 14 Nov 2020 – 🏅 CAAI Award: Received the Excellent Scientific and Technological Achievements Award of the Chinese Association for Artificial Intelligence.
  • 29 Oct 2020 – Video: Uploaded my CCL 2020 oral presentation.
  • 11 Oct 2020 – Video: Uploaded my CCMT 2020 oral presentation.
