
MLC-SLM Challenge Registration Is in Full Swing

LOS ANGELES, CA, UNITED STATES, May 14, 2026 /EINPresswire.com/ -- The 2nd Multilingual Conversational Speech Language Model Challenge (MLC-SLM Challenge 2026) is now open for registration!

With the rapid development of large language models (LLMs) and speech language models (Speech LLMs), speech recognition and spoken language understanding are moving toward unified modeling. However, real-world multilingual conversational scenarios still present major challenges, including language diversity, accent variation, speaker turns, complex dialogue structures, and insufficient semantic understanding. Results from the first MLC-SLM Challenge showed that Speech LLMs have achieved strong performance in speech recognition, while there remains significant room for further exploration in speaker diarization and deeper speech understanding for complex multilingual conversations. Building on this, the 2nd MLC-SLM Challenge aims to further advance Speech LLMs in speaker diarization, acoustic understanding, and semantic understanding.

The training set for this year's challenge has been further expanded from the first edition, adding more language variants and accents such as Canadian French, Mexican Spanish, and Brazilian Portuguese. The training data totals approximately 2,100 hours and covers around 14 languages, providing richer and more realistic data support for research on multilingual conversational speech language models.

Major update: the official baseline systems for this year's challenge have now been released!

Task 1 focuses on multilingual conversational speech speaker diarization and recognition. The baseline system is built on Microsoft's open-source VibeVoice-ASR model and fine-tuned with the challenge training set.

Task 2 focuses on multilingual conversational speech understanding. The baseline system uses Gemini 2.5 Pro to construct multiple-choice questions for acoustic and semantic understanding, and is fine-tuned from Qwen2.5-Omni-7B using the ms-swift toolkit.
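For teams new to ms-swift, a fine-tuning run of this kind can be launched from the command line. The sketch below is illustrative only, not the committee's official recipe: the dataset path and all hyperparameters are assumptions, and participants should consult the released baseline code for the exact configuration.

```shell
# Hypothetical LoRA fine-tune of Qwen2.5-Omni-7B with the ms-swift toolkit.
# "mcq_train.jsonl" is a placeholder for the challenge's multiple-choice
# training data; batch size, learning rate, and epochs are example values.
swift sft \
  --model Qwen/Qwen2.5-Omni-7B \
  --dataset mcq_train.jsonl \
  --train_type lora \
  --torch_dtype bfloat16 \
  --num_train_epochs 1 \
  --per_device_train_batch_size 1 \
  --learning_rate 1e-4 \
  --output_dir output/task2_baseline
```

The official baseline repository should be treated as the authoritative reference for data formatting and training settings.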

Participating teams can now refer to the official baseline systems to accelerate system development, experimental validation, and model optimization.

Teams from both academia and industry are continuing to join the challenge. Notably, employees from NVIDIA and JPMorgan Chase have already formed teams to participate, reflecting strong interest from leading global technology and financial institutions in multilingual speech language model technologies.

Whether you work on speech recognition, speaker diarization, speech understanding, multimodal large models, or multilingual data and evaluation, MLC-SLM offers a platform to compete and collaborate with researchers, engineers, and industry teams from around the world.

We welcome universities, research institutions, enterprise teams, and individual researchers to register and participate. Join us in advancing the development of multilingual conversational speech language models!

Registration is ongoing. We look forward to your participation.
Official Website Link: https://www.nexdata.ai/competition/mlc-slm
Registration Link: https://forms.gle/jfAZ95abGy4ZiNHo7

Nexdata
MLC-SLM Competition Committee
mlc-slmw@nexdata.ai
Visit us on social media:
LinkedIn
Facebook
YouTube
X

