Meta’s new AI takes us a step closer to a universal language translator

Meta has taken another step towards creating a universal language translator.

The company has open-sourced an AI model that translates over 200 languages — many of which aren’t supported by existing systems.

The research is part of a Meta initiative launched earlier this year.

“We call this project No Language Left Behind, and the AI modeling techniques we used from NLLB are helping us make high quality translations on Facebook and Instagram for languages spoken by billions of people around the world,” Meta CEO Mark Zuckerberg said in a Facebook post.

NLLB focuses on lower-resource languages, such as Māori or Maltese. Collectively, such languages are spoken by billions of people, but they lack the large volumes of training data that AI translation systems typically require.

Meta’s new model was designed to overcome this challenge.

To do this, the researchers first interviewed speakers of underserved languages to understand their needs. They then developed a data mining technique that automatically finds matching sentence pairs across languages in web text, producing training data for low-resource languages.
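Meta's actual pipeline relies on learned multilingual sentence encoders to mine parallel text at web scale. As a purely illustrative sketch of the idea, with tiny hand-made vectors standing in for real sentence embeddings (the sentences and numbers below are invented for the example), mining reduces to pairing each source sentence with its nearest-neighbor target above a similarity threshold:

```python
import numpy as np

# Toy stand-ins for multilingual sentence embeddings; a real system
# would compute these with a learned encoder such as LASER.
src = {
    "The cat sleeps.": np.array([0.9, 0.1, 0.0]),
    "It is raining.":  np.array([0.0, 0.8, 0.2]),
}
tgt = {  # illustrative Maori candidates scraped from the web
    "Ka moe te ngeru.": np.array([0.88, 0.12, 0.05]),
    "Kei te ua.":       np.array([0.05, 0.82, 0.15]),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mine_pairs(src, tgt, threshold=0.9):
    """Pair each source sentence with its most similar target sentence,
    keeping only pairs whose similarity clears the threshold."""
    pairs = []
    for s, sv in src.items():
        best_t, best_v = max(tgt.items(), key=lambda kv: cosine(sv, kv[1]))
        if cosine(sv, best_v) >= threshold:
            pairs.append((s, best_t))
    return pairs

print(mine_pairs(src, tgt))
```

Mined pairs like these then serve as (noisy) translation examples, which is why they are blended with human-translated data during training.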

Next, they trained their model on a mix of the mined data and human-translated data.

The result is NLLB-200 — a massive multilingual translation system for 202 languages.

The team assessed the model's performance on FLORES-101, a benchmark dataset designed to evaluate translation quality, including for low-resource languages.

“Despite doubling the number of languages, our final model performs 40% better than the previous state of the art on Flores-101,” the study authors wrote.
