https://arab.news/4b4xe
- Developers use translations of the New Testament to collect data
LONDON: Meta announced on Monday that it has created new artificial intelligence models that can recognize more than 4,000 spoken languages and produce speech in more than 1,100.
The newly announced Massively Multilingual Speech, or MMS, project would “help preserve the world’s languages and bring the world closer together,” the social networking giant wrote.
To achieve this, and to give researchers in the field a foundation to build on, Meta said it is open-sourcing MMS via the code-hosting service GitHub.
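The article only notes that the models and code have been released publicly; as a minimal sketch of how a researcher might try the released speech-recognition checkpoints, the example below uses the Hugging Face Transformers integration and the "facebook/mms-1b-all" model identifier, which are assumptions for illustration and are not described in the article.

```python
# Sketch: transcribing a short audio clip with an MMS speech-to-text checkpoint.
# Assumes the Hugging Face Transformers integration and the "facebook/mms-1b-all"
# model id; substitute real 16 kHz mono audio for the placeholder waveform below.
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Select a target language by its ISO code (e.g. "fra" for French);
# MMS swaps in a per-language adapter rather than loading a separate model.
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

# Placeholder: one second of silence at 16 kHz stands in for real speech.
audio = np.zeros(16_000, dtype=np.float32)

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding of the most likely token at each time step.
ids = torch.argmax(logits, dim=-1)[0]
transcription = processor.decode(ids)
print(transcription)
```

Switching languages only requires loading a different adapter, which is how a single checkpoint can cover the project's large language inventory.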
To overcome the challenge of collecting audio data for thousands of languages, Meta’s developers turned to religious texts, such as translations of the Bible’s New Testament, which are available in many languages.
The texts “have been widely studied for text-based language translation research,” wrote Meta, and “have publicly available audio recordings of people reading these texts in different languages.”
This unconventional approach provided about 32 hours of data per language.
And although much of this audio was read by male speakers, Meta said its models perform equally well for female voices.
Meta added that it wants to expand MMS to cover more languages and to handle dialects as well.