AI that can synthesize and process language with ease has made rapid strides in the last few years, with NLP teams delivering key breakthroughs like WaveNet, which powers the Google Assistant. But of the 7,000+ languages in the world, only a few hundred are represented by text-to-speech and translation engines. The world's most dominant languages, like Chinese and English, claim the vast majority of the budget and resources. Economically it makes sense, because roughly 100 languages are spoken by more than 50% of the world. But thousands of other languages are spoken by the rest of the planet's population, and they are poorly represented by today's AI. In this episode, Daniel Jeffries, Chief Technical Evangelist for Pachyderm, the data lineage platform for machine learning operations (MLOps), and Daniel Whitenack, data scientist and language specialist at SIL International, talk about how AI can help us scale translation for the rest of us.