I published a lightning talk I gave at MozWeek on the architecture of our translations training pipeline. It covers how we are scaling our training infrastructure to ship new language models, and how the models themselves are trained. Our models are shrunk down to ~17 MB and run locally and privately on end users' machines.
https://www.youtube.com/watch?v=TfDEAYCeF6s