I published a lightning talk I gave at MozWeek on the architecture of our translations training pipeline. It covers how we are scaling our training infrastructure to ship new language models, and how the models themselves are trained. Our models are shrunk down to ~17 MB and run locally and privately on end users' machines.
https://www.youtube.com/watch?v=TfDEAYCeF6s
=> More information about this toot | More toots from gregtatum@fosstodon.org
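For a sense of how a translation model can land at roughly 17 MB, here is a back-of-the-envelope sketch. This is my own illustration, not code from the talk or the pipeline: the parameter count is a guessed figure for a small distilled "student" model, and the sizes assume plain int8 quantization of 32-bit weights.

```python
# Hypothetical illustration (not the actual Firefox pipeline): rough
# arithmetic for how a small distilled model plus int8 quantization can
# come in around ~17 MB on disk.

def model_size_mb(num_params: int, bytes_per_param: float) -> float:
    """Approximate size of a model's weights in megabytes."""
    return num_params * bytes_per_param / (1024 ** 2)

# Assumed parameter count for a small student model; the real number
# used in the pipeline is not given here, so this is a guess.
student_params = 17_000_000

fp32_mb = model_size_mb(student_params, 4.0)  # 32-bit floats
int8_mb = model_size_mb(student_params, 1.0)  # quantized to 8-bit ints

print(f"fp32: {fp32_mb:.1f} MB, int8: {int8_mb:.1f} MB")
# fp32: 64.8 MB, int8: 16.2 MB -- quantization alone gives ~4x savings,
# which is what makes shipping the model to end users' machines practical.
```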
@gregtatum Great work on these #Firefox #translation models. The video made me realize just how much work is behind this feature! 👏
=> More information about this toot | More toots from rubencapiau@mastodon.social