Toot

Written by Greg Tatum on 2025-01-06 at 18:18

I published a lightning talk I gave at MozWeek on the architecture of our translations training pipeline. It covers how we are scaling our training infrastructure to ship new language models and how the models themselves are trained. Our models are shrunk down to ~17 MB and run locally and privately on end users' machines.

https://www.youtube.com/watch?v=TfDEAYCeF6s

