OpenAI is so cooked and I'm all here for it
https://lemmy.dbzer0.com/post/34984867
=> More information about this toot | More toots from db0@lemmy.dbzer0.com
The CEO personally chose a price too low for the company to be profitable.
What a clown.
=> More information about this toot | More toots from MNByChoice@midwest.social
More like he misjudged subscriber numbers than price.
=> More information about this toot | More toots from shalafi@lemmy.world
please explain to us how you think having fewer, or more, subscribers would make this profitable
=> More information about this toot | More toots from sc_griffith@awful.systems
LLM inference can be batched, reducing the cost per request. If you have too few customers, you can’t fill the optimal batch size.
That said, the optimal batch size on today’s hardware is not big (<20). I would be very very surprised if they couldn’t fill it.
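A minimal sketch of that amortization argument, with made-up cost numbers (the fixed per-pass cost and per-request increment below are assumptions, not measurements of any real model or GPU): batching spreads the roughly fixed cost of one forward pass over every request in the batch, so per-request cost drops quickly and then flattens.

```python
# Illustrative only: why batching lowers per-request inference cost.
# All numbers are invented; real costs depend on model, hardware, and
# sequence lengths.

FIXED_COST_PER_FORWARD_PASS = 1.0   # e.g. streaming weights from memory, dominant at batch size 1
MARGINAL_COST_PER_REQUEST = 0.05    # extra activation / KV-cache work each added request costs

def cost_per_request(batch_size: int) -> float:
    """Total cost of one batched forward pass divided by the requests it serves."""
    total = FIXED_COST_PER_FORWARD_PASS + MARGINAL_COST_PER_REQUEST * batch_size
    return total / batch_size

for b in (1, 2, 4, 8, 16, 32):
    print(f"batch {b:>2}: {cost_per_request(b):.3f} cost units per request")

# The per-request figure falls steeply at first, then levels off: once the
# fixed cost is amortized, extra requests barely help (and memory for KV
# caches runs out), which is why the "optimal" batch size is finite.
```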
=> More information about this toot | More toots from BB84@mander.xyz
i would swear that in an earlier version of this message the optimal batch size was estimated to be as large as twenty.
=> More information about this toot | More toots from mawhrin@awful.systems
yep, original is still visible on mastodon
=> More information about this toot | More toots from self@awful.systems