Ancestors

Written by db0 on 2025-01-06 at 10:52

OpenAI is so cooked and I'm all here for it

https://lemmy.dbzer0.com/post/34984867

=> View attached media

=> More information about this toot | More toots from db0@lemmy.dbzer0.com

Written by MNByChoice@midwest.social on 2025-01-06 at 13:04

CEO personally chose a price too low for company to be profitable.

What a clown.

=> More information about this toot | More toots from MNByChoice@midwest.social

Written by shalafi@lemmy.world on 2025-01-06 at 15:47

More like he misjudged subscriber numbers than price.

=> More information about this toot | More toots from shalafi@lemmy.world

Written by sc_griffith@awful.systems on 2025-01-06 at 18:08

please explain to us how you think having fewer, or more, subscribers would make this profitable

=> More information about this toot | More toots from sc_griffith@awful.systems

Written by BB84@mander.xyz on 2025-01-06 at 20:16

LLM inference can be batched, reducing the cost per request. If you have too few customers, you can’t fill the optimal batch size.

That said, the optimal batch size on today’s hardware is not big (<20). I would be very very surprised if they couldn’t fill it.
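BB84's point, that batching amortizes a fixed per-pass cost across all requests in the batch, can be sketched with a toy cost model. The numbers and the exact cost split below are illustrative assumptions, not real OpenAI or GPU figures:

```python
# Toy model of batched LLM inference cost (illustrative, not measured):
# each forward pass pays a fixed cost (e.g. streaming model weights
# through the accelerator) that is shared by every request in the
# batch, plus a small per-request cost for that request's activations.

FIXED_COST_PER_PASS = 100.0  # assumed fixed cost of one forward pass
COST_PER_REQUEST = 5.0       # assumed marginal cost per request in the batch

def cost_per_request(batch_size: int) -> float:
    """Average cost of serving one request at a given batch size."""
    return FIXED_COST_PER_PASS / batch_size + COST_PER_REQUEST

for n in (1, 4, 16):
    print(f"batch size {n:2d}: {cost_per_request(n):.2f} per request")
```

With these made-up numbers the per-request cost falls from 105.00 at batch size 1 to 11.25 at batch size 16, which is why an operator with too few concurrent customers to fill its batches would pay much more per request.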

=> More information about this toot | More toots from BB84@mander.xyz

Written by flere-imsaho on 2025-01-07 at 16:48

i would swear that in an earlier version of this message the optimal batch size was estimated to be as large as twenty.

=> More information about this toot | More toots from mawhrin@awful.systems

Toot

Written by self@awful.systems on 2025-01-07 at 17:02

yep, original is still visible on mastodon

=> More information about this toot | More toots from self@awful.systems

Descendants

Proxy Information
Original URL
gemini://mastogem.picasoft.net/thread/113788146464599901

This content has been proxied by September (3851b).