OpenAI is so cooked and I'm all here for it
https://lemmy.dbzer0.com/post/34984867
=> More information about this toot | More toots from db0@lemmy.dbzer0.com
CEO personally chose a price too low for company to be profitable.
What a clown.
=> More information about this toot | More toots from MNByChoice@midwest.social
More like he misjudged subscriber numbers than price.
=> More information about this toot | More toots from shalafi@lemmy.world
please explain to us how you think having fewer, or more, subscribers would make this profitable
=> More information about this toot | More toots from sc_griffith@awful.systems
LLM inference can be batched, reducing the cost per request. If you have too few customers, you can’t fill the optimal batch size.
That said, the optimal batch size on today’s hardware is not big (<20). I would be very very surprised if they couldn’t fill it.
=> More information about this toot | More toots from BB84@mander.xyz
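[Editorial aside: a minimal sketch of the batching argument in the post above. The numbers (GPU cost, batch latency, "optimal" batch size) are invented for illustration and are not from the thread or from OpenAI; the point is only that a batched forward pass costs roughly the same however many requests share it, so an under-filled batch raises the cost per request.]

```python
# Toy cost model: one batched forward pass costs about the same whether it
# serves 1 request or a full batch, so per-request cost is the batch cost
# divided by however many requests actually filled the batch.

GPU_SECOND_COST = 0.0005   # assumed $/GPU-second, purely illustrative
BATCH_LATENCY_S = 0.5      # assumed seconds per batched forward pass
OPTIMAL_BATCH = 16         # "optimal batch size" in the post's sense (<20)

def cost_per_request(requests_in_batch: int) -> float:
    """Cost of one batched pass split across the requests that filled it."""
    batch_cost = GPU_SECOND_COST * BATCH_LATENCY_S
    return batch_cost / requests_in_batch

for n in (1, 4, OPTIMAL_BATCH):
    print(f"{n:>2} requests/batch -> ${cost_per_request(n):.6f} per request")
```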
this sounds like an attempt to demand others disprove the assertion that they’re losing money, in a discussion of an article about Sam saying they’re losing money
=> More information about this toot | More toots from dgerard@awful.systems
What? I’m not doubting what he said. Just surprised. Look at this. I really hope Sam IPOs his company so I can short it.
=> More information about this toot | More toots from BB84@mander.xyz
oh, so you’re that kind of fygm asshole
good to know
=> More information about this toot | More toots from froztbyte@awful.systems
Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.
@sc_griffith@awful.systems asked how request frequency might impact cost per request. Batch inference is a reason (ask anyone in the self-hosted LLM community). I noted that this reason only applies at very small scale, probably much smaller than what OpenAI is operating at.
@dgerard@awful.systems why did you say I am demanding someone disprove the assertion? Are you misunderstanding “I would be very very surprised if they couldn’t fill [the optimal batch size] for any few-seconds window” to mean “I would be very very surprised if they are not profitable”?
The tweet I linked shows that LLM inference can be done much more cheaply and efficiently. I am saying that OpenAI is very inefficient and thus economically “cooked”, as the post title would have it. How does this make me FYGM? @froztbyte@awful.systems
=> More information about this toot | More toots from BB84@mander.xyz
> Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.
my god! let me fix that
=> More information about this toot | More toots from self@awful.systems
i would swear that in an earlier version of this message the optimal batch size was estimated to be as large as twenty.
=> More information about this toot | More toots from mawhrin@awful.systems
yep, original is still visible on mastodon
=> More information about this toot | More toots from self@awful.systems