Ancestors

Written by db0 on 2025-01-06 at 10:52

OpenAI is so cooked and I'm all here for it

https://lemmy.dbzer0.com/post/34984867

=> View attached media

=> More information about this toot | More toots from db0@lemmy.dbzer0.com

Written by MNByChoice@midwest.social on 2025-01-06 at 13:04

CEO personally chose a price too low for the company to be profitable.

What a clown.

=> More information about this toot | More toots from MNByChoice@midwest.social

Toot

Written by shalafi@lemmy.world on 2025-01-06 at 15:47

More like he misjudged subscriber numbers than price.

=> More information about this toot | More toots from shalafi@lemmy.world

Descendants

Written by froztbyte@awful.systems on 2025-01-06 at 16:18

despite that one episode of Leverage where they did some laundering by way of gym memberships, not every shady bullshit business that burns way more than they make can just swizzle the numbers!

(also if you spend maybe half a second thinking about it you’d realize that economies of scale only apply when you can actually have economies of scale. which they can’t. which is why they’re constantly setting more money on fire the harder they try to make their bad product seem good)

=> More information about this toot | More toots from froztbyte@awful.systems

Written by sc_griffith@awful.systems on 2025-01-06 at 18:08

please explain to us how you think having fewer, or more, subscribers would make this profitable

=> More information about this toot | More toots from sc_griffith@awful.systems

Written by EldritchFeminity@lemmy.blahaj.zone on 2025-01-06 at 18:16

Yeah, the tweet clearly says that the subscribers they have are using it more than they expected, which is costing them more than $200 per month per subscriber just to run it.

I could see an argument for an economies-of-scale kind of situation where adding more users would offset the cost per user, but it seems like here that would just increase their overhead, making the problem worse.
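A toy model of that argument (all figures invented for illustration): spreading fixed costs over more users only drives the average cost down toward the per-user inference cost, so if that floor already sits above the subscription price, adding users just deepens the hole.

```python
# Hypothetical numbers only: average cost per subscriber as fixed
# costs are spread across a growing user base. The average can never
# drop below the marginal (per-user inference) cost, so a price set
# below that floor loses money at any scale.

FIXED_COST = 1_000_000    # invented fixed monthly cost, $
MARGINAL_COST = 250       # invented inference cost per subscriber, $/month
PRICE = 200               # the $200/month Pro price

for n in (1_000, 10_000, 100_000):
    avg_cost = FIXED_COST / n + MARGINAL_COST
    print(f"{n:>7,} subs: avg cost ${avg_cost:,.0f}, margin ${PRICE - avg_cost:,.0f}")
```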

=> More information about this toot | More toots from EldritchFeminity@lemmy.blahaj.zone

Written by BB84@mander.xyz on 2025-01-06 at 20:16

LLM inference can be batched, reducing the cost per request. If you have too few customers, you can’t fill the optimal batch size.

That said, the optimal batch size on today’s hardware is not big (<20). I would be very very surprised if they couldn’t fill it.
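A rough sketch of the batching point (illustrative costs, not OpenAI's): one batched forward pass costs about the same whether it carries a single request or a full batch, so the per-request cost falls until the batch is full and then flattens out; past that point, more traffic no longer makes requests cheaper.

```python
# Invented cost figure: cost per request under batched inference.
# Once there are enough concurrent requests to fill the optimal
# batch, additional demand stops reducing the per-request cost.

PASS_COST = 0.04       # hypothetical cost of one batched forward pass, $
OPTIMAL_BATCH = 20     # the "<20" optimal batch size mentioned above

for waiting in (1, 5, 20, 100):
    batch = min(waiting, OPTIMAL_BATCH)
    print(f"{waiting:>3} concurrent requests -> batch of {batch}, "
          f"${PASS_COST / batch:.4f} per request")
```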

=> More information about this toot | More toots from BB84@mander.xyz

Written by David Gerard on 2025-01-07 at 01:08

this sounds like an attempt to demand others disprove the assertion that they’re losing money, in a discussion of an article about Sam saying they’re losing money

=> More information about this toot | More toots from dgerard@awful.systems

Written by BB84@mander.xyz on 2025-01-07 at 02:49

What? I’m not doubting what he said. Just surprised. Look at this. I really hope Sam IPOs his company so I can short it.

=> More information about this toot | More toots from BB84@mander.xyz

Written by froztbyte@awful.systems on 2025-01-07 at 05:15

oh, so you’re that kind of fygm asshole

good to know

=> More information about this toot | More toots from froztbyte@awful.systems

Written by BB84@mander.xyz on 2025-01-07 at 06:51

Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.

@sc_griffith@awful.systems asked how request frequency might impact cost per request. Batch inference is a reason (ask anyone in the self-hosted LLM community). I noted that this reason only applies at very small scale, probably much smaller than what OpenAI is operating at.

@dgerard@awful.systems why did you say I am demanding someone disprove the assertion? Are you misunderstanding “I would be very very surprised if they couldn’t fill [the optimal batch size] for any few-seconds window” to mean “I would be very very surprised if they are not profitable”?

The tweet I linked shows that LLM inference can be done much more cheaply and efficiently. I am saying that OpenAI is very inefficient and thus economically “cooked”, as the post title has it. How does this make me FYGM? @froztbyte@awful.systems

=> More information about this toot | More toots from BB84@mander.xyz

Written by self@awful.systems on 2025-01-07 at 09:35

> Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.

my god! let me fix that

=> More information about this toot | More toots from self@awful.systems

Written by flere-imsaho on 2025-01-07 at 16:48

i would swear that in an earlier version of this message the optimal batch size was estimated to be as large as twenty.

=> More information about this toot | More toots from mawhrin@awful.systems

Written by self@awful.systems on 2025-01-07 at 17:02

yep, original is still visible on mastodon

=> More information about this toot | More toots from self@awful.systems

Written by V0ldek@awful.systems on 2025-01-07 at 02:49

Wait but he controls the price, not the subscriber number?

Like even if the issue was low subscriber numbers (which it isn’t, since they’re losing money per subscriber; more subscribers just makes you lose money faster), that’s still the same category of mistake? You control the price and supply, not the demand. You can’t set a stupid price that loses you money and then be like “ah, not my fault, demand was too low”. Like, bozo, it’s your product and you set the price.
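The unit economics behind that point, as a minimal sketch (the cost figure is invented): with a negative per-subscriber margin, subscriber count only scales the loss, and price is the one lever the seller actually controls.

```python
# Invented per-subscriber cost: if price < cost, every additional
# subscriber makes the monthly loss larger, not smaller.

COST_PER_SUB = 300    # hypothetical monthly cost per Pro subscriber, $
PRICE = 200           # the chosen price

def monthly_profit(subscribers: int) -> int:
    return subscribers * (PRICE - COST_PER_SUB)

print(monthly_profit(10_000))    # -1000000
print(monthly_profit(100_000))   # -10000000 (10x the subscribers, 10x the loss)
```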

=> More information about this toot | More toots from V0ldek@awful.systems

Written by froztbyte@awful.systems on 2025-01-07 at 05:19

I believe our esteemed poster was referencing the oft-seen cloud dynamic of “making just enough in margin” where you can tolerate a handful of big users because you have enough lower-usage subscribers in aggregate to counter the heavies. which, y’know, still requires the margin to exist in the first place

alas, hard to have margins in Setting The Money On Fire business models
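For illustration, the cross-subsidy dynamic being described, with invented numbers: a flat price works when many light users carry a few heavy ones, but only while the blended margin stays positive in the first place.

```python
# Invented usage mix: flat-price cross-subsidy only works while the
# light users' surplus covers the heavy users' deficit.

PRICE = 200  # flat monthly price, $

def blended_margin(light_n, light_cost, heavy_n, heavy_cost):
    revenue = (light_n + heavy_n) * PRICE
    cost = light_n * light_cost + heavy_n * heavy_cost
    return revenue - cost

print(blended_margin(9_000, 50, 1_000, 900))    # 650000: light users carry the heavies
print(blended_margin(9_000, 210, 1_000, 900))   # -790000: no margin anywhere, scale just burns faster
```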

=> More information about this toot | More toots from froztbyte@awful.systems
