Ancestors

Toot

Written by db0 on 2025-01-06 at 10:52

OpenAI is so cooked and I'm all here for it

https://lemmy.dbzer0.com/post/34984867

=> View attached media

=> More information about this toot | More toots from db0@lemmy.dbzer0.com

Descendants

Written by rockSlayer@lemmy.world on 2025-01-06 at 11:21

The plagiarism power virus is too expensive to operate? I’m shocked I tell you

=> More information about this toot | More toots from rockSlayer@lemmy.world

Written by Soyweiser@awful.systems on 2025-01-07 at 14:55

Now imagine if they actually paid for the training data as well.

=> More information about this toot | More toots from Soyweiser@awful.systems

Written by DavidGarcia@feddit.nl on 2025-01-06 at 12:30

really looking forward to how these multi-billion dollar AI datacenter investments will work out for big tech companies

that said I’m pretty sure most of that capacity is reserved for the surveillance state anyway

=> More information about this toot | More toots from DavidGarcia@feddit.nl

Written by AlexWIWA@lemmy.ml on 2025-01-07 at 19:40

I’m excited for the used hardware dump

=> More information about this toot | More toots from AlexWIWA@lemmy.ml

Written by Mikina@programming.dev on 2025-01-06 at 12:35

Hmm, we should get together some funds to buy a single unlimited subscription, and then let it continuously generate prompts as large and complex as the rate limiting allows.

=> More information about this toot | More toots from Mikina@programming.dev

Written by chunkystyles@sopuli.xyz on 2025-01-06 at 13:56

Buy two. Ask the other to generate expensive prompts.

=> More information about this toot | More toots from chunkystyles@sopuli.xyz

Written by apotheotic (she/her) on 2025-01-06 at 13:56

On one hand, heck yes. On the other, part of the reason it’s so expensive is the energy and water usage, so sticking it to the man this way is also harmful to the environment :(

=> More information about this toot | More toots from apotheotic@beehaw.org

Written by areyouevenreal@lemm.ee on 2025-01-07 at 07:52

Normally the people talking about water use have no idea what they’re talking about. Data center cooling is usually a closed loop, much like a car’s cooling system, so they don’t use significant amounts of water at all.

=> More information about this toot | More toots from areyouevenreal@lemm.ee

Written by skillissuer@discuss.tchncs.de on 2025-01-07 at 09:30

hey shithead, what’s evaporative cooling, and why does metric-chasing design (COE in this case) like it so much?

=> More information about this toot | More toots from skillissuer@discuss.tchncs.de

Written by MNByChoice@midwest.social on 2025-01-06 at 13:04

CEO personally chose a price too low for company to be profitable.

What a clown.

=> More information about this toot | More toots from MNByChoice@midwest.social

Written by Sergio@slrpnk.net on 2025-01-06 at 15:26

They’re still in the first stage of enshittification: gaining market share. In fact, this is probably all just a marketing scheme. “Hi! I’m Crazy Sam Altman and my prices are SO LOW that I’m LOSING MONEY!! Tell your friends and subscribe now!”

=> More information about this toot | More toots from Sergio@slrpnk.net

Written by skittle07crusher@sh.itjust.works on 2025-01-06 at 21:55

I’m afraid it might be more like Uber, or Funko, apparently, as I just learned tonight.

Sustained somehow for decades before finally turning any profit. Pumped full of cash like it’s foie gras by Wall Street. Inorganic as fuck, promoted like hell by Wall Street, VC, and/or private equity.

Shoved down our throats in the end.

=> More information about this toot | More toots from skittle07crusher@sh.itjust.works

Written by isles@lemmy.world on 2025-01-06 at 21:59

It was worth it to finally dethrone Big Taxi🙄

=> More information about this toot | More toots from isles@lemmy.world

Written by shalafi@lemmy.world on 2025-01-06 at 15:47

More like he misjudged subscriber numbers than the price.

=> More information about this toot | More toots from shalafi@lemmy.world

Written by froztbyte@awful.systems on 2025-01-06 at 16:18

despite that one episode of Leverage where they did some laundering by way of gym memberships, not every shady bullshit business that burns way more than they make can just swizzle the numbers!

(also if you spend maybe half a second thinking about it you’d realize that economies of scale only apply when you can actually have economies of scale. which they can’t. which is why they’re constantly setting more money on fire the harder they try to make their bad product seem good)

=> More information about this toot | More toots from froztbyte@awful.systems

Written by sc_griffith@awful.systems on 2025-01-06 at 18:08

please explain to us how you think having fewer, or more, subscribers would make this profitable

=> More information about this toot | More toots from sc_griffith@awful.systems

Written by EldritchFeminity@lemmy.blahaj.zone on 2025-01-06 at 18:16

Yeah, the tweet clearly says that the subscribers they have are using it more than they expected, which is costing them more than $200 per month per subscriber just to run it.

I could see an argument for an economies-of-scale situation where adding more users would offset the cost per user, but it seems like here that would just increase their overhead, making the problem worse.

=> More information about this toot | More toots from EldritchFeminity@lemmy.blahaj.zone

Written by BB84@mander.xyz on 2025-01-06 at 20:16

LLM inference can be batched, reducing the cost per request. If you have too few customers, you can’t fill the optimal batch size.

That said, the optimal batch size on today’s hardware is not big (<20). I would be very very surprised if they couldn’t fill it.
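For readers unfamiliar with batching: the point is that one GPU forward pass costs roughly the same whether it serves one request or a full batch, so the cost per request falls as the batch fills. A toy sketch of that amortization, with made-up numbers (these are not anyone's actual costs):

```python
# Toy model of batched LLM inference economics. A GPU-second costs the same
# whether the forward pass serves one request or twenty, so cost per request
# falls with batch size. Both constants below are hypothetical placeholders.

GPU_COST_PER_SECOND = 0.001      # hypothetical $/GPU-second
SECONDS_PER_FORWARD_PASS = 0.5   # hypothetical latency of one batched pass

def cost_per_request(batch_size: int) -> float:
    """Cost of one request when `batch_size` requests share a forward pass."""
    return GPU_COST_PER_SECOND * SECONDS_PER_FORWARD_PASS / batch_size

for b in (1, 4, 20):
    print(f"batch={b:2d}  cost/request=${cost_per_request(b):.6f}")
```

Too few customers means batches go out partially empty and the per-request cost climbs back toward the batch-of-one price.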

=> More information about this toot | More toots from BB84@mander.xyz

Written by David Gerard on 2025-01-07 at 01:08

this sounds like an attempt to demand others disprove the assertion that they’re losing money, in a discussion of an article about Sam saying they’re losing money

=> More information about this toot | More toots from dgerard@awful.systems

Written by BB84@mander.xyz on 2025-01-07 at 02:49

What? I’m not doubting what he said. Just surprised. Look at this. I really hope Sam IPOs his company so I can short it.

=> More information about this toot | More toots from BB84@mander.xyz

Written by froztbyte@awful.systems on 2025-01-07 at 05:15

oh, so you’re that kind of fygm asshole

good to know

=> More information about this toot | More toots from froztbyte@awful.systems

Written by BB84@mander.xyz on 2025-01-07 at 06:51

Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.

@sc_griffith@awful.systems asked how request frequency might impact cost per request. Batch inference is a reason (ask anyone in the self-hosted LLM community). I noted that this reason only applies at very small scale, probably much smaller than what OpenAI is operating at.

@dgerard@awful.systems why did you say I am demanding someone disprove the assertion? Are you misunderstanding “I would be very very surprised if they couldn’t fill [the optimal batch size] for any few-seconds window” to mean “I would be very very surprised if they are not profitable”?

The tweet I linked shows that LLM inference can be done much more cheaply and efficiently. I am saying that OpenAI is very inefficient and thus economically “cooked”, as the post title has it. How does this make me FYGM? @froztbyte@awful.systems

=> More information about this toot | More toots from BB84@mander.xyz

Written by self@awful.systems on 2025-01-07 at 09:35

Can someone explain why I am being downvoted and attacked in this thread? I swear I am not sealioning. Genuinely confused.

my god! let me fix that

=> More information about this toot | More toots from self@awful.systems

Written by flere-imsaho on 2025-01-07 at 16:48

i would swear that in an earlier version of this message the optimal batch size was estimated to be as large as twenty.

=> More information about this toot | More toots from mawhrin@awful.systems

Written by self@awful.systems on 2025-01-07 at 17:02

yep, original is still visible on mastodon

=> More information about this toot | More toots from self@awful.systems

Written by V0ldek@awful.systems on 2025-01-07 at 02:49

Wait but he controls the price, not the subscriber number?

Like even if the issue was low subscriber numbers (which it isn’t, since they’re losing money per subscriber; more subscribers just makes them lose money faster), that’s still the same category of mistake. You control the price and supply, not the demand. You can’t set a stupid price that loses you money and then be like “ah, not my fault, demand was too low”, like bozo, it’s your product and you set the price

=> More information about this toot | More toots from V0ldek@awful.systems

Written by froztbyte@awful.systems on 2025-01-07 at 05:19

I believe our esteemed poster was referencing the oft-seen cloud dynamic of “making just enough in margin” where you can tolerate a handful of big users because you have enough lower-usage subscribers in aggregate to counter the heavies. which, y’know, still requires the margin to exist in the first place

alas, hard to have margins in Setting The Money On Fire business models

=> More information about this toot | More toots from froztbyte@awful.systems

Written by where_am_i@sh.itjust.works on 2025-01-06 at 23:23

well, yes. But this is also an extremely difficult product to price. $200/month is already insane, but now you’re suggesting they should have gone even more aggressive. It could turn out almost nobody would use it. The optimal price here is a tricky guess.

Although they probably should have sold a “limited subscription”: one that gives you the break-even number of queries per month, or 2x that, but not 100x or unlimited. Otherwise, exactly what happened can happen.
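The capped-subscription idea fits in a few lines; every price and cost below is a hypothetical placeholder, not a real figure:

```python
# Sketch of a "limited subscription": cap monthly queries at a small multiple
# of the break-even volume instead of offering unlimited usage.

MONTHLY_PRICE = 200.0   # hypothetical subscription price in dollars
COST_PER_QUERY = 0.25   # hypothetical serving cost per query in dollars
CAP_MULTIPLIER = 2      # allow up to 2x the break-even volume

break_even_queries = int(MONTHLY_PRICE / COST_PER_QUERY)
monthly_cap = break_even_queries * CAP_MULTIPLIER

def allow_query(queries_used_this_month: int) -> bool:
    """Reject further queries once a subscriber passes the monthly cap."""
    return queries_used_this_month < monthly_cap
```

With the cap in place, the worst-case loss per subscriber is bounded by the cap times the per-query cost, rather than unbounded.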

=> More information about this toot | More toots from where_am_i@sh.itjust.works

Written by V0ldek@awful.systems on 2025-01-07 at 02:54

“Our product that costs metric kilotons of money to produce but provides little-to-no value is extremely difficult to price” oh no, damn, ye, that’s a tricky one

=> More information about this toot | More toots from V0ldek@awful.systems

Written by stoly@lemmy.world on 2025-01-07 at 03:40

The real problem is believing that you can run a profitable LLM company.

=> More information about this toot | More toots from stoly@lemmy.world

Written by Saledovil@sh.itjust.works on 2025-01-07 at 11:19

What the LLMs do, at the end of the day, is statistics. If you want a more precise model, you need to make it larger. Basically, exponentially scaling marginal costs meet exponentially decaying marginal utility.

=> More information about this toot | More toots from Saledovil@sh.itjust.works

Written by stoly@lemmy.world on 2025-01-07 at 15:51

Some LLM bros must have seen this comment and become offended.

=> More information about this toot | More toots from stoly@lemmy.world

Written by self@awful.systems on 2025-01-07 at 16:05

guess again

what the locals are probably taking issue with is:

If you want a more precise model, you need to make it larger.

this shit doesn’t get more precise for its advertised purpose when you scale it up. LLMs are garbage technology that plateaued a long time ago and are extremely ill-suited for anything but generating spam; any claims of increased precision (like those that openai makes every time they need more money or attention) are marketing that falls apart the moment you dig deeper — unless you’re the kind of promptfondler who needs LLMs to be good and workable just because it’s technology and because you’re all-in on the grift

=> More information about this toot | More toots from self@awful.systems

Written by froztbyte@awful.systems on 2025-01-07 at 16:17

look bro just 10 more reps gpt3s bro itl’ll get you there bro I swear bro

=> More information about this toot | More toots from froztbyte@awful.systems

Written by Saledovil@sh.itjust.works on 2025-01-07 at 16:51

Well, then let me clear it up. The statistics becomes more precise. As in, for a given prefix A, and token x, the difference between the calculated probability of x following A (P(x|A)) to the actual probability of P(x|A) becomes smaller. Obviously, if you are dealing with a novel problem, then the LLM can’t produce a meaningful answer. And if you’re working on a halfway ambitious project, then you’re virtually guaranteed to encounter a novel problem.
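As a toy illustration of what "the statistics becomes more precise" means, here is P(x|A) estimated from raw counts over a one-token prefix and a tiny made-up corpus. A real LLM generalizes with a neural network rather than counting, but the quantity being approximated is the same:

```python
# Estimate P(x | A): the probability that token x follows prefix A,
# from counts in a tiny corpus. More data (and, for a neural model,
# more parameters) shrinks the gap to the true distribution.
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()

# Count how often each token follows each single-token prefix.
following = Counter(zip(corpus, corpus[1:]))
prefix_totals = Counter(corpus[:-1])

def p(x: str, a: str) -> float:
    """Estimated probability that token x follows token a."""
    if prefix_totals[a] == 0:
        return 0.0
    return following[(a, x)] / prefix_totals[a]

print(p("cat", "the"))  # "the" is followed by "cat" in 2 of its 3 occurrences as a prefix
```

A prefix never seen in training gets no meaningful estimate at all, which is the count-based analogue of the novel-problem failure described above.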

=> More information about this toot | More toots from Saledovil@sh.itjust.works

Written by self@awful.systems on 2025-01-07 at 17:02

Obviously, if you are dealing with a novel problem, then the LLM can’t produce a meaningful answer.

it doesn’t produce any meaningful answers for non-novel problems either

=> More information about this toot | More toots from self@awful.systems

Written by confusedbytheBasics@lemm.ee on 2025-01-07 at 20:17

I signed up for API access. I run all my queries through that. I pay per query. I’ve spent about $8.70 since 2021.

This seems like a win-win model. I save hundreds of dollars and they make money on every query I run. I’m confused why there are subscriptions at all.

=> More information about this toot | More toots from confusedbytheBasics@lemm.ee

Written by protist@mander.xyz on 2025-01-06 at 13:14

“I personally chose the price”

Is that how well-run companies operate? The CEO unilaterally decides the price rather than delegating that out to the numbers people they employ?

=> More information about this toot | More toots from protist@mander.xyz

Written by aviationeast on 2025-01-06 at 13:55

Should have asked chatgpt to play the role of a CEO.

=> More information about this toot | More toots from aviationeast@lemmy.world

Written by Black616Angel@discuss.tchncs.de on 2025-01-06 at 16:09

This answer would be much funnier if that wasn’t his fucking plan.

=> More information about this toot | More toots from Black616Angel@discuss.tchncs.de

Written by Grandwolf319@sh.itjust.works on 2025-01-07 at 18:49

Worth the watch just to hear the genuine laughter

=> More information about this toot | More toots from Grandwolf319@sh.itjust.works

Written by David Gerard on 2025-01-07 at 23:11

jesus fuck how did i never see this before

=> More information about this toot | More toots from dgerard@awful.systems

Written by Clinicallydepressedpoochie@lemmy.world on 2025-01-07 at 23:31

This is my first experience listening to this guy, and I’ll be darned, it’s another idiot billionaire.

I’d like to think there are intelligent billionaires, but honestly folks, if you win that big and haven’t cashed out to do something more with your life, you’re an idiot.

=> More information about this toot | More toots from Clinicallydepressedpoochie@lemmy.world

Written by lobut@lemmy.world on 2025-01-06 at 15:07

I think I remember from “The Everything Store” that Jeff Bezos saw a price they charged for AWS and went even lower for growth. So there could be some rationale for that? However, I think switching AI providers is easier than switching cloud providers. Not sure though.

I can imagine the highest users of this being scam artists and stuff though.

I want this AI hype train to die.

=> More information about this toot | More toots from lobut@lemmy.world

Written by rook@awful.systems on 2025-01-06 at 15:11

A real ceo does everything. Delegation is for losers who can’t cope. Can’t move fast enough and break enough things if you’re constantly waiting for your lackeys to catch up.

If those numbers people were cleverer than the ceo, they’d be the ones in charge, and they aren’t. Checkmate. Do you even read Ayn Rand, bro?

=> More information about this toot | More toots from rook@awful.systems

Written by Kitathalla@lemy.lol on 2025-01-06 at 16:35

Is that what Ayn Rand is about? All I really remember is that having a name you chose yourself is self-fulfilling.

=> More information about this toot | More toots from Kitathalla@lemy.lol

Written by zbyte64@awful.systems on 2025-01-06 at 18:32

Oh boy I got a fun video for you: youtu.be/GmJI6qIqURA @26:50

Atlas Shrugged is so bad that if you didn’t know anything about the author, it could be read as a decent satire.

=> More information about this toot | More toots from zbyte64@awful.systems

Written by Milk_Sheikh@lemm.ee on 2025-01-06 at 18:53

A monologue that lasts SIXTY PAGES of dry exposition. Barely credible characterization of the protagonist and villains, and extremely poor world-building.

Anthem is her better book because it keeps to a simple short-story format, but it still has a very dull plot that shoehorns ideology throughout. There are far better philosophical fiction writers out there, like Camus, Vonnegut, or Koestler. Skip Rand altogether imo

=> More information about this toot | More toots from Milk_Sheikh@lemm.ee

Written by sp3ctr4l on 2025-01-07 at 02:07

Ayn Rand is about spending your whole life moralizing a social philosophy based on the impossibility of altruism, perfect meritocratic achievement perfectly distributing wealth, and hatred of government taxation, regulation, and social programs…

… and then dying alone, almost totally broke, living off of social security and financial charity from your former secretary.

=> More information about this toot | More toots from sp3tr4l@lemmy.zip

Written by brbposting@sh.itjust.works on 2025-01-06 at 15:15

I’m guessing that means a team or someone presented their pricing analysis to him, and suggested a price range. And this is his way of taking responsibility for making the final judgment call.

(He’d get blamed either way, anyways)

=> More information about this toot | More toots from brbposting@sh.itjust.works

Written by zbyte64@awful.systems on 2025-01-06 at 18:29

While the words themselves come near an apology, I didn’t read it as taking responsibility. I read it as:

Anyone could have made this same mistake. In fact, dumber people than I would surely have done worse.

=> More information about this toot | More toots from zbyte64@awful.systems

Written by David Gerard on 2025-01-07 at 01:11

$20/mo sounds like a reasonable subscription-ish price, so he picked that. That OpenAI loses money on every query, well, let’s build up volume!

=> More information about this toot | More toots from dgerard@awful.systems

Written by froztbyte@awful.systems on 2025-01-06 at 16:13

far, far, far, far, far, far, far fewer business people than you’d expect/guess are data-driven decision makers

and then there’s the whole bayfucker ceo dynamic which adds a whole bunch of extra dumb shit

it’d be funnier if it weren’t for the tunguska-like effect it’s having on human society both at present and in the coming decades to follow :|

=> More information about this toot | More toots from froztbyte@awful.systems

Written by littlewonder@lemmy.world on 2025-01-07 at 03:21

I endorse this as a data professional. It’s maddening.

=> More information about this toot | More toots from littlewonder@lemmy.world

Written by stoly@lemmy.world on 2025-01-07 at 03:37

There’s a reason so many companies fail

=> More information about this toot | More toots from stoly@lemmy.world

Written by froztbyte@awful.systems on 2025-01-07 at 05:13

there is, but this isn’t (the primary) it tbh

=> More information about this toot | More toots from froztbyte@awful.systems

Written by IMongoose@lemmy.world on 2025-01-07 at 02:41

It works for iced tea and hot dogs, why not AI? (I jest)

=> More information about this toot | More toots from IMongoose@lemmy.world

Written by azertyfun@sh.itjust.works on 2025-01-08 at 00:48

In tech? Kinda, yeah. When a subscription is 14.99 $£€/month, it’s a clear “we just think it’s what people think is a fair price for SaaS”.

The trick is that tech usually works on really weird economics where the fixed costs (R&D) are astonishingly high and the marginal costs (servers etc) are virtually nil. That’s how successful tech companies are so profitable, even more than oil companies, because once the R&D is paid off every additional user is free money. And this means that companies don’t have to be profitable any time in particular as long as they promise sufficient projected growth to make up for being a money pit until then. You can get away with anything when your investors believe you’ll eventually have a billion users.

… Of course that doesn’t work when every customer interaction actually costs a buck or two in GPU compute, but I’m sure after a lot of handwaving they were able to explain to their investors how this is totally fine and totally sustainable and they’ll totally make their money back a thousandfold.
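The contrast between the two cost structures described above can be sketched with made-up numbers (none of these figures are real; they only illustrate the shape of the problem):

```python
# Toy comparison of two cost structures: classic SaaS has a big fixed R&D
# bill and near-zero marginal cost, so scale pays it off; GPU-backed
# inference keeps a real per-user serving cost, so when price is below
# marginal cost, more users only deepen the loss.

def monthly_profit(users: int, price: float,
                   marginal_cost: float, fixed_cost: float) -> float:
    """Profit = per-user margin times users, minus fixed costs."""
    return users * (price - marginal_cost) - fixed_cost

# Classic SaaS: $15/user, pennies of serving cost, $5M/month fixed costs.
saas = monthly_profit(1_000_000, 15.0, 0.05, 5_000_000)

# GPU-heavy service: same price, but each user costs ~$20/month to serve.
gpu = monthly_profit(1_000_000, 15.0, 20.0, 5_000_000)

print(saas)  # positive: scale amortizes the fixed cost
print(gpu)   # negative: scale makes the hole deeper
```

The break-even condition is simply price > marginal cost; no amount of projected user growth fixes a negative per-user margin.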

=> More information about this toot | More toots from azertyfun@sh.itjust.works

Written by ramble81@lemm.ee on 2025-01-06 at 13:18

Never offer unlimited on a utility model without guardrails. That’s just business 101.

=> More information about this toot | More toots from ramble81@lemm.ee

Written by AItoothbrush on 2025-01-06 at 13:32

Okay, everyone should create an OpenAI account and start feeding it shit. Ask it the most braindead questions ever; if they use your questions as training data, it’ll just fuck the next model up even more.

=> More information about this toot | More toots from AI_toothbrush@lemmy.zip

Proxy Information
Original URL
gemini://mastogem.picasoft.net/thread/113781030418918091

This content has been proxied by September (3851b).