Ancestors

Written by db0 on 2025-01-06 at 10:52

OpenAI is so cooked and I'm all here for it

https://lemmy.dbzer0.com/post/34984867

=> View attached media

=> More information about this toot | More toots from db0@lemmy.dbzer0.com

Written by MNByChoice@midwest.social on 2025-01-06 at 13:04

CEO personally chose a price too low for the company to be profitable.

What a clown.


Written by where_am_i@sh.itjust.works on 2025-01-06 at 23:23

Well, yes. But this is also an extremely difficult product to price. $200/month is already insane, and now you're suggesting they should've priced it even more aggressively. It could turn out almost nobody would use it. The optimal price here is a tricky guess.

Although they probably should've sold a "limited" subscription: one that gives you up to the break-even number of queries per month, or maybe 2x that, but not 100x or unlimited. Otherwise, exactly what happened can happen.


Written by stoly@lemmy.world on 2025-01-07 at 03:40

The real problem is believing that you can run a profitable LLM company.


Written by Saledovil@sh.itjust.works on 2025-01-07 at 11:19

What the LLMs do, at the end of the day, is statistics. If you want a more precise model, you need to make it larger. Basically, exponentially scaling marginal costs meet exponentially decaying marginal utility.
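The diminishing-returns argument above can be sketched numerically. As a hedged illustration only (the power-law shape echoes published neural scaling-law results, but the constants and exponent here are made up), loss falls as a power of parameter count N while cost grows roughly linearly in N, so each 100x increase in size buys a smaller absolute improvement:

```python
# Toy illustration of diminishing returns from model scale.
# Assumed form: loss(N) = a * N^(-alpha); a and alpha are invented here.
def loss(n_params: int, a: float = 1.0, alpha: float = 0.1) -> float:
    """Hypothetical power-law loss as a function of parameter count."""
    return a * n_params ** (-alpha)

for n in (10**6, 10**8, 10**10, 10**12):
    # Each 100x jump in size (and roughly in cost) shaves off
    # a smaller absolute amount of loss than the previous jump.
    print(f"N={n:.0e}  loss={loss(n):.3f}")
```

Each successive gap in the printed losses is smaller than the last, which is the "exponentially decaying marginal utility" half of the claim.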


Written by stoly@lemmy.world on 2025-01-07 at 15:51

Some LLM bros must have seen this comment and become offended.


Written by self@awful.systems on 2025-01-07 at 16:05

guess again

what the locals are probably taking issue with is:

If you want a more precise model, you need to make it larger.

this shit doesn’t get more precise for its advertised purpose when you scale it up. LLMs are garbage technology that plateaued a long time ago and are extremely ill-suited for anything but generating spam; any claims of increased precision (like those that openai makes every time they need more money or attention) are marketing that falls apart the moment you dig deeper — unless you’re the kind of promptfondler who needs LLMs to be good and workable just because it’s technology and because you’re all-in on the grift


Toot

Written by Saledovil@sh.itjust.works on 2025-01-07 at 16:51

Well, then let me clear it up. The statistics become more precise. As in, for a given prefix A and token x, the difference between the model's estimated probability of x following A and the actual probability P(x|A) becomes smaller. Obviously, if you are dealing with a novel problem, the LLM can't produce a meaningful answer. And if you're working on a halfway ambitious project, you're virtually guaranteed to encounter a novel problem.
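The quantity P(x|A) being discussed can be made concrete with a toy count-based sketch. This is not how an LLM works internally (real models use neural networks over long contexts); it just shows the statistical object, a conditional next-token probability, estimated from a tiny invented corpus:

```python
from collections import Counter, defaultdict

# Tiny invented corpus; the prefix here is just the single previous token.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each one-token prefix.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def p(next_token: str, prefix: str) -> float:
    """Estimated P(next_token | prefix) from raw counts."""
    counts = follows[prefix]
    total = sum(counts.values())
    return counts[next_token] / total if total else 0.0

print(p("cat", "the"))  # "the" is followed by cat, mat, cat, fish -> 0.5
```

With more data the counts (and, for a neural model, its learned estimates) approach the true conditional probabilities; for a genuinely novel prefix there are no counts at all, which is the commenter's point about novel problems.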


Descendants

Written by self@awful.systems on 2025-01-07 at 17:02

Obviously, if you are dealing with a novel problem, then the LLM can’t produce a meaningful answer.

it doesn’t produce any meaningful answers for non-novel problems either


Proxy Information
Original URL
gemini://mastogem.picasoft.net/thread/113788103984505271
Status Code
Success (20)
Meta
text/gemini