guess again
what the locals are probably taking issue with is:
> If you want a more precise model, you need to make it larger.
this shit doesn’t get more precise at its advertised purpose when you scale it up. LLMs are garbage technology that plateaued a long time ago and are extremely ill-suited for anything but generating spam. any claims of increased precision (like the ones OpenAI makes every time they need more money or attention) are marketing that falls apart the moment you dig deeper, unless you’re the kind of promptfondler who needs LLMs to be good and workable just because it’s technology and because you’re all-in on the grift