how much more hype could you take away from the OpenAI valuation if relatively technical people like my colleagues understood that an LLM has no ability to learn beyond the initial, super-costly model training?
=> More information about this toot | More toots from pony@blovice.bahnhof.cz
@pony shhhh no, AI is MAGICAL. It does NOT involve employing sweaty nerds to tell it when it's being retarded a million times before it starts making some sort of sense consistently. It's SUPER FUTURISTIC CYBERPUNK magic. Not a decision tree built out of nerd sweat and company expensed late night pizza orders.
=> More information about this toot | More toots from b8tt3ryl8d@mstdn.jp
@b8tt3ryl8d it definitely isn't a decision tree
=> More information about this toot | More toots from pony@blovice.bahnhof.cz
@pony @b8tt3ryl8d if chatgpt can maintain some context, couldn't that be used to learn something, if it were persistent?
=> More information about this toot | More toots from piggo@piggo.space
@pony @b8tt3ryl8d it's not like the human brain is some mystical thing totally different from how these things work, it's just an even bigger model
=> More information about this toot | More toots from piggo@piggo.space
@piggo @b8tt3ryl8d context really is just the amount of input you allow, and it's where the computational cost of the whole thing grows quickly... you can make it "remember" things there (the web chat frontend obviously stuffs as much of the previous conversation into it as will fit, for example), but you are not actually training anything
=> More information about this toot | More toots from pony@blovice.bahnhof.cz
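The point above can be sketched in a few lines of Python. This is a hypothetical stub, not a real LLM API: `fake_llm` stands in for a frozen model, and `MAX_CONTEXT_CHARS` stands in for a token limit. It shows that chat "memory" is just re-sending prior turns inside the prompt on every call; the model itself is a pure function whose weights never change.

```python
MAX_CONTEXT_CHARS = 200  # stand-in for a token limit; cost grows with context size

def fake_llm(prompt: str) -> str:
    """Stub for a frozen model: a pure function of its input, no internal state."""
    if "my name is Ada" in prompt and "what is my name" in prompt:
        return "Your name is Ada."
    return "I don't know."

def chat(history: list[str], user_msg: str) -> str:
    """One chat turn: pack as much prior conversation as fits into the
    context window, then make a single stateless call to the model."""
    history.append(user_msg)
    prompt = "\n".join(history)[-MAX_CONTEXT_CHARS:]  # truncate to the window
    reply = fake_llm(prompt)
    history.append(reply)
    return reply

history: list[str] = []
chat(history, "my name is Ada")
# Answered only because the earlier fact is still inside the prompt,
# not because anything was learned:
print(chat(history, "what is my name"))
```

If the earlier turn falls outside the truncated window (or a fresh `history` is used), the "memory" is gone, which is exactly the difference between context and training.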
@pony @b8tt3ryl8d also i still don't understand how it can pull answers from the internet, including correct reference links, if it were just a word prediction machine
=> More information about this toot | More toots from piggo@piggo.space