Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 6 October 2025
https://awful.systems/post/2505486
=> More information about this toot | More toots from blakestacey@awful.systems
Following up from this truth bomb: awful.systems/comment/4877052
@Soyweiser: Sorry AGIbros, not even the Dutch believe AGI is near.
For your delectation, here are the HN comments
I’m in the other camp: I remember when we thought an AI capable of solving Go was astronomically impossible and yet here we are. This article reads just like the skeptic essays back then.
Ah yes my coworkers communicate exclusively in Go games and they are always winning because they are AI and I am on the street, poor.
There’s not that much else to sneer at though, plenty of reasonable people.
Here’s the lobste.rs discussion: lobste.rs/s/4xzxqk
=> More information about this toot | More toots from gerikson@awful.systems
The best thing about the lobste.rs thread is that it lets you identify the prompt fondlers among the brethren.
Here’s something I’ve never heard of before:
en.wikipedia.org/wiki/Moravec's_paradox
Moravec wrote in 1988: “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers[…]”
Apparently he had GPT back then!
Anyway is this anything anyone takes seriously? Steven Pinker makes an appearance in the wiki page, which is a bit of a red flag.
=> More information about this toot | More toots from gerikson@awful.systems
So to throw my totally-amateur two cents in, it seems like it’s definitely part of the discussion in actual AI circles based on the for-public-consumption reading and viewing I’ve done over the years, though I’ve never heard it mentioned by name. I think a bigger part of the explanation has less to do with human cognition (it’s probably fallacious to assume that AI of any method effectively reproduces those processes) and more to do with the more abstract cognitive tests and games being much more formally defined. Our perception and model of a game of Chess or Go may not be complete enough to solve the game, but it is bounded by the explicitly-defined rules of the game. If your opponent tries to work outside of those bounds by, say, flipping the board over and storming off, the game itself can treat that as a simple forfeit-by-cheating.

But our understanding of the real world is not similarly bounded. Things that were thought to be impossible happen with impressive frequency, and our brain is clearly able to handle this somehow. That lack of boundedness requires different capabilities than just being able to operate within expected parameters like existing English GenAI or image generators, I suspect relating to handling uncertainty or lacking information. The assumption that what AI is doing is a mirror to the living mind is wholly unproven.
=> More information about this toot | More toots from YourNetworkIsHaunted@awful.systems
Moravec’s Paradox is actually more interesting than it appears. You don’t have to take his reasoning or Pinker’s seriously, but the observation is salient. The paradox also gets stated in other ways by other scientists; it’s a common theme.
One way I often think about it: in order for you to survive, the intelligence of moving in unknown spaces and managing numerous fuzzy energy systems is way more important to prioritize and master than, like, the abstract conceptual spaces that are both not full of calories and are also cheaper to externalize anyway.
It’s part of why I don’t think there is a globally coherent hierarchy of intelligence, or potentially even general intelligence at all. Just the distances and spaces that a thing occupies, and the competencies that define being in that space.
=> More information about this toot | More toots from imadabouzu@awful.systems
Yeah, it’s a real thing that happens when programming robots. Kinematics is more difficult than route planning, for example.
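(Editor’s aside: a toy sketch of the contrast corbin mentions, not anything from the thread. The 2-link arm and the grid map are made-up examples; the point is that route planning on a bounded grid is a few lines of breadth-first search, while even the simplest closed-form inverse kinematics already involves trig, reachability checks, and multiple solutions.)

```python
import math
from collections import deque

def ik_two_link(x, y, l1=1.0, l2=1.0):
    """Closed-form inverse kinematics for a planar 2-link arm.

    Returns joint angles (t1, t2) for the elbow-down solution;
    even this toy case needs a reachability check and has a
    second (elbow-up) solution we silently discard.
    """
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(c2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2

def fk_two_link(t1, t2, l1=1.0, l2=1.0):
    """Forward kinematics: end-effector position for given joint angles."""
    return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
            l1 * math.sin(t1) + l2 * math.sin(t1 + t2))

def bfs_route(grid, start, goal):
    """Shortest path on a 4-connected grid; '#' cells are blocked.

    The whole problem is bounded by the grid, so plain BFS suffices.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in prev):
                prev[(nr, nc)] = cur
                q.append((nr, nc))
    return None
```

The planner works entirely inside the rules of its grid-world; the kinematics code is already fighting geometry, and a real arm adds joint limits, dynamics, and sensor noise on top.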
=> More information about this toot | More toots from corbin@awful.systems