Ancestors

Toot

Written by joël on 2025-01-29 at 13:03

"prompt engineering" is just telling yourself a story what the text extruder is doing before/while extruding text.

which, in most cases, is you yourself putting the googly eyes on the machine that the programmers want you to see as an "intelligence" (which means taking no responsibility)

=> More information about this toot | More toots from jollysea@chaos.social

Descendants

Written by Pol on 2025-01-29 at 13:34

@jollysea Well, isn't it "starting a good text, so that the extruder extrudes useful text"? (btw, I would disagree with "want to see as an intelligence". Current AI is not that far off from 'human' intelligence; the problem is that it is far away from the reliability of algorithmic computing.)

=> More information about this toot | More toots from pol_9000@mastodon.opencloud.lu

Written by hambier on 2025-01-29 at 14:25

@pol_9000 @jollysea IMHO the problem with LLMs is that they're already surpassing a sizable proportion of the population in, yes I'll say it, intelligence. Just look at DeepSeek-R1's detailed thought process if you give it some more or less intricate problem to solve.

It's a machine alright, but one that is better at analytical thinking than many humans.

We should focus our fear/anger on what it will do to society, not on denial.

=> More information about this toot | More toots from hambier@mastodon.opencloud.lu

Written by joël on 2025-01-29 at 16:06

@hambier @pol_9000 I think we can agree to disagree here. I would not consider what LLMs do to be intelligent.

The fact that there are jobs that are replaceable by a mediocre text generator is, of course, another question.

=> More information about this toot | More toots from jollysea@chaos.social

Written by hambier on 2025-01-29 at 16:45

@jollysea @pol_9000 Disagreement is fine.

We've all got our own experiences that shape the way we think about those questions.

=> More information about this toot | More toots from hambier@mastodon.opencloud.lu

Written by Pol on 2025-01-29 at 17:00

@jollysea @hambier wait... I think the issue is only the definition of "intelligent". LLMs for sure are not perfectly logical, all-knowing machines like the AIs in science fiction, and they will not become that. They imitate human speech based on a broad training set of texts of random quality, and they would probably be indistinguishable from real humans in a double-blind test.

=> More information about this toot | More toots from pol_9000@mastodon.opencloud.lu

Written by hambier on 2025-01-29 at 17:24

@pol_9000 @jollysea Yes, I had a post ready asking for your viewpoints on what to consider intelligent but I didn't want to get into lengthy discussions or give the impression that I'm trying to argue.

It's all about the definition.

Isn't learning pretty much similar to LLM training? Is the average person doing more than pattern recognition and saying what seems plausible based on learning/training? (What proportion of people meets the definition?)

=> More information about this toot | More toots from hambier@mastodon.opencloud.lu

Written by joël on 2025-01-29 at 17:27

@hambier @pol_9000 I started a reply about two times and always reminded myself that I should do something else.

But: I value your input, I like that we're having an interesting discussion, and hopefully I'll remember to come back to this tomorrow. :D

=> More information about this toot | More toots from jollysea@chaos.social

Proxy Information
Original URL
gemini://mastogem.picasoft.net/thread/113911778242813799

This content has been proxied by September (3851b).