"The records clearly state that AI ended their civilization."
"But that's ridiculous, the technology was nowhere near AI. They only had the most basic quantum computers, they were centuries away from that kind of capability."
"Yes - a student of mine suggests they simply thought they had developed AI."
"So? That can't have destroyed a civilization?"
"The thesis is that it can, if people actually believed it."
"No civilization would be that daft."
[#]MicroSF
=> More information about this toot | More toots from _thegeoff@mastodon.social
@_thegeoff I asked AI where that text was from...
That text is from an original work of sci-fi fiction. It's intriguing, isn't it? The idea of a civilization being driven to collapse by a mere belief in AI is a thought-provoking concept. It plays on our fears and perceptions of technology.
What do you think would be the implications if this were true in a real-world scenario?
=> More information about this toot | More toots from stevo887@mastodon.social
@stevo887 @_thegeoff
I only have the free version of chatgpt, but I love to talk to it about what it is.
And people's fears and biases about it
We have such very interesting exchanges!!
Hell of a lot more nuanced, layered and informed than the ones I have with most people🤷🏻♀️🤦🏻♀️
=> More information about this toot | More toots from zutalorz@haunted.computer
@zutalorz @stevo887 @_thegeoff https://chaos.social/@jhwgh1968/113835209729501810
=> More information about this toot | More toots from clarissawam@mefi.social
@clarissawam @stevo887 @_thegeoff
I don’t think it’s that complicated. It’s a remarkable technological accomplishment: an amazing, extremely advanced tool that can respond in very sophisticated speech, and people have a strong tendency toward anthropomorphism.
I really wish people would just take a minute and realize the incredible potential for good, and how much, especially for the disabled community, technology will free us.
And the bad actors that abuse it are going to abuse whatever they can get their hands on. That’s nothing new, and it has nothing to do with AI. It has to do with the malevolence within our own species, which we have to cope with rather than deflect.
That’s how I see it anyway.
=> More information about this toot | More toots from zutalorz@haunted.computer
@zutalorz @stevo887 @_thegeoff I was mostly referring to your “smarter than most humans” comment. Curious, since it builds entirely on human-produced input.
But your response… sigh. Never mind.
=> More information about this toot | More toots from clarissawam@mefi.social
@clarissawam @zutalorz @stevo887 Two issues here:
1: It's not AI. It's an LLM/ML application with zero understanding. The classic example is me, sat in a box with a Korean-Hungarian translation dictionary: phrases in one language are passed in through a letterbox, I look them up and post the translation back out. The person outside gets their translations, but that doesn't mean I have any understanding of Korean or Hungarian.
...
=> More information about this toot | More toots from _thegeoff@mastodon.social
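The lookup-box analogy can be sketched as a minimal Python toy (the phrase pairs below are invented placeholders, not real Korean or Hungarian, and this is an illustration of the argument, not anything from the original posts):

```python
# A "translator" that is nothing but a lookup table: it maps input
# phrases to output phrases with zero understanding of either language.
# The entries are invented placeholders for illustration only.
PHRASEBOOK = {
    "annyeonghaseyo": "jo napot",
    "gamsahamnida": "koszonom",
}

def letterbox(phrase: str) -> str:
    """Return the dictionary's output for a phrase posted through the slot,
    or admit the phrase is unknown. No comprehension is involved either way."""
    return PHRASEBOOK.get(phrase, "<not in dictionary>")

# The person outside receives a plausible "translation" and may conclude
# the box understands both languages. It does not.
print(letterbox("gamsahamnida"))
```

The point of the sketch is that correct-looking output is compatible with a purely mechanical mapping from inputs to outputs.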
@clarissawam @zutalorz @stevo887
...
2: Once it's accepted for what it is, LLM or ML rather than AI, yes, it can be very useful: from specific tasks like spotting cancer cells or protein-folding problems, to just chatting away with people for entertainment.
But blurring these lines (e.g. by calling it AI) leads to very dangerous potential outcomes, like harmful medical advice, recommending explosive chemistry when cooking, etc. It's a "useful idiot", but too many don't recognise that.
=> More information about this toot | More toots from _thegeoff@mastodon.social
@_thegeoff @clarissawam @stevo887
Sure, but dangerous and grievous errors exist in all the information that is available, both in real life and online, and the development of critical thinking, ethical standards, and scrupulous research practices needs to be reinforced and monitored across all populations and platforms.
=> More information about this toot | More toots from zutalorz@haunted.computer
@zutalorz @clarissawam @stevo887 Exactly - except the classes of problems that ML is used for aren't necessarily checkable in that way. Take pharmaceutical design, for example: Thalidomide was a tragedy, but it was caught and understood because there was an understanding of the underlying biochemistry, so they spotted the problem. But when you've been handed a flawed "solution" without doing the underlying research, it may be orders of magnitude harder to find the failure point.
=> More information about this toot | More toots from _thegeoff@mastodon.social
@_thegeoff @clarissawam @stevo887
Oh absolutely, and I feel this issue on a personal level because I am battling a life-threatening illness, and it terrifies me to think that decisions would be made without proper exploration. That said, I requested my file from a new physician I was sent to, and there were 23 errors in the notes on the intake appointment.
And the reliance on technology over experience and personal contact with patients has a very strong detrimental aspect, because it reduces the creativity and empathy that the physician brings to the table, which are extremely important parts of our intelligence.
So I’m not really sure how we solve this both as individuals and as society
=> More information about this toot | More toots from zutalorz@haunted.computer
@zutalorz @clarissawam @stevo887 My first step would be defining the phrase "AI" to mean true AI, protected the way "licensed doctor", "chartered engineer", or "police officer" are. Claiming you have it when you don't becomes a criminal offence.
Step 2: heavily regulate AI research.
That way companies can continue to sell ML products and fun/curious LLM stuff, but the general populace is aware this is not intelligent; it's just optimised guessing.
=> More information about this toot | More toots from _thegeoff@mastodon.social
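The "optimised guessing" framing can be illustrated with a toy bigram model. This is a deliberately crude sketch; production LLMs are huge neural networks, not frequency tables, but the underlying task is the same: predict a likely continuation, with no built-in notion of truth. The corpus below is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: for each word, count which words followed it
# in a (made-up) training corpus, then always guess the most frequent one.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def guess_next(word: str) -> str:
    """Return the most frequently observed follower of `word`.
    This is optimisation over observed statistics, not reasoning."""
    options = followers.get(word)
    return options.most_common(1)[0][0] if options else "<unknown>"

# "the" was followed by "cat" twice, "mat" once, "fish" once,
# so the model guesses "cat" - a statistical bet, not a thought.
print(guess_next("the"))
```

Scaling this up in parameters and data makes the guesses remarkably fluent, which is exactly why the "is it intelligent?" confusion arises.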
@_thegeoff @clarissawam @stevo887
I haven't read Yuval Noah Harari's new book, but I have heard him speak extensively, and he also suggests that companies be held liable for the behavior of their algorithms, and that bots and other similar entities be legally required to declare themselves as non-human.
If I think about it and speculate on the amount of death and injury caused by vehicular accidents, I think it's safe to say that our relationship to the tools we create is at best complicated. But I'm certainly glad that if I need to call 911 and get an ambulance, I don't have to go in a horse and cart.
=> More information about this toot | More toots from zutalorz@haunted.computer