Ancestors

Toot

Written by Simon Willison on 2025-01-10 at 01:44

Risky post... here are my 1, 3 and 6 year predictions for LLMs/AI, expanded from my appearance on the Oxide and Friends podcast

(My confidence in my ability to predict the future is extremely low)

https://simonwillison.net/2025/Jan/10/ai-predictions/

=> More information about this toot | More toots from simon@simonwillison.net

Descendants

Written by Simon Willison on 2025-01-10 at 01:45

I am SO MUCH more comfortable writing about things that have actually happened as opposed to pontificating on whatever the next few months and years might bring https://simonwillison.net/2024/Dec/31/llms-in-2024/

=> More information about this toot | More toots from simon@simonwillison.net

Written by Sara on 2025-01-11 at 07:10

@simon I read your article and it was interesting! I used to be intrigued by AI and machine learning, but the LLM hype bubble has turned me into a skeptic because I haven’t seen any applications that seem worth the cost and ethical concerns. All I see are neat toys, snake oil, or dystopian nightmare fuel. You seem very educated in the space—do you by any chance have suggestions of where to read about genuine “good applications”?

=> More information about this toot | More toots from fabrefact@xoxo.zone

Written by Sara on 2025-01-11 at 11:43

@simon I did remember the one solid application of LLMs I’ve encountered: machine translation. Somewhat tarnished for me in that it has led to significant job loss for human translators.

=> More information about this toot | More toots from fabrefact@xoxo.zone

Written by Simon Willison on 2025-01-12 at 01:59

@fabrefact I find that example particularly interesting, because on the one hand it absolutely SUCKS for people in that profession - one of the clearest examples of job losses and salary reductions due to transformer-based AI technology so far

But at the same time, literally billions of people who could never afford a human translator now have the ability to communicate across language barriers which they previously lacked. That's an incredible value for human society!

=> More information about this toot | More toots from simon@simonwillison.net

Written by Adrian Schönig :kangaroo: on 2025-01-10 at 03:49

@simon That prompted me to ask ChatGPT to give 1, 3 and 6 year predictions for LLMs, and then for toothbrushes and kitchen stoves. Apparently AI will be everywhere in 3-6 years, with your toothbrush doing all cleaning autonomously, and stoves will get robotic arms and be self-cooking, too.

=> More information about this toot | More toots from nighthawk@aus.social

Written by Adrian Hon on 2025-01-10 at 09:31

@simon I'd bet on the Pulitzer and the amazing art. Less so on the civil unrest, mostly because I can't bring myself to see how the jobs AI do in only six years would be that widespread; but maybe "white collar" workers will surprise me.

=> More information about this toot | More toots from adrianhon@mastodon.social

Written by Simon Willison on 2025-01-10 at 13:00

@adrianhon yes, my dystopian one is predicated on IF we get something that can do most of the jobs that people do

I don't think that will happen, and I hope it doesn't happen

=> More information about this toot | More toots from simon@simonwillison.net

Written by Adrian Hon on 2025-01-10 at 13:19

@simon Robotics! (apparently)

=> More information about this toot | More toots from adrianhon@mastodon.social

Written by Scott Klein on 2025-01-10 at 18:57

@simon I think the Pulitzer prediction already came true. https://www.niemanlab.org/2024/05/for-the-first-time-two-pulitzer-winners-disclosed-using-ai-in-their-reporting/

=> More information about this toot | More toots from kleinmatic@journa.host

Written by Simon Willison on 2025-01-10 at 21:49

@kleinmatic hah! That's two of my predictions down already

=> More information about this toot | More toots from simon@simonwillison.net

Written by Chris Zubak-Skees on 2025-01-11 at 07:28

@simon @kleinmatic I do wonder if those two examples are cases of existing machine learning techniques being labeled AI rather than novel uses of LLMs/generative models. One used labeled data to train a topic model first built in 2021; the other labeled examples in a commercial satellite object detection platform from a company founded in 2016. Certainly I can think of uses for generatively pre-trained models in either case, but I wonder if both were more traditional ML.

=> More information about this toot | More toots from zubakskees@mastodon.social

Written by Bill Seitz on 2025-01-10 at 23:42

@kleinmatic @simon "We didn’t use AI to replace what would’ve otherwise been done manually. We used AI precisely because it was the type of task that would’ve taken so long to do manually that [it would distract from] other investigative work"

see, everybody's a fuckin rationalizing capitalist

=> More information about this toot | More toots from billseitz@toolsforthought.social

Written by Bill Seitz on 2025-01-10 at 23:43

@kleinmatic @simon those sound like "machine learning" not GenAI (ok the visual analyst is at-risk)

=> More information about this toot | More toots from billseitz@toolsforthought.social

Written by Simon Willison on 2025-01-11 at 00:35

@billseitz @kleinmatic yeah, it's not clear if the Missing in Chicago "machine learning model" was based on LLMs - given that the story was published mid-2023, I'd be surprised if they used LLMs to analyze the police reports; the models back then weren't nearly as powerful as 2024-era models (much shorter context length, no multi-modal support)

=> More information about this toot | More toots from simon@simonwillison.net

Written by Bill Seitz on 2025-01-11 at 00:43

@simon @kleinmatic also, whether it's LLM or "classic" AI, I'd want to know how humans checked the results.

=> More information about this toot | More toots from billseitz@toolsforthought.social

Written by Simon Willison on 2025-01-11 at 04:11

@billseitz @kleinmatic that's one of the reasons I trust journalists with this stuff: they know how to work with untrusted sources and already have a very strong culture of fact checking and editorial standards

=> More information about this toot | More toots from simon@simonwillison.net

Written by Martijn Faassen on 2025-01-10 at 22:37

@simon

Now that we know you are way too pessimistic about the timeline yet the events you predict come true early, massive unrest about AGI is imminent.

I thought you were too pessimistic about AI art too - there is a lot of creativity in the ComfyUI space. It's a different kind of creativity, and not art itself, but it's very creative nonetheless.

=> More information about this toot | More toots from faassen@fosstodon.org

Written by Beady Belle Fanchannel on 2025-01-11 at 20:14

@simon Great podcast episode! I love the format.

Hopeful countersuggestion to your “agent” prognosis: agents are gonna be a thing, based on a revival of capability-based systems, where every piece of code you run has to be provided a “proof token” showing that you are allowed to execute it.

=> More information about this toot | More toots from Profpatsch@mastodon.xyz

Written by Beady Belle Fanchannel on 2025-01-11 at 20:22

@simon Not gonna happen within one year though, probably on a timeframe of 3–8 years.

=> More information about this toot | More toots from Profpatsch@mastodon.xyz

Proxy Information
Original URL
gemini://mastogem.picasoft.net/thread/113801528840722695